Functional and Structural Insights into ASB2α, a Novel Regulator of Integrin-dependent Adhesion of Hematopoietic Cells* By providing contacts between hematopoietic cells and the bone marrow microenvironment, integrins are implicated in cell adhesion and thereby in control of cell fate of normal and leukemia cells. The ASB2 gene, initially identified as a retinoic acid responsive gene and a target of the promyelocytic leukemia retinoic acid receptor α oncoprotein in acute promyelocytic leukemia cells, encodes two isoforms, a hematopoietic-type (ASB2α) and a muscle-type (ASB2β), that are involved in hematopoietic and myogenic differentiation, respectively. ASB2α is the specificity subunit of an E3 ubiquitin ligase complex that targets filamins for proteasomal degradation. To examine the relationship of the ASB2α structure to E3 ubiquitin ligase function, functional assays and molecular modeling were performed. We show that ASB2α, through filamin A degradation, enhances adhesion of hematopoietic cells to fibronectin, the main ligand of β1 integrins. Furthermore, we demonstrate that a short N-terminal region specific to ASB2α, together with ankyrin repeats 1 to 10, is necessary for association of ASB2α with filamin A. Importantly, the ASB2α N-terminal region comprises a 9-residue segment with predicted structural homology to the filamin-binding motifs of migfilin and β integrins. Together, these data provide new insights into the molecular mechanisms of ASB2α binding to filamin. Characterization of hematopoietic stem cell and leukemia stem cell properties and understanding the molecular mechanisms that control the early steps of hematopoietic differentiation, which are deregulated in leukemia cells, are major challenges. These issues are relevant not only for the development of therapeutic approaches to target leukemia stem cells in vivo, but also for engraftment of normal hematopoietic stem cells following transplantation. Hematopoietic stem cells reside predominantly in a complex bone marrow microenvironment, the stem cell niche (1,2). Hematopoietic stem cell fate decisions are governed by the integrated effects of niche-independent intrinsic and niche-dependent extrinsic signals. Recent studies indicate that changes in the hematopoietic stem cell niche may have a role in hematopoietic malignancies (3)(4)(5). Available data show that during development and following transplantation, integrin adhesion molecules play a major role in anchoring stem cells in the hematopoietic niche (6-8). Although adhesion of CD34-positive cells to fibronectin inhibits cell proliferation, adhesion of acute myeloid leukemia (AML) cells to fibronectin stimulates proliferation (9). In addition, AML cells showed increased survival as a result of the interaction of β1 integrins (α4β1/VLA-4 and α5β1/VLA-5) on leukemia cells with fibronectin, leading to their reduced chemosensitivity (10,11). Accordingly, antibodies directed against VLA-4 prolong survival of mice in a bone marrow minimal residual disease model (10). Moreover, the FNIII14 peptide of fibronectin that impairs VLA-4- and VLA-5-mediated adhesion to fibronectin overcomes cell adhesion-mediated drug resistance (12). In this context, proteins controlling integrin-dependent adhesion of hematopoietic cells may represent novel therapeutic targets in AML. 
Our previous work identified ASB2 as a retinoic acid response gene and a target gene for the oncogenic promyelocytic leukemia retinoic acid receptor α (PML-RARα) fusion protein in acute promyelocytic leukemia cells (13,14). Expression of PML-RARα has been shown to induce the myeloid differentiation arrest observed in acute promyelocytic leukemia (15)(16)(17)(18). At the molecular level, PML-RARα acts as a transcriptional repressor that interferes with gene expression programs normally leading to full myeloid differentiation. Recently, PML-RARα was shown to be bound to the ASB2 promoter in acute promyelocytic leukemia cells in the absence of retinoic acid, leading to hypoacetylation of histone H3 (19). Moreover, following retinoic acid treatment of acute promyelocytic leukemia cells, hyperacetylation and recruitment of RNA polymerase II to the ASB2 promoter were observed (19). Furthermore, ASB2 is also a target of another oncoprotein that acts as a transcriptional repressor, the AML1-ETO fusion protein, indicating that ASB2 mis-expression is associated with AML. However, ASB2 is specifically expressed in normal immature hematopoietic cells (13,14) and so is likely to be relevant during early hematopoiesis. Importantly, Notch activation stimulated ASB2 expression (20). ASB2 encodes two isoforms, a hematopoietic-type (ASB2α) and a muscle-type (ASB2β), that are involved in hematopoietic and myogenic differentiation, respectively (21,22). ASB2 proteins belong to the family of ASB proteins that harbor a variable number of ankyrin repeats (ANK) followed by a suppressor of cytokine signaling box located at the C-terminal end of the protein (23). These proteins are the specificity subunits of E3 ubiquitin ligase complexes (21,22). Indeed, suppressor of cytokine signaling box-mediated interactions with the Elongin B-Elongin C (EloB-EloC) complex and the Cul5/Rbx2 module allow ASB2 proteins to assemble a multimeric E3 ubiquitin ligase complex, and so regulate the turnover of specific proteins involved in cell differentiation. We have recently shown that ASB2α ubiquitin ligase activity drives proteasome-mediated degradation of the actin-binding proteins filamin A (FLNa), FLNb, and FLNc (24,25). In addition to their role as actin cross-linkers, FLNs bind many adaptor and transmembrane proteins (26-28). In this way, FLNs can regulate cell shape and cell motility. We have demonstrated that ASB2α-mediated degradation of FLNs can regulate integrin-mediated spreading of adherent cells and initiation of migration of both HT1080 and Jurkat cells (24,25,29). FLNs are composed of an N-terminal actin-binding domain followed by 24 immunoglobulin-like domains (IgFLN(1-24)) (30). The CD face of Ig-like repeats of FLNa (IgFLNa), the major nonmuscle isoform of FLNs, represents a common interface for FLN-ligand interaction (31)(32)(33). Interestingly, it was recently demonstrated that FLN ligands can associate with several IgFLNa domains belonging to the same subgroup (34). Among group A, which contains seven IgFLNa repeats, IgFLNa21 binds GPIbα, β7 integrin, and migfilin with the highest affinity (31,32,34). Here, molecular modeling, site-directed mutagenesis, and cell biological studies were used to obtain structural and functional insights into the ASB2α E3 ubiquitin ligase complex. Cell Adhesion to Fibronectin-Fibronectin (BD Biosciences) was immobilized overnight at 4°C in 96-well plates (50 µg/ml) in PBS. Wells were then saturated with 5% BSA in PBS for 1 h at room temperature and washed three times with PBS. 
PLB985 cells stably transfected with zinc-inducible vectors encoding ASB2 proteins or the corresponding empty vector were cultured with or without ZnSO4 for 16 h, loaded with 0.5 µM calcein AM in HBSS without Ca2+ and Mg2+ (HBSS−) containing 0.5% BSA, and washed once in HBSS−, 1 mM EDTA. Adhesion to fibronectin was assayed using 200,000 cells/well in HBSS−, 0.5% BSA in the absence or presence of 1 mM MnCl2 to activate integrins for 10 min. Nonadherent cells were removed with 1 to 3 gentle washes with PBS containing 1 mM MnCl2 (to maintain integrin activation) or PBS alone (to suppress integrin activation). Fluorescence intensity was quantified using a microplate fluorescence reader FLx-800 (Bio-TEK). The percentage of adherent cells was calculated as follows: (fluorescence intensity of adherent cells/fluorescence intensity of cells plated) × 100. Each assay was performed in triplicate and at least four independent experiments were done. Statistical analyses were performed using Prism software. All p values were calculated using the Mann-Whitney test. Cell Spreading-Cell spreading assays were carried out as described (24). Immunofluorescence Microscopy-Immunofluorescence analyses were performed essentially as described (24). To better visualize ASB2α and FLNa localization on stress fibers 8 h after transfection of HeLa cells, immunofluorescence analyses were performed in cytoskeleton buffer containing 10 mM MES, pH 6.9, 5 mM glucose, 150 mM NaCl, 5 mM EGTA, 5 mM MgCl2, and the cells were pretreated with the same buffer containing 0.1% Triton X-100 for 30 s before paraformaldehyde fixation. Secondary antibodies used were Cy3-coupled goat anti-mouse (Jackson Laboratories). F-actin was visualized with Alexa 633-phalloidin (Fisher) diluted 1:500. Slides were viewed with a Zeiss Axio Imager M2 using a ×63/1.3 oil DIC Plan Apochromat objective (Zeiss). Images were acquired and processed using AxioVision software and an AxioCam MRm camera (Zeiss). Pearson correlation coefficients of the ASB2 protein co-localization with FLNa are mean ± S.E. from 5 random regions from 5 ASB2α-expressing cells. 2 µg of anti-FLNa antibodies or control IgG and 30 µl of protein A-Sepharose suspension (GE Healthcare) were added to 880 µg of whole cell extracts of demembranated cells. After 3 h on ice, beads were washed twice in XT buffer (50 mM Pipes, 50 mM NaCl, 150 mM sucrose, 40 mM Na4P2O7·10H2O, 0.05% Triton X-100) supplemented with 1 mM Na3VO4, 50 mM NaF, 25 mM β-glycerophosphate, 2 mM sodium pyrophosphate, and 1% protease inhibitor mixture. After 5 min of boiling in Laemmli buffer, samples were resolved by SDS-PAGE and analyzed by immunoblotting. In Vitro Binding Assays-7 µl of 1PNA serum or 15 µl of its preimmune counterpart and 30 µl of protein A-Sepharose suspension were added to 500 µg of whole cell extracts of demembranated cells. After 3 h on ice, beads were washed twice in XT buffer supplemented with 1 mM Na3VO4, 50 mM NaF, 25 mM β-glycerophosphate, 2 mM sodium pyrophosphate, and 1% protease inhibitor mixture. The immobilized GFP-ASB2αANK(1-10) protein was further incubated with 10 µg of Escherichia coli extracts expressing GST or GST-IgFLNa21 in whole cell extract buffer for 3 h on ice. Beads were washed twice as described above. After 5 min of boiling in Laemmli buffer, samples were resolved by SDS-PAGE and analyzed by immunoblotting. 
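The adhesion read-out described above is a simple ratio followed by a non-parametric group comparison. The sketch below illustrates that calculation with made-up fluorescence values; the function name and the numbers are illustrative assumptions, not the authors' data or analysis scripts.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def percent_adhesion(f_adherent, f_plated):
    """Percentage of adherent cells from calcein fluorescence readings."""
    return 100.0 * np.asarray(f_adherent, float) / np.asarray(f_plated, float)

# Hypothetical fluorescence readings (arbitrary units) for four independent
# experiments, each already averaged over triplicate wells.
control = percent_adhesion([310, 295, 330, 305], [1000, 980, 1020, 990])
asb2a = percent_adhesion([520, 540, 505, 560], [1005, 990, 1015, 1000])

# Two-sided Mann-Whitney comparison between empty-vector and
# ASB2alpha-expressing cells, as a non-parametric test of the difference.
stat, p = mannwhitneyu(asb2a, control, alternative="two-sided")
print(f"control: {control.mean():.1f}%  ASB2a: {asb2a.mean():.1f}%  p = {p:.3f}")
```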
Alternatively, pulldown assays were performed using 5 µg of purified GST-IgFLNa21, GST-IgFLNa21AA/DK, or GST-IgFLNb21 bound to glutathione-agarose beads (Macherey-Nagel) and 0.4 µg of purified ASB2αANK(1-10)-His6. After two washes in dialysis buffer, bound proteins were fractionated by SDS-PAGE and analyzed by protein staining and Western blotting. Molecular Modeling-Modeling was performed using the Accelrys modules Homology, Discover, Docking, and Biopolymer, run within InsightII (2005 version) on a Silicon Graphics Fuel workstation. Docking of the ASB2α N-terminal Peptide into the IgFLNa21 Domain-The FLNa-binding motif within the 20-residue ASB2α N-terminal peptide was identified and modeled based on structural homology to the integrin β2 (PDB code 2JF1_T), integrin β7 (PDB code 2BRQ_C), and migfilin (PDB code 2W0P_C) motifs, using the Align 123 and Homology programs. The residues located at the N- and C-termini of the ASB2α β-strand motif were initially built in an extended conformation. Energy was minimized using the Discover consistent valence force field, a forcing constant of 100 kcal/mol, and the steepest descent and conjugate gradient protocols. The common β-strand served to preposition the ASB2α N-terminal peptide into the IgFLNa21 CD groove in the same way as migfilin. The resulting preliminary structure was then submitted to the Affinity program, an automatic flexible docking refinement procedure applied to predefined residues from the binding interface. The final interaction energy was −90 and −42 kcal for the van der Waals and electrostatic components, respectively. The criteria used for identifying hydrogen bonds were a donor-acceptor distance ≤3 Å and a minimum donor-proton-acceptor angle of 120°. The criterion used for identifying an interaction between hydrophobic groups was a distance ≤5 Å. ASB2α-induced FLNa Degradation Regulates Adhesion of Hematopoietic Cells to Fibronectin-We previously found that ASB2α expression prevented cell spreading and inhibited initiation of migration, and that these effects were recapitulated by knocking down FLNa and FLNb (24,29). To assess the role of ASB2α-induced FLN degradation in hematopoietic cell adhesion, myeloblastic PLB985 cells stably transfected with zinc-inducible vectors encoding ASB2α, the E3 ubiquitin ligase-defective mutant ASB2αLA, or the corresponding empty vector control were cultured with or without ZnSO4, labeled with calcein AM, and allowed to adhere on fibronectin-coated wells in the absence or presence of Mn2+ to activate integrins. Loss of FLNa was observed only in cells expressing ASB2α (Fig. 1A). As expected, when adhesion was performed in the absence of Mn2+, only a low level of adhesion of cells was observed (supplemental Fig. S1A). However, cell adhesion was greatly enhanced with Mn2+ (Fig. 1B). Intriguingly, upon integrin activation by culturing the cells in the presence of Mn2+, percentages of adherent cells expressing ASB2α after the first and second washing steps in PBS alone were significantly increased (Fig. 1B). However, no significant differences were observed following washes in PBS containing Mn2+ (supplemental Fig. S1B). To extend and further validate the finding that ASB2α controls integrin-mediated adhesion of hematopoietic cells to fibronectin, we established PLB985 cells lacking FLNa expression by transfecting cells with a vector encoding an shRNA against FLNa (Fig. 1C). 
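The geometric contact criteria quoted above translate directly into simple coordinate checks. The sketch below is an illustrative implementation with toy coordinates; it is not taken from the docking models themselves.

```python
import numpy as np

def distance(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def angle_deg(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def is_hydrogen_bond(donor, proton, acceptor, d_max=3.0, angle_min=120.0):
    """Criteria quoted in the text: donor-acceptor distance <= 3 A and
    donor-proton-acceptor angle >= 120 degrees."""
    return distance(donor, acceptor) <= d_max and \
           angle_deg(donor, proton, acceptor) >= angle_min

def is_hydrophobic_contact(atom1, atom2, d_max=5.0):
    """Criterion quoted in the text: a distance <= 5 A between hydrophobic groups."""
    return distance(atom1, atom2) <= d_max

# Toy coordinates (Angstrom) standing in for, e.g., the ASB2alpha Ser-11 hydroxyl
# donating to the IgFLNa21 Ala-2281 backbone carbonyl.
donor, proton, acceptor = [0.0, 0.0, 0.0], [0.95, 0.0, 0.0], [2.8, 0.4, 0.0]
print(is_hydrogen_bond(donor, proton, acceptor))         # True
print(is_hydrophobic_contact([0, 0, 0], [4.2, 1.0, 0]))  # True
```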
As was observed for ASB2α-expressing cells, adhesion of PLB985 FLNaKD cells was significantly higher than adhesion of PLB985 LucKD control cells after washing steps in PBS alone (Fig. 1D). Taken together, our results suggest that integrin-dependent adhesion of hematopoietic cells is sustained in the absence of FLNa and establish a role for ASB2α in the regulation of hematopoietic cell adhesion through FLNa degradation. The N-terminal Region and ANK 1 to 10 of ASB2α Are Necessary to Target FLNa-As previously observed in transfected HeLa cells (24), (i) ASB2α was transiently colocalized with F-actin (Fig. 3A); (ii) ASB2α is diffuse throughout the cytoplasm when FLNa is degraded (Fig. 3B); and (iii) E3 ubiquitin ligase defective mutants of ASB2α do not degrade FLNa and accumulate on stress fibers (Fig. 3, A and B). These results suggest that colocalization of ASB2α with F-actin may be the result of ASB2α association with FLNa. We therefore examined the subcellular localization of the ASB2ΔN N-terminal deletion mutant in transfected HeLa cells. ASB2ΔN did not accumulate on stress fibers (Fig. 3, A and B). Accordingly, deletion of the N-terminal region in ASB2α abrogated its ability to induce degradation of endogenous FLNa in transfected HeLa cells (Fig. 3, B and C). Collectively, our data indicated that ASB2α residues 1-20 encompass a major determinant for ASB2α colocalization with FLNa and subsequent polyubiquitylation and degradation by the proteasome. We next questioned whether ASB2α residues 1-20 were sufficient for colocalization with FLNa onto stress fibers. This domain was therefore fused to GFP and expressed in HeLa cells. The resulting GFP-ASB2α (residues 1-20) protein was diffuse throughout the cytoplasm and the nucleus (Fig. 4A), indicating that the N-terminal region specific to ASB2α is not sufficient for ASB2α recruitment to stress fibers. To further delineate the ASB2α domain required for the targeting of FLNa, several C-terminal deletion mutants of ASB2α were constructed and their subcellular localization assessed in HeLa cells. In contrast to GFP-ASB2αANK(1-3), GFP-ASB2αANK(1-5), GFP-ASB2αANK(1-7), and GFP-ASB2αANK(1-9), GFP-ASB2αANK(1-12) accumulates onto stress fibers (Fig. 4A). As expected, deletion of amino acids 1-20 (GFP-ASB2ΔNANK(1-12)) abrogated stress fiber localization of this deletion mutant (Fig. 4A). To better visualize ASB2α and FLNa localization on stress fibers, cells were permeabilized with Triton X-100 before fixation. This treatment removes the cell membrane and membrane-associated proteins but leaves behind the cytoskeleton and cytoskeletally associated proteins. Eight hours post-transfection, GFP-ASB2αANK(1-12) and GFP-ASB2αANK(1-10) colocalized with FLNa and F-actin in stress fibers (Fig. 4B). These results indicate that ANK 1 to 10, together with the N-terminal region specific to ASB2α, is required for colocalization of ASB2α with FLNa on stress fibers. Identification of an IgFLNa21-binding Motif Encompassing ASB2α Residues 8-16-We next questioned whether ASB2α could target FLNa for degradation through an interaction between its N-terminal region and FLNa (Fig. 5A). A prototypic structural FLNa-binding motif has recently been characterized through a number of x-ray and NMR-based studies of complexes between the IgFLNa21 and peptides derived from either the tails of integrin β2 or β7, or the N-terminal region of migfilin (31,32,34,36). 
This 8-10-residue-long β-strand motif forms an anti-parallel β-sheet with the IgFLNa21 C β-strand and anchors into the CD groove through hydrophobic contacts, especially with the Ile-2283 and Phe-2285 residues from the D β-strand (Fig. 5B,C). Alignment of the ASB2α N-terminal peptide with filamin-binding peptides derived from either the tails of integrin β2 or β7, or the N-terminal region of migfilin predicts the presence of a 9-residue hydrophobic β-strand in the ASB2α N-terminal peptide (residues 8-16) (Fig. 5A), structurally homologous to those in migfilin and β integrins. Interestingly, the most stringent element of sequence conservation detected is Ser-11, which was shown to play a critical role in anchoring the motif to the D strand via hydrogen bonding. These results suggest that this newly identified structural ASB2α motif might dock into the cavity delineated by the IgFLNa21 CD hairpin similarly to the migfilin and integrin β FLNa-binding motifs. Indeed, the molecular modeling of the complex between the ASB2α N-terminal peptide and the IgFLNa21 domain, based on the x-ray structure of the migfilin motif-IgFLNa21 complex, revealed that the ASB2α motif could fit into the hydrophobic IgFLNa21 CD groove without steric clashes (Fig. 5D). Binding is first mediated by hydrogen bonding between the ASB2α and IgFLNa21 C β-strand backbones and by a side chain/main chain hydrogen bond between ASB2α Ser-11 and IgFLNa21 Ala-2281. The interaction is then anchored by a dense network of stacking interactions between ASB2α Phe-13 and IgFLNa21 Ile-2273 (C strand) and Ile-2283 and Phe-2285 (D strand). The complex is further stabilized by interactions involving residues specific to ASB2α, such as Leu-12 and His-14, both stacked with IgFLNa21 Ala-2272. Tyr-9, Ser-11, and Phe-13 of ASB2α Are Required to Target FLNa to Degradation-Because we previously showed that colocalization of ASB2α with F-actin may be the result of ASB2 association with FLNs, we examined the subcellular localization of ASB2α proteins mutated within the putative IgFLNa-binding site in transfected HeLa cells. In contrast to GFP-ASB2α, GFP-ASB2αY9F, GFP-ASB2αS11D, and GFP-ASB2αF13E did not co-localize with FLNa to stress fibers (Fig. 6A). To better quantify these observations, we measured the co-localization correlation coefficient between ASB2α and FLNa staining. The Pearson coefficient for GFP-ASB2α-expressing cells was 0.539 ± 0.117; for GFP-ASB2αY9F-expressing cells, 0.006 ± 0.085; for GFP-ASB2αS11D-expressing cells, −0.003 ± 0.107; and for GFP-ASB2αF13E-expressing cells, −0.050 ± 0.129. To understand the structural determinants of these observations, ASB2αS11D- and F13E-mutated peptides were modeled and docked into the IgFLNa21 hydrophobic CD groove as described for the wild-type ASB2α N-terminal peptide (supplemental Fig. S2, A and B). In both cases, there was hydrogen pairing between the ASB2α-mutated motifs (residues 8-16) and the C β-strand backbones, but the respective complexes presented different degrees of stability relative to the wild-type complex, as estimated by an energy score. The ASB2αF13E mutation was the most deleterious, inducing a marked decrease in the van der Waals energy absolute value, well in line with the loss of the critical hydrophobic contacts between ASB2α Phe-13 and IgFLNa21 Ile-2273, Ile-2283, and Phe-2285. 
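The Pearson coefficients quoted here are pixel-wise intensity correlations between the two fluorescence channels within a region of interest. A minimal sketch of such a computation is given below; it is an assumed, generic implementation on synthetic images, not the authors' image-analysis pipeline.

```python
import numpy as np

def pearson_colocalization(channel_a, channel_b, mask=None):
    """Pixel-wise Pearson correlation between two fluorescence channels.

    channel_a, channel_b : 2-D intensity arrays (e.g. GFP-ASB2a and FLNa).
    mask : optional boolean array selecting the region of interest.
    """
    a = np.asarray(channel_a, dtype=float)
    b = np.asarray(channel_b, dtype=float)
    if mask is not None:
        a, b = a[mask], b[mask]
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

# Toy example: a correlated pair of images gives a coefficient close to 1,
# an uncorrelated pair gives a value close to 0.
rng = np.random.default_rng(0)
flna = rng.random((64, 64))
print(round(pearson_colocalization(flna + 0.1 * rng.random((64, 64)), flna), 3))
print(round(pearson_colocalization(rng.random((64, 64)), flna), 3))
```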
Surprisingly, the ASB2αS11D mutation caused a slight increase in the van der Waals energy absolute value, partially compensating for the decrease in the electrostatic energy absolute value, in correlation with a subtle reorganization and stabilization of the hydrophobic cluster. In addition, there was still a possibility of a side chain/main chain hydrogen bond between Asp-11 and IgFLNa21 Ala-2281. Remarkably, it was impossible to obtain a low energy conformation for the ASB2αY9F-IgFLNa21 complex, probably due to the observed perturbation in the initial orientations of the mutated motif side chains. These results further suggest that ASB2α residues 8-16 comprise a determinant for ASB2α binding to FLNa. Accordingly, mutation of Tyr-9, Ser-11, or Phe-13 in ASB2α abrogated its ability to induce degradation of endogenous FLNa in transfected HeLa cells (Fig. 6B). Although the ASB2αY9F protein has intrinsic E3 ubiquitin ligase activity (Fig. 6C), it does not induce degradation of endogenous FLNa in PLB985/MT-ASB2αY9F cells induced to express ASB2αY9F with zinc (Fig. 6D). We then assessed whether mutation of one of these key residues, Tyr-9, disrupted the ability of ASB2α to stabilize adhesion of hematopoietic cells. Indeed, in contrast to cells expressing wild-type ASB2α, adhesion of ASB2αY9F-expressing cells was not sustained following integrin activation by Mn2+ and washes with PBS alone (Fig. 6E). Furthermore, expression of ASB2αY9F, but also of ASB2αS11D and ASB2αF13E, did not affect the spreading of transfected NIH3T3 cells on fibronectin-coated slides (Fig. 6F), as previously observed for ASB2α E3 ligase defective mutants (24). Collectively, our data indicated that ASB2α residues 8-16 encompass a major determinant for FLNa binding, subsequent polyubiquitylation and degradation by the proteasome, and regulation of cell spreading and cell adhesion. DISCUSSION Physical contact between leukemia cells and the bone marrow microenvironment provides a refuge for minimal residual disease. Of importance, interaction of β1 integrins with fibronectin is involved in acquired chemoresistance of AML cells (10,11,37). In this context, deciphering the molecular mechanisms controlling integrin-dependent adhesion of normal hematopoietic and leukemia cells may ultimately lead to new treatment strategies that specifically target leukemia cells. Although players that activate integrins have been described, few players that inhibit integrins have been identified so far (38,39). Among them, FLNa has been proposed to compete with talin for binding to the cytoplasmic tail of integrin β subunits (32). We recently demonstrated that ASB2α regulates FLNa functions via proteasomal degradation of FLNa (24), suggesting that ASB2 may contribute to integrin activation. We therefore assessed whether ASB2α, through FLNa degradation, plays a role in the regulation of integrin-dependent functions in hematopoietic cells. We found that expression of ASB2α significantly sustained integrin-dependent adhesion of hematopoietic cells to fibronectin. Of importance, FLNa knockdown recapitulated ASB2α effects on hematopoietic cell adhesion, and concomitant FLNb knockdown was not required. This is in contrast to the spreading or migration defects previously observed in adherent cells following ASB2α expression or FLNa and FLNb double knockdown (24,29). However, the levels of FLNb and FLNc are low in PLB985 cells (25) and so may be insufficient to compensate for the loss of FLNa in these cells. 
Our results are in agreement with previous work in HT1080 fibrosarcoma cells and Jurkat T lymphoblasts showing that loss of FLNs increased the percentage of non-motile cells plated on fibronectin (29). The role of ubiquitin-mediated proteasomal degradation in the control of hematopoiesis has recently been highlighted by the fact that c-Myc stability is controlled by the SCF Fbw7 E3 ubiquitin ligase in hematopoietic stem cells (40). In fact, ubiquitin-mediated degradation is one of the major pathways for controlled proteolysis in eukaryotes. In this pathway, E3 ubiquitin ligases that determine the specificity of protein substrates represent a class of potential drug targets for pharmaceutical intervention. Although proteasome inhibition has proved to be of therapeutic utility, the strategy of modulating the activity of E3 ubiquitin ligases is more specific. In this regard, characterization of the various E3 ubiquitin ligases and their respective substrates and understanding the signals that regulate specific ubiquitin ligation events should contribute to the development of new therapies that target the ubiquitin system. Our data provide evidence that the N-terminal region specific to the hematopoietic isoform of ASB2 plays roles in the targeting of FLNa to proteasomal degradation. Indeed, deletion of this domain or mutation of Tyr-9, Ser-11, or Phe-13 abolished the recruitment of ASB2α to actin stress fibers and completely abrogated the ability of ASB2α to induce FLNa degradation. Of interest, mutation of Tyr-9 of ASB2α abolished ASB2α effects on adherent cell spreading and adhesion of hematopoietic cells. The most striking feature of this N-terminal region is that it encompasses residues (8 to 16) that share structural homology with the binding domains of several IgFLNa21 ligands (31)(32)(33)(34). However, our results indicate that, in addition to this region, ankyrin repeats 1 to 10 of ASB2α are necessary for in vivo colocalization with FLNa. Interestingly, we further showed that the ASB2αANK(1-10) protein, which contains the N-terminal region, can bind directly to the IgFLNa21 domain. As expected, mutation of Ala-2272 and Ala-2274 in strand C of IgFLNa21 strongly inhibited ASB2αANK(1-10) binding, as previously observed for β7 integrin or migfilin (31,32,34), suggesting that ASB2α, integrin β7, and migfilin may bind to a similar site on IgFLNa21. However, because the whole group A of IgFLNa repeats can bind a set of ligands including β integrins and migfilin (34), we do not exclude that other regions of FLNa may also contribute to the ASB2α-FLNa interaction. Our results reinforce the view that the CD face of IgFLNa is a common binding interface for FLN partners and suggest that ASB2α may compete with FLN partners such as β integrins and migfilin for FLNa binding. We therefore cannot exclude the possibility that ASB2α affects integrin-dependent functions through the dissociation of FLNa partners from FLNa. In this context, it should be mentioned that, as observed in FLN-depleted cells following ASB2α expression or FLN knockdown, cells depleted in migfilin exhibit less cell spreading (41). Moreover, it is worth noting that binding of migfilin to FLNa may promote integrin activation by dissociating FLNa from integrins (42). Furthermore, displacement of FLNa from integrin tails or from migfilin by ASB2α may allow FLNa polyubiquitylation and, subsequently, acute proteasomal degradation of all FLNa molecules. 
We have previously demonstrated that ASB2α induces degradation of all three filamins (29), whereas ASB2β induces degradation of FLNb but not FLNa (21,22). Our findings that the N-terminal region specific to the ASB2α isoform is required for FLNa degradation, and that ASB2αANK(1-10) preferentially binds IgFLNa21, may help to explain the specificity of ASB2 proteins toward FLNa and FLNb. It will nonetheless be important to identify residues involved in the molecular interactions between ASB2 proteins and FLNa and FLNb to further our understanding of the specificity of ASB2 proteins toward FLNs. In conclusion, our structural and cell biology studies have revealed a region of ASB2α that is involved in the recruitment of its substrate, FLNa. By inducing FLNa degradation by the proteasome, ASB2α may regulate integrin-dependent functions and thus hematopoietic stem cell fate within the niche.
5,860.4
2011-07-07T00:00:00.000
[ "Biology" ]
Confinement in QCD and generic Yang-Mills theories with matter representations We derive the low-energy limit of quantum chromodynamics (QCD) and provide evidence that in the 't Hooft limit, i.e. for a very large number of colors and increasing 't Hooft coupling, quark confinement is recovered. The low-energy limit of the theory turns out to be a non-local Nambu-Jona-Lasinio (NJL) model. The effect of non-locality, arising from a gluon propagator that fits quite well the profile of an instanton liquid, is to produce a phase transition from a chiral condensate to an instanton liquid as the coupling increases with lower momentum. This phase transition suffices to move the poles of the quark propagator into the complex plane. As a consequence, free quarks are no longer physical states in the spectrum of the theory. I. INTRODUCTION One of the most important open problems in physics is the question of quark confinement. Quarks are never seen free but only in bound states (cf. Refs. [1,2] and references therein). Several mechanisms have been proposed, but none of them has ever been derived fully analytically from theory, i.e. quantum chromodynamics (QCD). An exception can be found in Kenneth G. Wilson's semianalytic approach to QCD regularised on the lattice, for which an area law for confinement at strong couplings has been demonstrated [3]. Some criteria have been obtained for confinement in four dimensions. Kugo and Ojima were able to obtain a well-known criterion starting from a reformulation of BRST invariance [4,5]. Similarly, Nishijima and his collaborators pointed out some constraints to grant confinement [6][7][8][9][10]. A proof of confinement exists in supersymmetric models where a condensate of monopoles, as in a type II superconductor, is seen [11,12]. Indeed, the exact β function for the Yang-Mills theory is known for some supersymmetric and non-supersymmetric models [13][14][15][16]. Lattice simulations have calculated the Yang-Mills theory beta function by using the RG evolution (in several schemes) in order to connect the regularisation scale with the infrared behavior, cf. e.g. Ref. [17]. By tuning empirical parameters, the beta functions of the models can be made consistent with the lattice results. Different confinement criteria and their overlapping regions are presented in Ref. [18]. 
Due to the discovery of Gribov copies [19] and their possible handling as proposed by Zwanziger [20], studies on confinement in Landau gauge seemed to indicate a gluon propagator running to zero in the infrared, while the ghost propagator had to run to infinity faster than in the free case. This qualitative picture is essential to have an idea of the potential between quarks. Measures of the gluon and ghost propagators [21][22][23] and the spectrum [24,25] on the lattice have shown that in a non-Abelian gauge theory without fermions a mass gap appears, in evident contrast with the scenario devised by Gribov and Zwanziger, which in its original formulation is not able to accommodate this mass gap. As shown in several theoretical works, the behavior seen on the lattice should be expected [26][27][28][29][30][31]. These works provide closed-form formulas for the gluon propagator with a number of fitting parameters. Indeed, a closed analytical formula for the gluon propagator is an important element for obtaining the low-energy behavior of QCD in a manageable effective theory to prove confinement. For the same aim, the behavior of the running coupling in the infrared limit is essential [32,33] (see also the review [34]). An instanton liquid picture seems to play a relevant role [35,36]. Confinement in its simplest form can be seen as the combined effect of a potential obtained from the Wilson loop of a Yang-Mills theory without fermions and the running coupling, yielding a linearly increasing potential in agreement with lattice data [37]. Note that in 2+1 spacetime dimensions the theory is only marginally confining, as there is no running coupling and the potential increases only logarithmically. Still, also in this lower dimension confinement is granted [38]. An essential part in our understanding of confinement in QCD is strongly linked to a proper derivation of the low-energy limit of the theory. In this direction, a couple of seminal papers were written by Gerard 't Hooft for 1+1 spacetime dimensions [39,40]. Two important results were obtained by 't Hooft in these papers: 1) A good understanding of the low-energy behavior of the theory could be obtained in principle by considering the limit of the number of colors N_c running to infinity while keeping the product N_c g^2 constant, with α_s = g^2/(4π) the strong coupling. In this limit the coupling goes to zero, facilitating a perturbative approach. 2) In 1+1 spacetime dimensions the theory can be solved and provides the meson spectrum of the theory. One of the conclusions for 3+1 spacetime dimensions was that a discrete spectrum of the Hamiltonian can grant the appearance of condensates, providing the right set-up for quark confinement through a string model [39,41]. Indeed, in Refs. [38,42] a discrete spectrum was obtained for a Yang-Mills theory without quarks, confirming lattice results. Without providing explicitly the spectrum, this was also proved mathematically [46]. 
In a series of works [31,47,48], it was recently proved that in the 't Hooft limit of a large number of colors the low-energy limit of QCD is given by a non-local Nambu-Jona-Lasinio (NJL) model [49][50][51][52][57]. The local version of the NJL model, as initially conceived in Refs. [49,50], does not confine. For bound states obtained after bosonization [58], there is a threshold for the decay into quark and antiquark as free states, which have never been observed. Non-locality can help to remove such a problem [59]. The aim of this paper is to provide evidence that the non-local NJL model derived from QCD in Refs. [31,47,48] is indeed confining according to the principles given in Refs. [59,60]. This paper is phenomenological in nature, and some relevant approximations are involved to solve the QCD equations in the low-energy limit. The most relevant of these is the 't Hooft limit N_c → ∞ with the 't Hooft coupling λ = N_c g^2 kept finite but large. Accordingly, we expand in 1/λ and terminate the expansion at leading order, neglecting higher-order correlations between fermionic degrees of freedom caused by the gluonic field. The paper is structured as follows. In Sec. II, we derive the NJL model from QCD and apply the bosonization and the mean field approximation, leading to the gap equation. In Sec. III, we present the proof of quark confinement for the low-energy limit of QCD based on the gap equation we obtained previously. In Sec. IV we give our conclusions. II. LOW-ENERGY LIMIT OF QCD AND NJL MODEL We consider the QCD Lagrangian, where D_μ = ∂_μ + igT^a A^a_μ is the covariant derivative and F^a_μν are the field strength tensor components. These can be obtained from igT^a F^a_μν = [D_μ, D_ν]. The sum over i is quite generic, as it implies the sum over quark flavors and colors. From the Euler-Lagrange equations we obtain the equations of motion in our Minkowskian metric conventions, and from the equations of motion we can derive, in principle, the full hierarchy of Dyson-Schwinger equations. We solve this hierarchy by a method proposed by Bender, Milton and Savage [61], recently exploited in Refs. [31,42,62-64]. Note that if a source term is added to the Lagrangian describing the vacuum expectation values, translational invariance is broken, as is expressed by the separate arguments in the Green functions. In this case, the vacuum expectation values of products of field operators expressing those Green functions are nonvanishing even in the case of a single field operator. This gives sense to starting to solve the tower of Dyson-Schwinger equations with just this one-point Green function. In the end, setting the source term to zero restores the observable physical picture. This procedure is quite similar to the approach provided by the generating functional. Accordingly, to the Lagrangian we add source terms like A^a_μ J^μ_a, q̄_i η_i and η̄_i q_i. For the sake of simplicity, we omit details on BRST ghosts. After this addition we can evaluate the functional derivatives with respect to these sources. Such a procedure yields the Dyson-Schwinger equations [62], where we have introduced the one-, two- and three-point functions for the gauge fields, and q^1_i(x) = ⟨q_i(x)⟩ and q^2_ij(x, y) = ⟨q_i(x) q_j(y)⟩ for the quark fields. We can use the exact solutions already provided in Ref. 
[31] to write the gluon field as a constant polarization vector times a scalar function, A^a_μ(x) = η^a_μ φ(x), where η^a_μ are the coefficients of the polarization vector with η^a_μ η^μ_b = δ_ab, φ(x) is a scalar field, and Δ(x − y) is the propagator of the scalar field. This ansatz is based on Refs. [29,30] and can help us to reach essential statements on confinement. Using the properties of the symbols η, one has η^a_μ η^μ_a = N_c^2 − 1 and Σ_i q^2_ii(x, x) = N_c N_f S(0). Accordingly, the first differential equation (5) takes the form of a nonlinear equation for the scalar field. At this point we can consider the 't Hooft limit N_c → ∞ with λ := N_c g^2 ≫ 1 being finite but large. Next, we can rescale the space variable as x → N_c g^2 x and look for a series solution in the 't Hooft coupling, yielding at leading order (after reverting the rescaling) an equation for φ_0, while the next-to-leading order yields an equation for φ_1. Truncating the series expansion in inverse powers of N_c g^2 at the first order, one ends up with an NJL model. Higher orders would generate interactions with more than four fermions involved. An attempt in this direction was presented in Ref. [65], where the next-to-leading order terms turn out to depend on products of the gluon Green functions and higher powers of pairs of fermionic fields. A. Zeroth order solution and Green function As a constant, m^2 = 2λΔ(0) can be considered as the mass squared. Even though the leading order equation ∂^2 φ_0(x) + m^2 φ_0(x) + λ φ_0^3(x) = 0 is nonlinear, we have found a solution expressed by Jacobi's elliptic function [66], where μ and θ are integration constants and sn(z|κ) is Jacobi's elliptic function of the first kind. Inserting this solution into the Green's equation for the two-point function, obtained as the next element of the tower of Dyson-Schwinger equations, this equation can be solved in momentum space, yielding the propagator (12) and the corresponding mass spectrum. This procedure ends up with a gap equation by inserting the Fourier transform of the propagator (12) back into m^2 = 2λΔ(0). It can be shown that by this gap equation the spectrum of the theory without fermions is correctly given [42], in excellent agreement with lattice data. Note that the momentum scale k^2 of the one-point function and the momentum scale p^2 of the two-point function are independent. Because of this, in the following section we use an empirical value to fix k^2. In order to complete this section, we argue that the zeros of the gluon propagator are genuine colorless glueball states. We start by considering the correlation function for the scalar glueballs given in Refs. [43,44]. Using methods explained in Ref. [31], one can see that, according to Ref. [45], the four-point correlator defining the correlation function of the glueball can be reduced to convolutions over one- and two-point functions. As the one-point function has no poles but zeros, the poles of the glueball four-point correlator are given by the poles of the two-point correlator. Therefore, these poles represent true colorless glueball states. B. First order solution By convoluting the propagator Δ with the right-hand side of Eq. (8) we obtain the first-order contribution. We observe that the first term is just a renormalization of the fermion mass and can be chosen to be zero via the condition S(0) = 0. The second term is the expected NJL interaction in the equation of motion of the quarks. 
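Reduced to a single variable, the leading-order equation ∂^2 φ_0 + m^2 φ_0 + λ φ_0^3 = 0 is an anharmonic oscillator whose closed form involves the Jacobi sn function quoted above; its periodic, amplitude-dependent character can be checked numerically. The following minimal sketch uses illustrative values of m, λ, and the amplitude (assumptions, not values from the paper) and compares the period of the integrated ODE with the exact quadrature from energy conservation.

```python
# Period check for phi'' + m^2 phi + lam phi^3 = 0 in one variable t.
# Exact period from energy conservation, after the substitution phi = a sin(u)
# which removes the turning-point singularity:
#   T = 4 * int_0^{pi/2} a du / sqrt(m^2 a^2 + 0.5 lam a^4 (1 + sin^2 u)).
import numpy as np
from scipy.integrate import solve_ivp, quad

m, lam, a = 1.0, 2.0, 1.5   # assumed parameters, for illustration only

T_quad = 4.0 * quad(
    lambda u: a / np.sqrt(m**2 * a**2 + 0.5 * lam * a**4 * (1.0 + np.sin(u)**2)),
    0.0, 0.5 * np.pi)[0]

# Direct integration of the ODE, started at the turning point phi = a, phi' = 0.
sol = solve_ivp(lambda t, y: [y[1], -m**2 * y[0] - lam * y[0]**3],
                (0.0, 3.0 * T_quad), [a, 0.0], max_step=1e-3, rtol=1e-9)
phi = sol.y[0]
peaks = [sol.t[i] for i in range(1, len(phi) - 1)
         if phi[i] > phi[i - 1] and phi[i] > phi[i + 1]]
T_ode = float(np.mean(np.diff(peaks)))
print(f"period from quadrature: {T_quad:.6f}, from ODE integration: {T_ode:.6f}")
```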
The solution φ(x) we obtained above can be inserted into Eq. (5). In the 't Hooft limit, we note that the term φ_0 is negligibly small compared to the NJL term φ_1. One can see this by observing that φ_0 ∼ λ^(1/4) and φ_1 ∼ λ. Therefore, in the strong coupling limit λ ≫ 1 the equation for the one-point function of the quark has just the NJL term. Such an equation can be recognized as the Euler-Lagrange equation for the one-point function of the quark obtained from an NJL model with a non-local interaction [52,59]. We see that Σ_η η^a_μ η^b_ν = δ^ab g_μν, where the sum runs over the polarizations η. Tracing out the color degrees of freedom with tr(T^a T^a) = N_c C_F, C_F = (N_c^2 − 1)/(2N_c), and with ψ_i(x) being spinors in Dirac and flavor space only, we are led to the NJL Lagrangian. After a Fierz rearrangement of the quark fields we obtain (cf. e.g. Refs. [53][54][55][56][57]) interaction terms such as ψ̄_i(x) iγ^5 ψ_j(y) ψ̄_j(y) iγ^5 ψ_i(x). C. Bosonization The Γ_α are understood as a set of combined Dirac and flavor matrices given by 1, iγ^5, γ^μ and γ^μ γ^5 after the Fierz rearrangement, together with the flavor matrices 1 and (1/2)λ_α relating quarks of equal and different flavor i and j in the adjoint representation. For Γ_α we have the conjugation rule γ^0 Γ_α† γ^0 = Γ_α, where α denotes the components of the adjoint flavor representation. Accordingly, the spinor ψ(x) spans all these spaces. As the coefficients of these two contributions are the same, the sum over the given 1 + 3 = 4 degrees of freedom can be reinterpreted as a sum over the components of a four-vector. Next we apply the bosonization procedure shown in Ref. [57] by adding scalar-isoscalar and pseudoscalar-isovector mesonic fields at an intermediate space-time location w = (x + y)/2 as auxiliary fields M_α(w) = (σ(w); π(w)) coupled to the nonlocal fermionic currents. After Fierz rearrangement, this sums up to the NJL action. By performing a nonlocal functional shift, the nonlocal quartic fermionic interaction can be removed. Instead, the fermion field starts to interact nonlocally with the mesonic fields. After a Fourier transform one obtains the action in momentum space, where the symbols with a tilde are used for the Fourier transformed quantities. D. Mean field approximation Out of the many different contributions obtained after Fierz rearrangement, the mean field approximation makes a choice that is phenomenologically justified. We can expand the physical mesonic fields σ(p) = σ̄ + δσ(p) and π(p) = δπ(p) about the vacuum expectation value ⟨σ⟩ = σ̄, where the expansion coefficient to zeroth order is the mean field approximation. This approximation, without any information about possible correlations, is sufficient for our means as it leads directly to the mass gap equation. In this approximation one obtains the simplified NJL action with G = 2Δ(0) and the unit space-time volume V^(4), where M_q(p) is the dynamical mass of the quark. The bosonization procedure yields an effective action containing the term ln det(p̸ − M_q(p)), where det denotes the direct product of a functional and an analytical determinant, the former in the Fock space transition between space-time points x and y, the latter in the Dirac and flavor indices. On the other hand, one has ln det(p̸ − M_q(p)) = tr ln(p̸ − M_q(p)). Taking the variation of the action S_bos with respect to σ̄ yields the latter quantity. Accounting for the dependence of M_q(p) on σ̄, this result can finally be re-inserted into Eq. (27) to obtain the dynamical mass equation. In this way, we have derived this equation directly from the QCD Lagrangian. Following Ref. 
[48] we do not consider the explicit dependence of M_0 = M_q on the momentum. We use this choice to obtain a qualitative picture of the dynamics, giving up the possibility of wavefunction renormalisation and the possibility to estimate the size of the error. After that, performing the Wick rotation, we obtain the mass gap equation in Euclidean space. III. QUARK CONFINEMENT The idea to understand quark confinement is strongly linked to the expected behavior of the roots of the gap equation (32), i.e. the poles of the quark propagator. The idea presented here is identical to the one presented in Ref. [68], though without employing a general model for the non-locality. To represent physical propagating degrees of freedom, one should expect solutions for these poles on the real axis. The effect of the gluonic interaction is to move such poles into the complex plane so that no decay into such degrees of freedom is ever expected and free quarks never propagate. This moves the solution from a chiral condensate phase to a confining phase for quarks at increasing coupling. Indeed, a full comprehension of such roots can only be achieved through the non-approximated gluon propagator Δ(p). Therefore, we performed an analysis of the lowest zero of Δ(p), finding out that for M_0 < 0.39 m_0 there are two distinct real zeros, while above this value the two zeros are given by two complex conjugate numbers. This can be seen in Fig. 1. The threshold at 0.39 m_0 should be seen as the point beyond which, mathematically speaking, a chiral condensate could possibly appear. We observe that the result depends critically on the mass gap value m_0. In turn, this value depends on an arbitrary integration constant and, therefore, should be fixed by experiment. This situation is similar to that of the constant Λ entering in asymptotic freedom, and it is possible that these two constants are related. Our best choice to fix this value is via the mixed gluonic-quark state f_0(500), which could in principle be identified with the σ meson in the NJL model, giving rise to the breaking of chiral symmetry. The gap equation (32) can be simplified by choosing m_q = 0, and the factor M_q can be cancelled from the numerator to remove the trivial case. This technique permits finding the fixed-point solution of the given iterative integral equation, avoiding the awkward issue of resumming all the iterates. For Λ = 1 GeV and m_0 = 0.512(15) GeV one obtains M_0 = 0.427(29) GeV as in Ref. [48], which is clearly above the threshold 0.39 m_0 ≈ 0.2 GeV and, therefore, indicates quark confinement. A. Understanding quark confinement As we have seen, with increasing coupling of the theory quark confinement arises at the point where the chiral condensates of different flavors perform a transition to a confined phase with an instanton liquid of gluon degrees of freedom. Neglecting the bare quark masses, we have found the critical effective quark mass for which such a transition happens. Our choice of the ground state for the gluon field represents quite well a Fubini instanton [67]. Therefore, we expect that the chiral condensate changes into an instanton ground state that could condense into a liquid. 
The presence of the instanton liquid removes single quarks from the physical spectrum of the theory. From the mathematical point of view this means that the poles in the quark propagator become complex. The vacuum of the theory appears to undergo a series of phase transitions while the gluon sector generates a mass gap by itself in a dynamical way. The presence of the mass gap in the gluon sector is pivotal for the appearance of the phase transitions in the quark sector and, in the last instance, for the confinement of quarks. IV. CONCLUSIONS AND OUTLOOK There are different approaches to understand the confinement of quarks. One of these is given by solutions of the gap equation of the dynamical quark mass. With a reasonable UV cutoff of Λ = 1 GeV and the glueball spectrum starting at the mass m_0 = 0.512(15) GeV of the f_0(500) resonance, the gap equation provides a value M_0 = 0.427(29) GeV for the dynamical quark mass, which turns out to be too large to allow for real-valued poles of the quark propagator. As a consequence, free quarks are no longer physical states of the theory and the quarks can be expected to be confined. Even though the multiple approximations applied in this approach do not allow for quantitative estimates, we have shown how this scenario is realized in the low-energy regime of QCD by taking into account the 't Hooft limit. By doing so, the low-energy limit of QCD turns out to be a well-defined non-local NJL model with all the parameters obtained from QCD. This entails a scenario where several condensates are formed that are expected to provide confinement in a regime of very low momentum and strong coupling. Having a low-energy limit of QCD permits several computations to be compared with experiments. Indeed, our first application was to the g − 2 problem [48], with a very satisfactory agreement with data. Further research has to show whether and to what extent our description of quark confinement depends on the given parameter values and whether a stricter derivation of observables allows for a comparison with experiments. V. ACKNOWLEDGEMENTS FIG. 1: The two lowest zeros of −p^2 = M_q(p) in the Euclidean domain, in units of m_0^2, split into real parts (green solid lines) and imaginary parts (red dashed lines), as a function of the ratio M_0/m_0. The zeros become complex for approximately M_0/m_0 > 0.39.
4,982.6
2022-02-28T00:00:00.000
[ "Physics" ]
A NEW SIMPLIFIED APPROACH TO FIND THE EQUIVALENT RESISTANCE OF ANY COMPLEX EQUAL RESISTIVE NETWORK We are familiar with the star-delta transformation for determining the equivalent resistance [4] between any two terminals of a complex network and for finding the overall current of a closed circuit, but sometimes the circuit becomes so complex that solving the network to find the equivalent resistance between two known terminals consumes a lot of time. So here we have developed a new simplified approach to find the equivalent resistance between the desired terminals of an equal-resistance network and to find the overall current in a closed circuit. INTRODUCTION Whenever we have a complex resistive network/circuit in which we cannot determine which branches of resistance are in parallel or in series, we have the star-delta transformation approach [4] to find the equivalent resistance of that resistive network. The star-delta transformation simply derives the relations between a star network and a delta network and vice versa. By using these relations we can convert a complex network into the simplest network, as per convenience, in series-parallel combinations and thus calculate the overall resistance of any network. It is quite a worthy approach to simplify a circuit/network, but as the circuit becomes more complex it consumes more time while solving the problem [2]. As we have no other approach to simplify complex networks, here we have generated a new methodology to solve complex equal-resistance networks. The main purpose of introducing this new approach is to simplify the circuit more efficiently and to minimize the time consumed while solving it. NEW METHODOLOGY We have generated a basic technique followed by a number of instructions; by following them we can obtain the solution for a complex network, find the equivalent resistance of an equal-resistance network, and find the overall circuit current. We have demonstrated our approach by designing a number of complex circuits having equal resistances and comparing the results with the star-delta transformation approach. It is quite interesting to note that the result obtained by this methodology is an exact replica of the one obtained by the star-delta transformation technique, without any error. PROPOSED ALGORITHM There are three instructions in this proposed algorithm; by executing them step by step we can find the equivalent resistance of a complex network and the circuit current. We can understand them by following an example (a numerical check is also sketched after the example below). (1) Keep intact all the branches connected to the terminals across which the equivalent resistance is to be found. Let these terminals be represented by "A" and "B". (2) Search for the resistive branch in the given network that would be the null-detection branch of the four-arm bridge. (3) After that, open that branch by removing the resistance between those terminals. Let these terminals be represented by "C" and "D". Complex circuit example (1) The complex network/circuit shown in figure (1) is used to find the resistance between terminals A and B. Figure (1) By the star-delta transformation approach [1], using the star-to-delta and delta-to-star conversion relations given below, we can simplify the network as per convenience and calculate its equivalent resistance; for the network shown in figure (1) it comes out to be 8 ohm between terminals A and B. 
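The idea can also be checked numerically without any star-delta algebra. The sketch below is a minimal illustration on a generic balanced four-arm bridge of equal resistors (a stand-in, not the specific circuit of figure (1)): it computes the A-B effective resistance from the graph Laplacian and confirms that opening the null-detection branch C-D leaves it unchanged.

```python
# Effective resistance of a resistor network from the graph Laplacian:
# R_AB = (e_A - e_B)^T L^+ (e_A - e_B), with L built from branch conductances.
import numpy as np

def equivalent_resistance(n_nodes, branches, a, b):
    """branches: list of (node_i, node_j, resistance_ohms)."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j, r in branches:
        g = 1.0 / r
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    e = np.zeros(n_nodes)
    e[a], e[b] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

# Nodes: 0 = A, 1 = C, 2 = D, 3 = B; all resistances equal (1 ohm).
R = 1.0
bridge = [(0, 1, R), (0, 2, R), (1, 3, R), (2, 3, R), (1, 2, R)]   # with C-D branch
no_bridge = [br for br in bridge if set(br[:2]) != {1, 2}]         # C-D branch opened

print(equivalent_resistance(4, bridge, 0, 3))     # 1.0 ohm
print(equivalent_resistance(4, no_bridge, 0, 3))  # 1.0 ohm, unchanged as claimed
```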
Figure (2) shows the star-delta transformation three-terminal network, and its transformation equations (from delta to star and from star to delta) are given below. Now, approaching the methodology described above and following those instructions, we find that the equivalent resistance of the network in figure (1) between terminals A and B is the same as the result from the star-delta transformation approach, i.e. 8 ohm. Figure (3) It is observed in figure (3) that the branch between terminals C and D is opened, as it is the null-detection branch of the four-arm bridge network. NOTE: whatever the value of the null-detection branch resistance, the overall network resistance will always remain the same. Some other complex networks are shown here, both techniques are applied to them, and it is observed that the network equivalent resistance remains the same. Complex circuit example (2) For the complex network shown in figure (4) [4], the equivalent resistance between terminals A and B by the star-delta transformation approach comes out to be 0.5 ohm. Figure (4) Now, by applying the proposed algorithm, we find that the equivalent resistance remains the same, 0.5 ohm, as shown in figure (5). Complex circuit analysis example (3) Let us take a more complex network, as shown in figure (6) on the next page. The same procedure is applied to this network and matched with the star-delta approach. By following the proposed algorithm the given network is sorted out and its result compared with the predefined approach. RESULT AND CONCLUSION Thus, it is observed that any complex network having equal resistances can be simplified by employing the methodology described in this paper. We can find the equivalent resistance between the desired terminals and also determine the closed-circuit current by placing a voltage source across those terminals where we have to find the equivalent resistance. What we conclude by employing this methodology is its time efficiency, i.e. this technique does not consume too much time to solve the complex network, as is observed with the star-delta transformation approach. In the star-delta transformation approach, we first have to search for the star- or delta-connected network and then convert it in a convenient way by using the star-delta transformation equations, simplifying the network step by step and redesigning it until the complete solution is obtained; but in this proposed methodology there is no need to memorize any relations and it needs hardly one or two steps to simplify the network. FUTURE SCOPE The methodology employed in this paper is limited to equal-resistance networks, but further work can be done for unequal resistive networks and also for finding the equivalent impedance of an A.C. network. This technique would be quite worthy if it proves successful for both the D.C. as well as the A.C. network.
1,454.6
2012-08-01T00:00:00.000
[ "Physics" ]
Generating Input Data for Microstructure Modelling: A Deep Learning Approach Using Generative Adversarial Networks For the generation of representative volume elements a statistical description of the relevant parameters is necessary. These parameters usually describe the geometric structure of a single grain. Commonly, parameters like area, aspect ratio, and slope of the grain axis relative to the rolling direction are applied. However, usually simple distribution functions like log normal or gamma distribution are used. Yet, these do not take the interdependencies between the microstructural parameters into account. To fully describe any metallic microstructure though, these interdependencies between the singular parameters need to be accounted for. To accomplish this representation, a machine learning approach was applied in this study. By implementing a Wasserstein generative adversarial network, the distribution, as well as the interdependencies could accurately be described. A validation scheme was applied to verify the excellent match between microstructure input data and synthetically generated output data. Introduction For modern applications, the microstructures and properties of steels have become exceedingly complex, utilising multiple phases and alloying concepts to improve mechanical properties for the specific use case. Even more basic materials, like dual-phase (DP) steels show complex correlations between their microstructure and mechanical properties, as well as their damage behaviour. The damage initiation and accumulation characteristics are especially important for the materials application in forming processes [1]. Here the damage introduced into the microstructure during forming will have a major influence on the components properties [2]. For dual phase steels the differences in strength between the two phases-ferrite and martensite-leads to strain partitioning in the microstructure, where the local microstructural neighbourhood is the key component for strain heterogeneity [3]. Thus, strain accumulates in ferrite, leading to specific damage behaviour for DP steels. Additionally, the morphology of a given dual-phase microstructure plays an important role in damage initiation and accumulation [4]. Multiple parameters of the microstructure have been identified that each play an important role, like grain size, martensite content, or grain shape. However, it is a very complex task to separate the multiple microstructural influences experimentally, since multiple parameters get changed when experimentally designing a new microstructure. To individually investigate each of the possible influencing factors, a numerical approach is more useful and commonly applied. The method of choice in that regard is microstructure modelling, which has been a key research focus in recent years and is continually developing. Commonly, two different methods are applied: Modelling the real microstructure and representing the microstructure by statistical distributions. For the first approach, the real microstructure is modelled based on microstructural analysis. For that procedure, mostly scanning electron microscopy is used to gather the required information in a suitable resolution [5]. The second option requires a statistical analysis of the base material. The acquired statistical information is then used to create small volume elements that show a good statistical representation of the microstructure [6]. 
A key requirement for these volume elements is that they need to contain enough information of the microstructure, while also being small enough to be clearly differentiated from the macroscopic structural dimension [7]. Due to this requirement, it is necessary to create statistical descriptions of the microstructure to reasonably represent it. To create a 2D representative volume element (RVE), the required data can be acquired from pictures of the microstructure, either via light optical microscopy (LOM) or scanning electron microscopy (SEM). Since the parameters of interest are on the order of magnitude of individual grains, SEM is used commonly. Here, multiple different detectors can be applied, while the electron backscatter diffraction (EBSD) detector delivers more in-depth information on each grain. The grain parameters that are most commonly applied in the creation of RVEs are the grain size, its shape (elongation, represented by the aspect ratio of the grain), and the slope of the grain describing the angle between the main axis of the grain and the x-axis of the picture (or the rolling direction). For three dimensional RVE these parameters need to be determined in all three directions in space. This can be done by serial sectioning, where the sample rotates inside a strong radiation beam and is scanned slice by slice, until a projection of the material can be acquired [8]. This method is, however, a very complex and time-consuming task. Thus, SEM or EBSD pictures from all three directions in space are often taken to create statistical distribution functions of the desired parameters depending on the respective spatial direction. For the generation of the RVE model, the statistical distribution of the microstructure needs to be translated from the material to input parameters. For that reason, the parameters are commonly described by distribution functions that follow a log-normal distribution or a gamma distribution. Most often, histograms are used to fit the distribution functions to [6,9,10]. It has to be mentioned though that histograms are very dependent on bin size. Thus the resulting distribution function is also dependent on the bin size. Therefore, Kernel Density Estimation (KDE) should be the prevalent method, since they deliver robust values, where every data point is weighted by a function depending on the Kernel chosen (e.g., Gaussian) [11]. For the damage behaviour of a material grain, shape and size both play a very important role, thus modern RVE generation algorithms take many parameters into account that describe the grain shape, like slope or elongation [12][13][14][15] . The input for these parameters are separate, independent distribution functions of the applied parameters. Most materials, however, show some kind of interdependence of the parameters among each other. Thus, this study investigated the interdependencies of the microstructural parameters for steel DP800 with the goal to generate input data for any RVE generator that depicts the relations between the relevant parameters. To describe these connections reliably, some kind of pattern recognition has to be applied to the input data of the EBSD pictures. Machine learning and deep learning methods are particularly suitable for this purpose, as they are able to reproduce the individual distributions of parameters and the relationships between them. They are often used for this purpose, in different areas of industry and science. 
Examples include the process industry [16], finance [17], the automotive sector [18], as well as research in the field of materials science [19]. Since machine learning has exactly the strengths needed to generate interdependent input data, a deep learning approach was used for this material. This allowed the generation of a virtually unlimited amount of input data, which follows the distribution functions of the individual parameters, but also exactly reflects the dependencies between individual parameters. Analysis of the Input Data from the Real Microstructure For the generation of statistically matching RVE, representing a specific microstructure, the analysis of the input parameters is a key factor. Usually, distribution functions of the input parameters are created and used to generate a discrete amount of input parameters for microstructure modelling. For the present study, a dual-phase steel (DP800) was utilised as the base material, since multiple important parameters are inherent to the microstructure that need to be characterised and represented by the RVE input data. In Figure 1, the microstructure of the DP800 steel is depicted as an EBSD picture. The different colours represent the different crystallographic orientations of ferrite, while martensite appears in white. where the x-axis is the rolling direction (RD) and the y-axis is the sheet normal (SN) [20]. In this study only one direction in space was chosen for the analysis, since strong differences between different directions might apply, which increases the difficulty and complexity for the appropriate fitting of the input data. The rolling direction (RD)-sheet normal (SN) plane was chosen, since it shows the elongation of the grains and therefore more microstructural parameters that need to be present in the RVE. From this picture it can already be seen that multiple different parameters are inherent to ferrite in the microstructure. Since the EBSD pictures yield no specific data for martensite, as it can not be indexed, the focus for this study was put on ferrite, where multiple parameters are characterised by the EBSD evaluation. For any kind of microstructure, modelling geometric parameters of the grains are needed as input. In Figure 2, the geometric parameters of each individual grain are depicted. All of the above mentioned parameters are obtained by using the MATLAB (Matlab R2019a-The Mathworks Inc., Natick, MA, USA) toolbox MTEX (MTEX version 5.3.0) [21]. This toolbox is able to analyse a lot of different characteristics of the microstructure, chief among which are the grain size, the aspect ratio (AR), and the slope, as well as the possible crystallographic orientation. Probably the most important aspect is the grains area, since the grain size also strongly influences mechanical properties. Apart form the size, the slope is an important factor and describes the tilt of a grain towards the x-axis of the EBSD picture taken, in this case the rolling direction (RD), where angles close to 0 • or 180 • means an elongation of the grain along the rolling direction. The last parameter is the AR, the ratio of the two half-axis a and b. A high AR stands for a bigger elongation, while an AR of 1 describes a circle. Both, the AR and slope influence the strain and stress concentration and distribution in the microstructure and are thus important to be depicted in the RVE. Apart from these geometrical factors, there are further material specific characteristics. 
The first is the grain orientation; this is especially important when a texture is present in the material. Next are the different phases, in the case of DP800 ferrite and martensite. Additionally, neighbourhood relations between the grains might be taken into account. All of the parameters mentioned above can be gathered from the data generated by the EBSD measurement. From this measurement, a list of grains and their corresponding characteristics is obtained and afterwards used for further analysis. For the sake of simplicity, the focus was put on ferrite in this study, as mentioned before. The geometrical factors of the grains were chosen for a deeper analysis, while the grain orientation was not taken into account, since no strong texture is present in the investigated material. For most RVE generators, the geometrical parameters (grain size, elongation, and slope) are implemented into the statistically generated microstructure model through singular distribution functions of the separate parameters. From these, separate sets of input are generated for each parameter that are not interconnected. However, when the parameters are plotted against each other as two-dimensional scatter plots, where each dot represents a unique grain, certain trends can be observed between the parameters (Figure 3). The scatter plots show the connectivity between the main parameters. In Figure 3a the grain size is plotted against the aspect ratio. Most grains are concentrated in the bottom left corner, meaning they possess a relatively small area and aspect ratio. The most noticeable feature, however, is that grains above 20 µm² in size show a tendency towards smaller aspect ratios of 2.5 or lower, while for smaller grains higher aspect ratios become increasingly likely. Another strong connection is visible in Figure 3c. Here it can be seen that more grains have a slope around 0° or 180°, both of which lie along the x-axis (RD). Especially notable is that towards a slope of 90°, which is perpendicular to the RD, a significantly smaller number of grains show aspect ratios above 2.5. This means that grains oriented perpendicular to the rolling direction are not as elongated as the grains aligned with it. The same can be said, to a lesser extent, for Figure 3b: comparing area and slope, a concentration at 0° or 180° is notable, as is the smaller number of grains with a larger area perpendicular to the rolling direction. A few dependencies can therefore be identified: bigger grains show comparably smaller aspect ratios, and they are also less likely to be oriented perpendicular to the rolling direction. Additionally, grains that have a larger aspect ratio tend to have a slope around 0° or 180°. This leads to some conclusions regarding the way the microstructure model needs to be created. Separate distribution functions that provide input data for the model generation are not suitable, since in that case the interdependencies observed above are not taken into account. If all parameters are generated individually, it is possible to create a grain with a large area of over 50 µm², a high AR near 5, and a slope near 90°. Such a grain does not exist in the real microstructure. Thus, a solution for the generation of input data is needed that takes the interdependencies of the parameters into account.
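A straightforward way to inspect such interdependencies before any model is fitted is to plot the per-grain parameters against each other, as described above. The following Python sketch is illustrative only; the file name and column names (area, aspect_ratio, slope) are assumptions and are not taken from the MTEX output of the original study.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-grain table exported from an EBSD analysis
# (assumed columns: area in µm², aspect_ratio, slope in degrees).
grains = pd.read_csv("grain_table.csv")

pairs = [("area", "aspect_ratio"), ("area", "slope"), ("aspect_ratio", "slope")]
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (x, y) in zip(axes, pairs):
    ax.scatter(grains[x], grains[y], s=5, alpha=0.4)  # one dot per grain
    ax.set_xlabel(x)
    ax.set_ylabel(y)
fig.tight_layout()
plt.show()
```

Plots of this kind make couplings such as "large grains rarely have aspect ratios above 2.5" immediately visible, which is exactly the information lost when only separate marginal distributions are fitted.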
Machine Learning Networks (MLN) For any problem that revolves around understanding core similarities of a given data set, machine learning (ML) is an ideal approach. Machine learning algorithms (MLA) are able to learn dependencies in data or even images that would be hard for a human analyst to grasp. They thrive especially on multidimensional data frames that show a certain degree of interconnectivity or interdependency. Among MLA, the deep learning method roughly replicates the operating principle of the human brain. These neural networks (NN) consist of a number of neurons that are structured in layers to reinforce learning procedures. Usually these layers consist of an input layer, a number of hidden layers, and an output layer. The schematic structure of an NN is depicted in Figure 4. Here, from each of the inputs, a synapse leads to the first hidden layer. For each input, a weight and a bias are applied to the input value. At the subsequent neuron in the hidden layer, the incoming values are summed up and an activation function is applied to the weighted sum. The output of the neuron is the input with the applied activation function, multiplied by a new weight. This new value is then used for the next layer, which would be the output layer in the case of the schematic representation shown in Figure 4. After the output is created, back propagation takes place, in which the weights of the neurons are updated to improve the NN quality [22]. The activation function is one of the important parameters that need to be chosen carefully for the MLA, while the weights of the neurons, as well as the biases, are the training parameters of the NN that are iteratively fitted during the learning process. The number of hidden layers in an NN is called the depth, while the number of neurons in each layer is called the width. The machine learning network to be applied in this study has to be able to represent a distribution of different parameters, as well as their interdependencies. For the description of any statistical distribution, unsupervised machine learning is best suited. Among generative machine learning methods, generative adversarial networks (GAN) are essentially the state of the art. They are especially useful for reproducing the statistical distribution of a set of parameters, as well as their interdependencies, whether the input is images or raw text-based data [24]. The training of GANs, however, is well known for being unstable and delicate, with the main problems being mode collapse, non-convergence, and diminished gradients [25]. The principle of the GAN is that it pits two NNs against each other: a generator and a discriminator. The generator generates data, trying to reproduce the input data as well as possible. The key mechanism of GANs is the adversary of the generator network, the discriminator. This NN learns to distinguish between generated and input data. The competition between the two networks leads to a distinct improvement in the results over the training epochs. The instability issues of the GAN model stem from the applied loss function. A loss function in neural networks reduces all aspects of a complex system, whether good or bad, to a single scalar number that allows for a ranking and comparison of solutions. Originally, the GAN model used a minimax loss function, where the generator tries to minimise the target function while the discriminator maximises it.
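To make the layer mechanics described above concrete, the short Python sketch below performs one forward pass through a single hidden layer (weighted sum, bias, activation, output weights). The layer sizes and random weights are arbitrary illustrations, not the networks used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three input features (for instance area, aspect ratio, slope) and one hidden layer of 8 neurons.
x = np.array([12.4, 1.8, 165.0])                 # one input sample
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)    # weights and biases of the hidden layer
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)    # weights and biases of the output layer

relu = lambda z: np.maximum(z, 0.0)              # activation applied to the weighted sum

h = relu(W1 @ x + b1)                            # hidden-layer activations
y = W2 @ h + b2                                  # network output (before any output activation)
print(y)
```

During training, back propagation adjusts W1, b1, W2 and b2 so that the output moves towards the target; the width (8 here) and depth (one hidden layer here) are exactly the hyperparameters tuned later in the study.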
To get rid of the major instabilities of the first GAN model, Arjovsky et al. [26] changed the loss function of the GAN model to the Wasserstein metric. This change leads to improved stability of the network during training, while still retaining an excellent capability to describe the parameter distributions and their interdependencies. The advantage of the Wasserstein implementation is that the generator can still learn even if the discriminator is well trained, in addition to showing no observable mode collapse. Training of the MLA To gather input data for the training of the machine learning networks, EBSD pictures of the microstructure along the rolling direction of the thickness of the steel sheet were taken at the Institute for Physical Metallurgy and Materials Physics of RWTH Aachen University. Only one direction in space was used for the MLA, since the differences between the directions are quite significant. With these pictures, a list of the grains in the section of the material can be obtained. Since MLN require a large amount of input data, a large area was measured in the same direction as indicated in Figure 1. In this way, slightly more than 3000 grains were captured to be used as input for training. As mentioned above, the geometric grain data of ferrite were chosen as training data. Additionally, the mean misorientation angle was used as a means to further validate the results. This angle defines the average deviation of the orientation inside a grain and is used to define grain boundaries for EBSD pictures. An implementation of the Wasserstein GAN (WGAN) algorithm was chosen, and the network was adapted to run on a Tesla P100 GPU, which decreases the required training time quite significantly. For the implementation of the WGAN scheme, two feedforward neural networks were applied: one as the discriminator and one as the generator. For the activation functions, a ReLU function was used for all layers except the output layer of the discriminator, which uses a sigmoid function, since the result of this layer has to be a probability and sigmoid functions give results between 0 and 1 [24]. For all of these implementations, the PyTorch library (version 1.5) was used [27]. The approach used to find the best possible NN for the input data is described in Figure 5 as a flow chart and in more detail in the text below. Figure 5. Flow chart of the training approach and calibration regimen applied to find the best-fit parameters for the generation of synthetic microstructure data. Neural networks require hyperparameters to be fully functional. These are a set of parameters that are defined for the NN before the training starts. The most important ones are the width and depth of the network, as explained above. These two hyperparameters are key for the training process, since they determine the number of neurons and the depth of the network. For each neuron, a weight exists, as explained above. By adapting the weights of the neurons for each processed batch, the network actually learns the target distribution. Therefore, the number of neurons in a layer, as well as the number of layers, contributes immensely to the quality of the trained NN. Thus, width and depth were iteratively fitted in this study.
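A minimal sketch of the kind of WGAN setup described above is given below, assuming tabular grain data with three columns. The layer sizes, clipping value, learning rate and critic-to-generator update ratio are illustrative defaults rather than the hyperparameters used in the study, and the sigmoid output layer mentioned above is omitted here in favour of the plain linear critic output of the original WGAN formulation.

```python
import torch
import torch.nn as nn

N_FEATURES = 3          # e.g. area, aspect ratio, slope (assumed)
LATENT_DIM = 16
CLIP_VALUE = 0.01       # weight clipping enforces the Lipschitz constraint
N_CRITIC = 5            # critic updates per generator update

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

generator = mlp([LATENT_DIM, 64, 64, N_FEATURES])
critic = mlp([N_FEATURES, 64, 64, 1])      # no sigmoid: outputs an unbounded score

opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def train(real_loader, epochs):
    for _ in range(epochs):
        for i, real in enumerate(real_loader):
            # Critic update: maximise E[critic(real)] - E[critic(fake)].
            z = torch.randn(real.size(0), LATENT_DIM)
            fake = generator(z).detach()
            loss_c = -(critic(real).mean() - critic(fake).mean())
            opt_c.zero_grad()
            loss_c.backward()
            opt_c.step()
            for p in critic.parameters():          # weight clipping
                p.data.clamp_(-CLIP_VALUE, CLIP_VALUE)
            # Generator update every N_CRITIC batches.
            if i % N_CRITIC == 0:
                z = torch.randn(real.size(0), LATENT_DIM)
                loss_g = -critic(generator(z)).mean()
                opt_g.zero_grad()
                loss_g.backward()
                opt_g.step()
```

In the study, an outer optimisation loop over width and depth was wrapped around a training routine of this kind, with snapshots of the generator exported at regular intervals for later evaluation.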
This was done by training multiple NN with varying hyperparameter sets (especially for width and depth), which were changed iteratively in a looped approach. Therefore the parameter sets to be tested were defined before the training and then subsequently tested. To train the MLA, a mini batch gradient descent procedure was utilised. In this procedure, the training data set is split into many, randomly-generated sub sets. The algorithm processes each mini batch separately and compares the cost function in relation to the current mini batch data. Subsequently, the network parameters are updated accordingly. This is done iterating over all mini batches, until the whole data set has been evaluated. This complete process is called an epoch. For the training in this study, a batch size of 64 was used, which is in line with batch sizes recommended in the literature [28,29]. Other hyperparameters usually describe the learning rate of both the generator network and the discriminator network. However, the learning rate was automatically adapted, since an optimisation algorithm called RMSProp was used for the back propagation. This optimisation approach requires a low learning rate which is inherent to WGAN networks. Thus the influence of these parameters is negligible. Two important parameters are the clipping value and the dropout. Both variables represent values that are important in the NN to avoid overfitting and underfitting [30,31]. When the framework is defined, the training of the NN begins. Since the NN changes slightly after every epoch, it is important to create intermediate snapshots of the NN, as well as the output data in the form of a CSV file after a defined number of epochs, 200 in the case of this study. Additionally, only a defined number of epochs should be trained, since MLA are prone to overfitting in higher epochs. After the training of all the different NN for multiple parameter sets is complete, the best fit needs to be found. To do so, a script was written, which creates a KDE of every microstructural parameter taken into account for the MLA training. Additionally, the KDE of every taken snapshot are created of the same parameters, since this is the output of the generator network. Subsequently the fit between the input KDE and synthetic data KDE are evaluated. This is done by investigating three types of deviation between the KDE of the real and synthetic data: The mean deviation, maximum deviation, and mean value of both mean and maximum deviation. All three are returned and can be checked by the user. Ideally, a NN snapshot can be chosen that shows the smallest deviation for all three values in comparison to the rest of the NN. MLA Results After the training of the NN is completed, the network with the best results can be chosen. For this, the deviation of the KDE are utilised as described before. Since they show a significant development over the epochs, the best fit is the best averaged value, where most KDE curves fit the input KDE very well. The comparison of the best fit at epoch 15,400 with the input data, as well as former epochs can be seen in Figure 6. Here, a significant improvement of the fit is visible, where epoch 200 is an unoptimised guess, while the network improves over time with the training and is able to represent the distributions of the input parameters at the best fit epoch accurately. From the trained generator, network data can then be extracted, which is usable as input for e.g., RVE creation. 
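The snapshot-selection step described above, comparing input and synthetic KDEs by their mean and maximum deviation, could look roughly like the following Python sketch; the evaluation grid and the way snapshots are passed in are assumptions and this is not the original script.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_deviation(real, synthetic, n_grid=200):
    """Mean and maximum absolute deviation between two KDE curves."""
    lo = min(real.min(), synthetic.min())
    hi = max(real.max(), synthetic.max())
    grid = np.linspace(lo, hi, n_grid)
    kde_real = gaussian_kde(real)(grid)
    kde_syn = gaussian_kde(synthetic)(grid)
    diff = np.abs(kde_real - kde_syn)
    return diff.mean(), diff.max()

def best_snapshot(real_params, snapshots):
    """real_params / snapshots: dicts of 1D arrays per parameter (area, AR, slope)."""
    scores = {}
    for epoch, syn_params in snapshots.items():
        devs = [kde_deviation(real_params[p], syn_params[p]) for p in real_params]
        mean_dev = np.mean([d[0] for d in devs])
        max_dev = np.mean([d[1] for d in devs])
        scores[epoch] = (mean_dev, max_dev, (mean_dev + max_dev) / 2)
    # Choose the epoch whose averaged deviation is smallest.
    return min(scores, key=lambda e: scores[e][2]), scores
```

A selection routine of this kind returns the mean deviation, the maximum deviation and their average for every exported snapshot, so that the epoch with the overall best fit can be picked, as described in the text.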
The output of the NN follows the same format as the input. This means that every parameter contained in the input data will be present in the output created by the MLA. Since the focus is the creation of input data that geometrically represents a grain (Figure 2), the main features to compare are area, aspect ratio, and slope. In Figure 7, pairplots of both the input data and the synthetically generated output data are presented. On the diagonal, the KDE distributions are shown, while the other graphs show the respective parameters plotted against each other. From these pictures, the agreement between real and synthetic grain data appears to be excellent. Figure 6. Development of the output of the neural networks (NN) over multiple epochs, compared to the input data. Epoch 15,400 is the best fit for the applied input values. As mentioned before, the best fit between the MLA output and the input data is determined by a comparison of the KDE for each investigated parameter. The course of this error margin between input and output over the epochs is depicted in Figure 8. For this figure, the mean deviation between the input and output KDE was calculated. The deviation was calculated in percent according to the following formula, where x is the input data KDE and x̂ is the artificial output data KDE. Based on this analysis, the lowest deviation between the KDE can be determined. In this case, the best fit was determined to be epoch 15,400, which is the lowest point in the curve, with an error margin of about 5%. Validation of the MLA Results Checking the KDE error of the individual plots alone does not sufficiently validate whether the NN can fully describe the microstructure. Since the most important task of the machine learning algorithm was to represent the interdependencies of the microstructural parameters, a study was conducted to investigate whether the applied algorithm is capable of handling interdependencies between input data. Therefore, the MLA was trained on a specifically created data set with four parameters, defined by Equations (1)-(4), where x was generated from a uniform distribution. The other functions were arbitrarily chosen, with the criterion being that they all need to be mutually dependent. In this case, Equation (2) is dependent on Equation (1), while (3) and (4) are dependent on both (1) and (2). Thus all values were interconnected, and the implemented MLA was trained on this data set. The results presented in Figure 9 show that the MLA is able to reproduce the input data very accurately. Both the distribution of each specific equation and the interdependencies show no significant deviation. It can therefore be assumed that the implemented MLA is capable of representing any interdependencies present in the input data and, thus, that the dependencies of the microstructure are represented by the MLA. To further investigate whether the dependencies between the microstructural parameters were accurately reproduced, a clustering analysis was performed on the microstructure data. For this analysis, the area data were divided into three evenly sized batches that were subsequently plotted as before. To see whether the dependencies were correctly reproduced, each of the batches was given a different colour, allowing individual batches to be tracked. This clustering analysis is pictured in Figure 10. In this figure, the orange and green points are of special interest.
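The validation idea described above, training on a synthetic data set whose four parameters are mutually dependent, can be sketched as follows. Since the original Equations (1)-(4) are not reproduced in this excerpt, the dependent functions used here are placeholders chosen only to illustrate the construction, not the equations actually used.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3000                                   # roughly the number of grains used for training

x1 = rng.uniform(0.0, 1.0, n)              # Eq. (1): uniform base parameter
x2 = 2.0 * x1 + rng.normal(0, 0.05, n)     # Eq. (2): depends on (1)          (placeholder form)
x3 = x1 * x2 + rng.normal(0, 0.05, n)      # Eq. (3): depends on (1) and (2)  (placeholder form)
x4 = np.sin(x1) + x2 ** 2                  # Eq. (4): depends on (1) and (2)  (placeholder form)

validation_set = np.column_stack([x1, x2, x3, x4])
# A WGAN trained on `validation_set` should reproduce both the marginal
# distributions of the four columns and the couplings between them.
```

If the trained generator reproduces both the marginals and the pairwise couplings of such a data set, this supports the claim that the network can also capture the interdependencies of the real microstructural parameters.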
The KDE distributions fit very accurately, which was to be expected after the very good fit of the complete KDE functions before. A comparison of the AR-Slope chart shows that each of the orange and green points (which stand for grains with larger areas) is in a very similar position, but not the same. Most of the relevant points are at smaller aspect ratios and positioned near the 0 • or 180 • . No outliers are found in this analysis, which leads to the conclusion that the interdependencies of the input data can be accurately portrayed. (a) Clustering analysis of the input data. (b) Clustering analysis of the synthetically created data. Figure 10. Comparison between the clustering analysis of the input data and the generated output data. The area was divided into three equally sized sections for this analysis. Conclusions This study presented a solution on how to generate input for microstructure modelling that is true to the real microstructure. Commonly, singular distribution functions are applied to describe the input for the microstructure model. However this study showed that an approach like that is not suitable to describe any given microstructure in detail. Since there are at least three relevant parameters to each grain for each direction in space (area, aspect ratio, and slope) all of these have to be represented accurately by statistic descriptions. The three aforementioned parameters are, in fact, all dependent on each other, where a relatively large grain tends to have smaller aspect ratios as well as a slope more parallel to the rolling direction. To solve this challenge, this study applied a WGAN machine learning network to generate input data that represents all of the interdependencies. The results show that the NN learned the distribution functions of the singular parameters very well. The generated output resembled the input quite accurately, while also representing the dependencies between the parameters. To validate these findings, a study was carried out to see whether the implemented NN was able to recreate numerical relations between the different parameters. It was found that the WGAN algorithm was capable of recreating the equations. To validate the results regarding the microstructural features, a clustering analysis showed that the output data resembled the input data very accurately. The error of MLN when comparing input and output KDE oscillated quite strongly. In Figure 8, an initial significant improvement could be observed after which the curve started to spike around a median value. It could therefore be assumed that the training required for this network could be significantly shortened, since a comparably good result was achieved after about 9000 Epochs. Thus, the time needed for the training of each network could be reduced if the amount of Epochs for a full training were lowered. The implemented network in this study was trained on just one direction in space (rolling direction x sheet normal). To precisely describe any microstructure, however, it is necessary to show the influences of all three directions. Here the approach can be expanded. However, it is not an easy feat to generate statistically relevant information of the three dimensional structure from two dimensional pictures. A possible solution would be to simply train three networks, the question that remains unanswered is how the individual parameters of the three spatial directions relate. Answering this question will be a key focus in future work. 
In comparison to other machine learning applications, this network was trained on a relatively small sample size. This was done deliberately, since in everyday research it is not always feasible to create a magnitude of EBSD pictures. The results from this study show therefore that the applied concept works very accurately even on small sample sizes, where bigger data sets could only improve the quality of the output. Thus, this approach is suitable to be implemented even for small sample sizes and projects with smaller amounts of time and resources dedicated. However, while the WGAN model can be trained on relatively small data sets, it is important to use enough data. If a very low number of grains (e.g., 200) is used as the input, the resulting data is no longer statistically relevant to the target microstructure. Thus, a sample set has to be applied that is representative of the real microstructure. While the error of the MLA was remarkably low at 5%, there were some bigger deviations when comparing the input and output KDE. To improve this deviation, multiple approaches are possible. First of all the precision of the ML model increased with the addition of more input data. Therefore, a more exact representation of the microstructure could be achieved if the input data was increased likewise. Another option could be to apply a different type of optimisation, where the Wasserstein loss is used as the comparison of real to synthetic data. This approach appears to be promising and is currently developed. For this study, DP steel was chosen as the input microstructure. Since the manufacturing process for DP steels incorporates multiple steps, like hot rolling, cold rolling, and coating, they often show a complex microstructure where even the ferrite phase has multiple characteristics that need to be considered to accurately represent the material. However, the focus of this study was put on ferrite, since the description of martensite cannot be achieved by simple EBSD analysis, as mentioned above. Additionally, for the present material, martensite exists in bands as well as singular islands. A differentiation between those two is necessary for a thorough analysis and an automated band detection algorithm is currently being worked on. To apply the synthetically generated microstructure, an RVE generation algorithm is needed, like the one developed by Henrich et al. [12]. The combination of these two approaches with a complete characterization of the martensite phase will lead to a very close representation of the actual microstructure. Additionally, an extension of the applied ML model is quite easily done, as it only requires adding more columns to the input of the ML model. Thus, the crystallographic orientations of the microstructure can be incorporated and represented in the resulting RVE. From that microstructure model, mechanical properties of the phases can be concluded and in depth studies of the response of the different phases to mechanical loading can be undertaken.
7,676.2
2020-06-05T00:00:00.000
[ "Computer Science" ]
Graphene-based absorber exploiting guided mode resonances in one-dimensional gratings A one-dimensional dielectric grating, based on a simple geometry, is proposed and investigated to enhance light absorption in a monolayer graphene exploiting guided mode resonances. Numerical findings reveal that the optimized configuration is able to absorb up to 60% of the impinging light at normal incidence for both TE and TM polarizations, resulting in a theoretical enhancement factor of about 26 with respect to the monolayer graphene absorption (≈2.3%). Experimental results confirm this behaviour, showing CVD graphene absorbance peaks up to about 40% over narrow bands of a few nanometers. The simple and flexible design paves the way for the realization of innovative, scalable and easy-to-fabricate graphene-based optical absorbers. Introduction Graphene is a single atomic layer of graphite that consists of very tightly bonded carbon atoms organised into a hexagonal lattice [1]. Graphene shows an sp² configuration that leads to a total thickness of about 0.34 nm. This two-dimensional nature is responsible for the exceptional electrical, mechanical and optical properties shown by this material. In particular, it has been theoretically and experimentally demonstrated that the absorption of a monolayer graphene does not depend on the material parameters but only on fundamental constants, since it is equal to πα (where α = e²/ℏc is the fine structure constant), which corresponds to about 2.3% over the visible (VIS) range [2]. Moreover, the absorption of multiple graphene layers is proportional to the number N of added layers [2]. This important property has been exploited in different configurations in order to realize efficient graphene-based photo-detectors [3][4] and modulators [5][6]. At the same time, even though this constant value is very high when compared with other bulk materials, the absorption of monolayer graphene can be boosted and enhanced in different spectral ranges by exploiting different technologies and approaches in both linear and nonlinear regimes. In particular, over the last few years, several solutions have been proposed that incorporate the monolayer graphene in configurations operating in the visible and near-infrared (NIR) ranges and exploit attenuated total reflectance (ATR) [7] or resonant configurations such as one-dimensional (1D) periodic structures [8], two-dimensional photonic crystal cavities [9] and multilayer dielectric Bragg mirrors [10]. In this framework, in our previous work [11], we have shown how it is possible to achieve near-perfect absorption in a one-dimensional Photonic Crystal (PhC) that incorporates a graphene monolayer in the defect. In this paper, we propose and investigate a one-dimensional dielectric grating that exploits guided mode resonances (GMRs) [17] to enhance light absorption in the monolayer graphene. Guided mode resonances define optical modes with complex wavenumber, typically leaky modes, which are strongly confined in the 1D grating. Forced excitation [18] of these lattice modes may be triggered by phase-matching to incident plane waves; thus, the interaction between these discrete modes and the out-of-plane radiation continuum gives rise to narrowband and asymmetric spectral features, also known as Fano resonances [19][20]. In this scenario, we will analyze absorption enhancement when the monolayer graphene is inserted in a lossless dielectric grating.
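As a quick numerical check of the universal absorption value quoted above, the following Python snippet evaluates πα and the corresponding N-layer estimate; it is only an illustration of the stated relation.

```python
import math

alpha = 7.2973525693e-3        # fine structure constant (CODATA value)
A1 = math.pi * alpha           # universal single-layer absorbance
print(f"monolayer absorption: {A1:.4f} ({100 * A1:.2f} %)")   # ~0.0229 -> ~2.29 %

for N in (1, 2, 3):            # absorption of N layers scales roughly as N * pi * alpha
    print(N, f"{100 * N * A1:.2f} %")
```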
In particular, we will numerically investigate the dependence of the absorption on the geometrical parameters under plane-wave excitation at normal incidence. Finally, we will detail the fabrication process and the experimental results related to the optical characterization of the device. Figure 1 shows the sketch of the proposed 1D dielectric-grating-based absorber made of polymethyl-methacrylate (PMMA) stripes deposited on a tantalum pentoxide (Ta2O5) slab that is supported by a silicon dioxide (SiO2) substrate. The monolayer graphene is sandwiched between the polymeric layer and the Ta2O5 slab, forcing it to interact with the guided mode resonances. Finally, the Ta2O5 slab thickness t_Ta2O5, the periodicity p, the PMMA width w_PMMA and the PMMA thickness t_PMMA are initially set to 100 nm, 470 nm, 235 nm (w_PMMA = 0.6p) and 650 nm, respectively. Numerical results The one-dimensional grating has been simulated in the visible-near infrared (VIS-NIR) range, and the dispersion of the different dielectric media has been experimentally measured by means of an ellipsometric technique. It is worth stressing that these values almost coincide with the models based on the Sellmeier equation retrieved from the data reported in [21]. Furthermore, we found negligible losses for the Ta2O5 slab and PMMA layer (i.e., the extinction coefficients are equal to 0); hence the device without the graphene will be considered lossless hereinafter. Finally, the monolayer graphene has been modeled using the experimental fit reported in [22]. This model does not take into account the doping effect. In this respect, it is possible to refine the model following the Kubo formulation as proposed in [23], which considers the chemical potential, the temperature and the scattering time. However, in our range of interest the variation of the complex refractive index model is negligible (a few percentage points) when the doping, i.e., the chemical potential, is varied. On the contrary, the model based on the Kubo formulation is essential for describing the graphene optical properties in the infrared and terahertz regimes. The spectral response of the configuration has been investigated for both TE and TM polarized incident plane waves. Figure 2 compares reflectance, transmittance and absorbance spectra without and with the inclusion of the monolayer graphene, respectively. In particular, the device displays two asymmetric guided mode resonances located at about 743.9 nm and 712.6 nm for TE and TM polarization, respectively, with a full-width at half-maximum of a few nanometers. The asymmetry is evident in the transmittance and reflectance curves, while the absorption shows an almost symmetric response. The spectral position virtually satisfies the phase matching condition [18]: n_eff = n_0 sin(θ_inc) - m λ_0/p (1), where n_eff corresponds to the effective refractive index of the mode in the slab, n_0 is the refractive index of the cover medium, θ_inc is the angle of incidence of the impinging source, m is the diffraction order, λ_0 is the free-space wavelength and p is the period of the grating. At normal incidence (i.e. θ_inc = 0) in air (n_0 = 1) and for the first diffraction order (m = -1), Equation (1) reduces to n_eff = λ_0/p
and, hence, the free-space wavelength at which the phase matching condition occurs is equal to λ_0 = n_eff p. For the configuration reported in Figure 2, the effective refractive index for TE and TM polarization is equal to about 1.582 and 1.511, respectively, leading to guided mode resonances located at about 743.5 nm and 710.2 nm. Therefore, the wavelength shift between the two polarizations (about 30 nm) is due to the different effective refractive indices of the two modes. The absorbance maps agree with the results reported in [15], which explain that an initial thickness is sufficient to excite the guided mode resonance; thus, when the thickness is increased, the modal configuration is unaffected and, hence, the optical response does not change in a noticeable way. Similar considerations hold for the reflectance spectra (Figures 4(c)-(d)). In conclusion, these maps indicate that the maximum attainable absorption for this configuration is about 60%, corresponding to an enhancement factor of about 26 with respect to the monolayer graphene absorption (≈2.3%). Experimental results In order to verify the numerical findings, the device under examination has been fabricated. In particular, a 100 nm-thick Ta2O5 slab was grown on a SiO2 substrate by means of an RF sputtering system. The sample was treated with oxygen plasma in order to increase the wettability and improve the adhesion properties. Then, a monolayer graphene, grown via the Chemical Vapour Deposition (CVD) technique, was manually transferred onto the Ta2O5 slab. The 1D grating was realized in two steps: firstly, a 650 nm-thick PMMA layer was spin-coated onto the sample, and then the PMMA layer was exposed by means of an electron beam lithography system (Raith150) operating at 20 kV. Finally, the sample was developed in a methyl isobutyl ketone - isopropyl alcohol (MIBK-IPA) mixture and rinsed in an IPA bath. It is worth pointing out that we set the PMMA thickness equal to about 650 nm since this thickness corresponds to a maximum in the reflectance maps for both polarizations, as shown in the reflectance maps (Figures 4(c)-(d)). The fabricated device was optically characterized at normal incidence by means of an optical setup consisting of a white-light lamp, filtered in the 600 nm-900 nm range, focused on the sample by means of a low numerical aperture, infinity-corrected microscope objective (5X, NA=0.15) [24]. The reflected light was collected by an aspherical fiber lens collimator and filtered by a linear polarizer. An opaque metallic stage was used to support the sample and avoid collecting unwanted light. The filtered light was sent to an optical spectrometer (HR4000 from Ocean Optics) through a multimode optical fiber. The reflectance spectrum normalization was carried out by using a flat silicon surface as the spectrum reference. At the same time, the transmittance spectra were collected through a perforated stage allowing the light to reach another aspherical fiber lens collimator. In this case, the Ta2O5 substrate sample was used as the reference. Conclusion The optical characterization revealed an excellent agreement between the experimental data and the theoretical results, confirming the ability of this graphene-based absorber to achieve an enhancement factor of about 19 (15) for TE (TM) polarization.
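As an illustration of the reduced phase-matching relation discussed earlier in this section, the following snippet recovers the quoted resonance wavelengths from the stated effective indices and the 470 nm period; it is only a numerical restatement of the values given in the text.

```python
period_nm = 470.0
n_eff = {"TE": 1.582, "TM": 1.511}

for pol, n in n_eff.items():
    # At normal incidence with m = -1, the condition reduces to lambda_0 = n_eff * p.
    lam = n * period_nm
    print(f"{pol}: resonance near {lam:.1f} nm")
# TE: ~743.5 nm, TM: ~710.2 nm, consistent with the simulated peaks.
```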
It is worth stressing that we have presented a 1D grating but the same idea can be extended to 2D arrays of square dielectric patches in a similar fashion as reported in [24]: in this way it could be possible to realize devices that are less sensitive to the incident light polarization. In conclusion, we have presented an approach to achieve enhanced absorption in graphene exploiting guided mode resonances. A fair comparison between our previous work reported in [11] and the proposed device allows us to draw some conclusions: the defective photonic-crystal configuration can achieve perfect absorption while the guided-mode resonance approach shows a maximum absorption of about 60%. However, the latter is based on a very simple geometry and shows low sensitivity to the geometrical parameters such as the PMMA thickness, hence significant robustness in terms of fabrication tolerances. Additionally, the proposed device requires polymeric stripes that could be easily realized by low cost and scalable fabrication technologies such as the nano-imprinting lithography. The 1D-PhC also requires low intrinsic dielectric losses due to the high number of layers in the dielectric stack while, here, losses depend only on two dielectric slabs. Therefore, this device could be efficiently exploited as building blocks for innovative optical absorbers or photo-detectors in combination with active materials (e.g. silicon photonics based devices). Finally, the proposed solution appears also interesting in order to enhance the exceptional nonlinear third harmonic and the saturable responses of the monolayer graphene [25][26] in a similar fashion reported in Ref. [11,27].
2,532.6
2014-10-08T00:00:00.000
[ "Materials Science", "Physics" ]
Identifying deer antler uhrf1 proliferation and s100a10 mineralization genes using comparative RNA-seq Background Deer antlers are bony structures that re-grow at very high rates, making them an attractive model for studying rapid bone regeneration. Methods To identify the genes that are involved in this fast pace of bone growth, an in vitro RNA-seq model that paralleled the sharp differences in bone growth between deer antlers and humans was established. Subsequently, RNA-seq (> 60 million reads per library) was used to compare transcriptomic profiles. Uniquely expressed deer antler proliferation as well as mineralization genes were identified via a combination of differential gene expression and subtraction analysis. Thereafter, the physiological relevance as well as contributions of these identified genes were determined by immunofluorescence, gene overexpression, and gene knockdown studies. Results Cell characterization studies showed that in vitro-cultured deer antler-derived reserve mesenchyme (RM) cells exhibited high osteogenic capabilities and cell surface markers similar to in vivo counterparts. Under identical culture conditions, deer antler RM cells proliferated faster (8.6–11.7-fold increase in cell numbers) and exhibited increased osteogenic differentiation (17.4-fold increase in calcium mineralization) compared to human mesenchymal stem cells (hMSCs), paralleling in vivo conditions. Comparative RNA-seq identified 40 and 91 previously unknown and uniquely expressed fallow deer (FD) proliferation and mineralization genes, respectively, including uhrf1 and s100a10. Immunofluorescence studies showed that uhrf1 and s100a10 were expressed in regenerating deer antlers while gene overexpression and gene knockdown studies demonstrated the proliferation contributions of uhrf1 and mineralization capabilities of s100a10. Conclusion Using a simple, in vitro comparative RNA-seq approach, novel genes pertinent to fast bony antler regeneration were identified and their proliferative/osteogenic function was verified via gene overexpression, knockdown, and immunostaining. This combinatorial approach may be applicable to discover unique gene contributions between any two organisms for a given phenomenon-of-interest. Electronic supplementary material The online version of this article (10.1186/s13287-018-1027-6) contains supplementary material, which is available to authorized users. Characterization of FD RM cells and hMSCs for cell proliferation studies. The proliferative capacity of FD RM cells and hMSCs were determined by cell counting studies and cell cycle analysis. For cell counting studies, cells were seeded into 48-well plates at a density of 0.26 x 10 4 cells/cm 2 overnight (Day 0). Three different media formulation were used -1) DMEM, 10 % FBS, 1 % P/S, 2) Mesenchymal Stem Cell Growth Media and 3) Mesenchymal Stem Cell Growth Media supplemented with 10 ng/mL fibroblast growth factor-2 (FGF-2; Peprotech, Rocky Hill, NJ). Media were changed every 48 h. At 2, 4 and 6 days, cells were counted using an automated cell counter (Beckman Coulter Z2 Particle Counter, Beckman Coulter, Brea, CA). Cell doubling times were calculated using R-studio (R Studio, Boston, MA, http://www.rstudio.com) by visually determining the exponential phase of growth, and plotting the log of the cell counts against time to determine the slope (ln(2)/slope = doubling time) for each sample. For cell cycle analysis, cells were seeded into T75 tissue culture flasks at a density of 0.27 x 10 4 cells/cm 2 overnight. 
The following day (Day 0), media were changed to DMEM, 10 % FBS, 1 % P/S. After 4 days, cells were dissociated with 0.05 % trypsin, resuspended in PBS, pelleted by centrifugation (210 g at 4 °C for 5 min for FD RM cells and 500 g at 25 °C for 5 min for hMSCs), fixed in 4 % paraformaldehyde for 10 min, pelleted by centrifugation (210 g at 4 °C for 5 min for FD RM cells and 500 g at 25 °C for 5 min for hMSCs) and stored in PBS at 4 °C until analysis. On the day of analysis, cells were pelleted by centrifugation (210 g at 4 °C for 5 min for FD RM cells and 500 g at 25 °C for 5 min for hMSCs), permeabilized with 0.1 % Triton X-100 in PBS (Sigma Aldrich, St. Louis, MO) for 15 min and stained using propidium iodide/RNAse solution (Cell Signaling Technology, Danvers, MA) for 30 min. Analyses were performed on a BD Aria II flow cytometer and data were analyzed using Flowjo 9.7.5. For each cell cycle distribution, data were fitted to a Watson-Pragmatic model. Characterization of FD cells and hMSCs for cell differentiation studies. The mesenchyme lineage commitment of FD cells and hMSCs were determined by several cell differentiation assays. The ability of FD cells to differentiate into adipocytes and chondrocytes was confirmed by Oil Red O staining and Alcian Blue staining, respectively. The ability of FD cells to differentiate into osteoblasts was confirmed by osteogenic gene expression, ALP staining and Alizarin Red S staining whereas the ability of hMSCs to differentiate into osteoblasts was confirmed by Alizarin Red S staining. For chondrogenic studies, cells were seeded into 24-well plates at a density of 8 x 10 4 cells/5 μL drops to generate micromass cultures. After 2 h (Day 0), media were changed to StemPro Chondrogenic Basal Media (Gibco, Thermo Fisher Scientific, Waltham, MA), 10 % FBS, 1 % P/S (Control media) or StemPro Chondrogenic Differentiation Kit Media (Gibco, Thermo Fisher Scientific, Waltham, MA; Chondrogenic media). Media were changed every 72 h. After 24 days, cells were washed with PBS, fixed with 10 % neutral buffered formalin for 30 min, stained with 1 % Alcian Blue (in 1N HCl; Electron Microscopy Sciences, Hatfield, PA) for 30 min, washed three times with distilled water and air-dried. Brightfield images were acquired using an inverted Zeiss AxioObserver Z1 microscope equipped with an Axiocam ICC 1 color camera. For osteogenic studies involving ALP staining, cells were seeded into 24-well plates at a density of 1.57 x 10 4 cells/cm 2 overnight. The following day (Day 0), media were changed to DMEM, 10 % FBS, 1 % P/S (Without BMP-2) or DMEM, 10 % FBS, 1 % P/S, 100 ng/mL BMP-2 (With BMP-2). Media were changed every 48 h. After 6 days, cells were fixed for 1 min in 3.7 % formaldehyde. ALP activity (Kit 86C, Sigma Aldrich, St. Louis, MO) was detected according to the manufacturer's instructions. Brightfield images were acquired using an inverted Zeiss AxioObserver Z1 microscope equipped with an Axiocam ICC 1 color camera as well as a Nikon digital SLR camera (Nikon Digital Camera D70, Nikon Corp., Japan). Where necessary, the average pixel intensity was determined using the image histogram tool in Adobe Photoshop (Adobe Systems, San Jose, CA, http://www.adobe.com) as previously described (2,3). Construction of FD RM cell and hMSC cDNA libraries for RNA-seq. RNA-seq studies were performed for FD RM cells (Isolate 2) and hMSCs (Isolate 24268) under proliferation and mineralization conditions to identify proliferation and mineralization genes, respectively. 
For proliferation studies, both FD RM cells and hMSCs were seeded at a density of 1.26 x 10 4 cells/cm 2 overnight and media conditions included 1) DMEM, 0 % FBS, 1 % P/S (0 % serum) and 2) DMEM, 10 % FBS, 1 % P/S (10 % serum). In parallel, another set of cells was seeded at similar densities in 48-well plates to monitor cell growth daily using an automated cell counter (For 6 days). Media were changed every 48 h. After 2.5 days (when cells were observed to be in the exponential phase of growth), RNA was harvested. For each condition, two replicate RNA samples were isolated. Cells were dissociated with 0.25 % trypsin, the RNA harvested (Qiagen RNeasy Plus Mini kit, Qiagen, Germany) and reverse-transcribed into cDNA (Ovation RNA-seq System V2 kit, NuGEN, San Carlos, CA) for RNA-seq library construction. First, the remainder of the cDNA was sheared (S2 focused-ultrasonicator, Covaris, Woburn, MA). Following this, end repair of the fragmented cDNA, dA-tailing of the end-repaired cDNA, adaptor ligation of dA-tailed cDNA (using custom primers), and PCR enrichment of adaptorligated cDNA were performed for 6 cycles to prepare the cDNA library for RNA-Seq (NEBNext DNA Library Prep Master Mix Set for Illumina, New England Biolabs, Ipswich, MA). Human and FD proliferation samples were sequenced to approximately 87,276,798 reads per library while human and FD mineralization samples were sequenced to approximately 74,904,447 reads per library ( Supplementary Table 1 and 2). RNA-seq and bioinformatics analysis. After cDNA library preparation, samples were sequenced using 100 base-pair, paired-end RNA-seq technology (HiSeq 2000, Illumina, San Diego, CA) and data were analyzed using several bioinformatics software (4,5). The raw sequencing data were concatenated as necessary and the adaptor sequences were removed from the reads to prepare them for analysis. In order to analyze the data, all reads were aligned to the appropriate genome using Spliced Transcripts Alignment to a Reference (STAR; Version 2.3.0, https://code.google.com/p/rna-star/) software (4). Both FD RM proliferation and mineralization reads were aligned to the bovine (Bos taurus) genome (bosTau7 from UCSC Genome Browser, indexed using STAR), as Bos taurus is the closest relative to the fallow deer whose genome has been sequenced (at the time of analysis). Since the human genome is readily available, both hMSC proliferation and mineralization reads were aligned to the human (Homo sapiens) genome (preindexed hg19 released by STAR authors). After converting the STAR output to the appropriate file type and sorting the files using SAMtools (Version 0.1.19 http://www.htslib.org/), the Cufflinks package (Version 2.1.1.1, https://github.com/cole-trapnell-lab/cufflinks) was used to assemble the gene transcripts and to determine differentially-expressed genes between control and treatment conditions (5). Initial attempts at running the Cufflinks package on both FD RM cell and hMSC datasets resulted in errors. These errors originated from Cufflinks' treatment of the soft-clipped regions of aligned reads from the STAR output for some scaffolds (Alexander Dobin, personal communications). When these error-causing soft-clipped regions were removed from both FD and human datasets, the Cufflinks package ran with no errors. The output from Cufflinks was subsequently processed in R-Studio using the cummerbund package (http://compbio.mit.edu/cummeRbund/) to visualize RNA-seq data as well as data quality. 
Differentially-expressed genes between control and treatment groups were identified based on cut-off values for probability (p ≤ 0.05) and false discovery rate (q ≤ 0.05). Microsoft Excel (Microsoft Corp., Redmond, WA; http://www.microsoftstore.com/), Ingenuity Pathway Analysis (Qiagen, Germany; http://www.ingenuity.com/products/ipa) and Gene Ontology Enrichment Analysis (http://geneontology.org/page/go-enrichment-analysis) were used to compare and analyze differentially-expressed genes in FD RM cell and hMSC datasets. Genes-of-interest for subsequent cloning were identified based on two arbitrary criteria -1) Differentially-expressed genes that exhibited more than 5-fold upregulation in control versus treatment conditions and 2) Genes that were uniquely-expressed in the FD RM cell dataset (i.e. not differentially-expressed in the hMSC dataset). Cloning of FD genes. Uniquely-expressed, FD genes were PCR-cloned, ligated into a plasmid, transformed into bacteria and purified for subsequent overexpression studies in mammalian cells. To perform PCR-based cloning, primers were designed to isolate the gene-of-interest from the cDNA of FD RM cells based on the sequences of homologous genes in the bovine genome (National Center for Biotechnology Information, http://www.ncbi.nlm.nih.gov/). These primers included complementary regions 15 -20 base pairs into the 5' and 3' ends of the bovine genes, and contained enzyme restriction sites necessary for eventual insertion into the pVitro2-MCS-Blast plasmid (InvivoGen, San Diego, CA), which contains 2 multiple cloning sites (MCS), a blasticidin resistance gene as well as necessary elements for bacterial and mammalian expression. Each gene was checked to ensure that the selected enzyme restriction sites were not present within the coding sequence of the published mRNA sequence. PCR (Platinum Blue PCR SuperMix, Invitrogen, Thermo Fisher Scientific, Waltham, MA) was performed according to the manufacturer's instructions. PCR products were purified (Wizard SV Gel and PCR Clean-up System, Promega, Sunnyvale, CA) and successful cloning was confirmed via gel electrophoresis and DNA sequencing (ElimBio, Hayward, CA). To construct the plasmid containing the gene-of-interest, purified PCR products were digested using the appropriate restriction enzymes (New England Biolabs, Ipswich, MA), ligated into pVitro2-MCS-Blast plasmid according to the manufacturer's instructions (NEB Quick Ligation Kit, New England Biolabs, Ipswich, MA). Successful construction of the plasmid containing the gene-of-interest was confirmed by DNA sequencing. To obtain large quantities of DNA for subsequent overexpression studies, plasmids were transformed into chemically competent DH5α E. coli (One Shot MAX Efficiency DH5α-T1 Competent Cells, Invitrogen, Thermo Fisher Scientific, Waltham, MA) and purified. Bacterial cultures were amplified on LB-agar-blasticidin (Invivogen, San Diego, CA) plates in a 5 % CO2 incubator at 37 °C overnight. Successful colonies were grown in TB-blasticidin liquid media at 37 °C, shaking at 225 -250 rpm overnight, before isolating the amplified plasmid DNA (Purelink Quick Plasmid Miniprep Kit or Purelink HiPure Plasmid Midiprep Kit, Invitrogen, Thermo Fisher Scientific, Waltham, MA). Several plasmids including pVitro2-MCS-Blast (empty plasmid) as well as pVitro2-mRuby2-Blast (6), a plasmid encoding the fluorescent protein mRuby2 (7), were similarly obtained to serve as appropriate transfection controls. 
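Referring back to the selection criteria described above (p ≤ 0.05, q ≤ 0.05, more than 5-fold change, and expression unique to the fallow deer data set), a filtering step of that kind might look like the sketch below. The file names and column names are assumptions for illustration rather than the actual Cuffdiff output format, and matching genes by name across the two species assumes shared gene symbols for homologous genes.

```python
import pandas as pd

# Hypothetical differential-expression tables (one per species), with assumed
# columns: 'gene', 'fold_change', 'p_value', 'q_value'.
fd = pd.read_csv("fd_rm_diff_expression.csv")
hs = pd.read_csv("hmsc_diff_expression.csv")

def significant(df, min_fold=5.0, alpha=0.05):
    """Genes upregulated more than min_fold with p and q below alpha."""
    return df[(df["p_value"] <= alpha)
              & (df["q_value"] <= alpha)
              & (df["fold_change"] >= min_fold)]

fd_sig = significant(fd)
hs_sig = significant(hs)

# Subtraction step: keep genes significant in fallow deer but not in the human data set.
fd_unique = fd_sig[~fd_sig["gene"].isin(hs_sig["gene"])]
print(len(fd_unique), "uniquely expressed FD candidate genes")
```

A subtraction of this kind is what yields the short lists of fallow-deer-specific proliferation and mineralization candidates (such as uhrf1 and s100a10) that are then carried forward to cloning and functional studies.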
Confirmation of gene overexpression using non-competitive, semi-quantitative PCR. To confirm gene overexpression, non-competitive, semi-quantitative PCR was employed. RNA was harvested from untransfected cells, cells stably transfected with the empty pVitro2-MCS-Blast plasmid and cells stably transfected with the pVitro2-MCS-Blast plasmids containing the gene-of-interest (Qiagen RNeasy Plus Mini kit, Qiagen, Germany) under appropriate culture conditions. The RNA samples were reverse-transcribed into cDNA according to the manufacturer's instructions. PCR was performed on the cDNA templates for 22 -35 cycles with primers used to originally clone the geneof-interest as well as Bovine gapdh as a reference gene for normalization. PCR-amplified DNA was separated by on a 1.0 % agarose gel (Sigma Aldrich, St. Louis, MO) at 80V for 1 -1.5 h (Bio-Rad Laboratories Inc., Hercules, CA). Images of gel electrophoresis samples were analyzed using the image histogram tool in Adobe Photoshop as previously described (2,3). Proliferation capability of identified FD gene(s). The function of FD gene(s)-of-interest in cell proliferation was measured in stably-transfected C3H10T1/2 cells using cell counting studies and in FD RM cells using gene knockdown studies. For cell counting studies, cells were seeded into 48-well plates at a density of 0.26 x 10 4 cells/cm 2 overnight (Day 0) in DMEM, 10 % FBS, 1 % P/S. Media were changed every 48 h. Cells were counted daily using an automated cell counter for 5 days. Cell doubling times were calculated using R-studio (R Studio, Boston, MA, http://www.rstudio.com) by visually determining the exponential phase of growth, and plotting the log of the cell counts against time to determine the slope (ln(2)/slope = doubling time) for each sample. For gene knockdown studies, cells were seeded into 48-well plates at a density of 0.26 x 10 4 cells/cm 2 overnight in DMEM, 10 % FBS, 1 % P/S. The following day (Day 0), cells were transfected with 30 nM uhrf1 siRNA A and E (Custom-designed based on bovine uhrf1 sequence, Santa Cruz Biotechnology Inc., Dallas, TX) according to the manufacturer's instructions (Polyplus, France) for 72 h. No media change was performed. Confirmation of siRNA-mediated uhrf1 knockdown was determined using non-competitive, semi-quantitative PCR and gel electrophoresis. At 0 and 3 days, cells were counted using an automated cell counter. For ALP staining, cells were seeded into 24-well plates at a density of 1.57 x 10 4 cells/cm 2 overnight. The following day (Day 0), media were changed to DMEM, 10 % FBS, 1 % P/S (Without BMP-2) or DMEM, 10 % FBS, 1 % P/S, 100 ng/mL BMP-2 (With BMP-2). Media were changed every 48 h. After 4 -12 days, cells were fixed for 1 min in 3.7 % formaldehyde. ALP activity was detected according to the manufacturer's instructions. Brightfield images were acquired using an inverted Zeiss AxioObserver Z1 microscope equipped with an Axiocam ICC 1 color camera. Where necessary, the average pixel intensity was determined using the image histogram tool in Adobe Photoshop as previously described (2,3). For immunofluorescence staining, cells were fixed in 4 % paraformaldehyde, washed 3 times in PBS, permeabilized with 0.2 % Triton X-100 for 10 min and washed 3 times in PBS. Following this, antibody staining was performed. 
Cells were incubated in 10 % donkey serum for 20 min followed by incubation with 10 μg/mL rabbit anti-UHRF1 (Sc98704, Santa Cruz Biotechnology Inc., Dallas, TX) or 1 μg/mL mouse anti-S100A10 (Ab89438, Abcam Inc., Cambridge, MA) primary antibody overnight at 4 °C. The following day, cells were washed 3 times in wash buffer (5 min each), incubated in 15 μg/mL donkey anti-rabbit Alexa 647 (711-605-152, Jackson Immunoresearch, West Gove, PA) or 15 μg/mL donkey anti-mouse Alexa 647 (715-605-150, Jackson Immunoresearch, West Gove, PA) secondary antibody for 1 h at 25 °C and washed 5 times in wash buffer (5 min each). Fluorescence images were acquired using an inverted Zeiss AxioObserver Z1 microscope equipped with an X-Cite® Series 120Q metal halide lamp, appropriate filters and an AxioCam MRm camera. Where necessary, the average pixel intensity was determined using the image histogram tool in Adobe Photoshop as previously described (2,3). Histological and immunofluorescence staining of antler tissue. To examine antler tissue regeneration and ascertain in vivo physiological relevance of in vitro comparative RNA-seq results, deer antler tissue were harvested from an independent fallow deer herd at another local deer ranch (Walking Beam Ranch, Santa Paula, CA) in accordance with the guidelines established by Stanford University's Administrative Panel on Laboratory Animal Care and subjected to histological and immunofluorescence staining. These deer were approximately 2 -3 years old and antler tissues were harvested during early stages of antler regeneration (4 -7 inches in height) as described previously. Tissue samples were fixed in 10 % formalin and stored in 70 % ethanol. Tissue samples were subjected to histological processing via a graded ethanol dehydration series at 4 °C (70 % ethanol overnight, 85 % ethanol overnight, 95 % ethanol overnight and 100 % ethanol overnight) followed by xylene infiltration at 25 °C (50 % xylene in ethanol for 1 h, 50 % xylene in ethanol overnight, 100 % xylene for 1 h and 100 % xylene overnight) and then paraffin infiltration at 60 °C (50 % paraffin in xylene for 2 h, 100 % paraffin for 2 h, 3 washes of 100 % paraffin for 20 min each, 100 % paraffin overnight and 100 % paraffin for 20 min). Subsequently, tissue samples were embedded in paraffin blocks and sectioned at 4 -8 μm intervals using a Leica rotary microtome (RM 2255, Leica Biosystems Inc., Buffalo Grove, IL). Prior to staining, tissue sections were deparaffinized (3 washes of 100 % xylene for 3 min each) and rehydrated (50 % ethanol in xylene for 3 min, 2 washes of 100 % ethanol for 3 min each, 2 washes of 95 % ethanol for 3 min each, 2 washes of 70 % ethanol for 3 min each and PBS for 3 min). For histological staining, tissue sections were stained with Alcian Blue or Alizarin Red S. To stain for cartilage, tissue samples were incubated with 1 % Alcian Blue (in 1N HCl) for 30 min, washed three times with distilled water, counter-stained with Neutral Red for 10 min, washed three times with distilled water, dehydrated in ethanol and mounted. Brightfield images were acquired using an inverted Zeiss AxioObserver Z1 microscope equipped with an Axiocam ICC 1 color camera. To stain for mineralized bone, thick (1 -3 mm) tissue samples were incubated with 2 % Alizarin Red S for 30 min and washed five times with distilled water. Thick tissue samples were imaged using a Nikon digital SLR camera. Statistical analysis. Statistical analyses involving RNA-seq were performed by Cufflinks and R-Studio (5). 
Statistical significance for differentially-expressed genes was established at p ≤ 0.05 and q ≤ 0.05. Statistical analyses not involving RNA-seq were performed using IBM SPSS Statistics for Windows 22 (IBM Corp., North Castle, NY, http://www.ibm.com). These experiments were performed with at least 3 replicates per condition. Sample sizes were estimated to detect a group mean difference of 50 % ± 1 to 2 standard deviations with a power (1 -β) of 0.8 and α = 0.05 (http://powerandsamplesize.com/Calculators/Compare-k-Means/1-Way-ANOVA-Pairwise). Quantitative data was presented as means ± standard error of mean (mean ± SEM) where appropriate. Relative fold changes for PCR data were log transformed in order to make the data distribution more symmetrical since gene expression data are often log normally distributed (8). To determine whether data were normally-distributed and whether there was equality of variances among groups, p values were computed via the Shapiro-Wilk test and the Levene test, respectively. For two mean comparisons, p values were computed via the t-test. If there was equal variance between groups, p values were calculated using pooled variance. Otherwise, p values were calculated using separate variance. For more than two mean comparisons, p values were computed via Analysis of Variance (ANOVA). If majority of data (two-thirds or more) were normally-distributed or there was equal variance among groups, p values were calculated using ANOVA followed by Tukey's Honest Significant Difference post-hoc multiple comparison test. This approach was based on the robustness of ANOVA under conditions of non-normality and heterogeneity of variance (9, 10). Otherwise, p values were calculated using Welch's ANOVA followed by Games-Howell post-hoc multiple comparison test. This approach enables improved control of Type I errors and greater power under conditions of non-normality and heterogeneity of variance (11). Statistical significance was established at p ≤ 0.05. Supplementary Figures S1-S7 Supplementary Figure S1. Overview of in vitro comparative RNA-seq. The colored lines indicate sites where deer (FP, facial periosteum; PP, pedicle periosteum; RM, reserve mesenchyme) and human (hMSC, human mesenchymal stem cells) skeletal progenitor cells were harvested. Isolated cells were subjected to RNA-seq under proliferation and mineralization conditions independently and the resulting datasets were compared to identify genes that were highly-expressed (> 5-fold increase) and unique to deer. Subsequently, the physiological relevance and contribution of these genes to antler regeneration were determined. Table S1. Data quality of FD RM cell (Isolate 2) RNA-Seq samples.
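The test-selection logic in the Statistical analysis paragraph above (Shapiro-Wilk and Levene screening, then either ANOVA with Tukey's HSD or Welch's ANOVA with Games-Howell) can be written as a short decision procedure. The sketch below uses SciPy and statsmodels for the tests they provide and the pingouin package for Welch's ANOVA and Games-Howell; the choice of pingouin and the long-format column names are assumptions, since the original analysis was performed in SPSS.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import pingouin as pg

def compare_groups(df: pd.DataFrame, value: str = "value", group: str = "group") -> None:
    """Pick the post-hoc strategy described in the Methods (alpha = 0.05)."""
    groups = [g[value].to_numpy() for _, g in df.groupby(group)]

    # Normality per group (Shapiro-Wilk) and equality of variances (Levene).
    normal = [stats.shapiro(g).pvalue > 0.05 for g in groups]
    equal_var = stats.levene(*groups).pvalue > 0.05
    mostly_normal = sum(normal) >= (2 / 3) * len(groups)

    if mostly_normal or equal_var:
        # Classical one-way ANOVA followed by Tukey's HSD.
        print("ANOVA p =", stats.f_oneway(*groups).pvalue)
        print(pairwise_tukeyhsd(df[value], df[group]))
    else:
        # Welch's ANOVA followed by Games-Howell.
        print(pg.welch_anova(data=df, dv=value, between=group))
        print(pg.pairwise_gameshowell(data=df, dv=value, between=group))
```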
5,000.8
2018-10-31T00:00:00.000
[ "Biology" ]
Giant Magnetic Band Gap in the Rashba-Split Surface State of Vanadium-Doped BiTeI: A Combined Photoemission and Ab Initio Study One of the most promising platforms for spintronics and topological quantum computation is the two-dimensional electron gas (2DEG) with strong spin-orbit interaction and out-of-plane ferromagnetism. In proximity to an s-wave superconductor, such 2DEG may be driven into a topologically non-trivial superconducting phase, predicted to support zero-energy Majorana fermion modes. Using angle-resolved photoemission spectroscopy and ab initio calculations, we study the 2DEG at the surface of the vanadium-doped polar semiconductor with a giant Rashba-type splitting, BiTeI. We show that the vanadium-induced magnetization in the 2DEG breaks time-reversal symmetry, lifting Kramers degeneracy of the Rashba-split surface state at the Brillouin zone center via formation of a huge gap of about 90 meV. As a result, the constant energy contour inside the gap consists of only one circle with spin-momentum locking. These findings reveal a great potential of the magnetically-doped semiconductors with a giant Rashba-type splitting for realization of novel states of matter. symmetry (TRS) whereby an exchange gap is opened at the crossing point of the two Rashba-split bands, creating a single-circle Fermi contour. The states with opposite spins at opposite momenta of the contour can be paired via the proximity to an s-wave superconductor, which would gap the spectrum globally. This situation can lead to a topological superconductor phase, expected to host exotic quasiparticles, the Majorana fermions 14 . Their incarnation in a solid state device would significantly advance quantum computation and, having the suitable experimental setups theoretically proposed 6,[15][16][17] , several studies have been performed in an effort to probe them [18][19][20][21] . As is seen from the foregoing, the design of a playground for the Majorana fermions realization involves choices of the Rashba semiconductor and the way of its magnetic functionalization. In the experiments performed recently [18][19][20] , the latter is done using an external magnetic field rather than the magnetic proximity effect. However, for application purposes, an external field needs to be substituted by an intrinsic one 22 . This can also be achieved by a magnetic doping of a Rashba semiconductor. In fact, this approach has proven successful in the realization of the quantum anomalous Hall state. It has been observed in magnetically-doped topological insulators 23 -systems where an out-of-plane magnetization breaks the TRS and thus opens a gap in the topological surface state 24,25,26 . On the other side of the system's design problem is the choice of the Rashba semiconductor. To date, the highest known Rashba-type splitting has been reported for the polar semiconductor BiTeI [27][28][29][30][31] . The figure-of-merit of the Rashba splitting, E R , being the energy separation between the crossing point and parabolic minimum, reaches 100 meV in this system. Combination of this extraordinary Rashba splitting with a sizable exchange gap would ideally satisfy the requirements suggested for the appearance of the Majorana modes 6 . Here we show that such a state can indeed be achieved in magnetically-doped BiTeI. 
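For orientation, the figure of merit E_R mentioned above follows from the standard ungapped Rashba dispersion of a 2DEG; these are textbook relations rather than equations reproduced from the paper itself.

```latex
% Ungapped Rashba dispersion of the surface 2DEG and its figure of merit
% (standard relations, quoted here only for reference).
E_{\pm}(k) = \frac{\hbar^{2}k^{2}}{2m^{*}} \pm \alpha_{\mathrm{R}}\,|k|,
\qquad
E_{\mathrm{R}} = \frac{m^{*}\alpha_{\mathrm{R}}^{2}}{2\hbar^{2}},
\qquad
k_{\mathrm{R}} = \frac{m^{*}\alpha_{\mathrm{R}}}{\hbar^{2}} .
```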
Using angle-resolved photoemission spectroscopy (ARPES) we show that the vanadium doping of BiTeI lifts the Kramers degeneracy of its Rashba-split Te-terminated surface state at the Brillouin zone (BZ) center, resulting in the formation of a gigantic exchange gap of approximately 90 meV. The evanescence of the gap with temperature indicates its largely magnetic origin, which, as is suggested by our density functional theory calculations, may stem from the interplay of the V magnetic moments with those of point defects. The BiTeI system consists of triple layer (TL) blocks, -Te-Bi-I-, stacked along the hexagonal axis with a van der Waals bonding acting between them. The crystal structure of a single, V-doped BiTeI TL block is schematically shown in Fig. 1a, V atoms being incorporated in the Bi layer. As is depicted in the figure, at high temperature their magnetic moments are expected to be disordered and the TRS preserved. The dispersion relation for the V-doped BiTeI 2DEG state at the Te-terminated surface is in this case described by two electron-like parabolas, split in a Rashba manner (Fig. 1b), pretty much as in the undoped BiTeI case. Consistent with this picture, the ARPES map for Bi0.985V0.015TeI measured at 300 K reveals an intensity maximum at the Γ-point (i.e., k∥ = 0 Å⁻¹) (Fig. 1c). Assuming an onset of a magnetically ordered state at low temperatures, one may expect the behavior of the Bi0.985V0.015TeI 2DEG state to change significantly upon cooling down the sample. For instance, if the V magnetic moments were ordered ferromagnetically within the topmost TL and the direction of the moments was perpendicular to the surface (see Fig. 1d), the TRS would be broken and the Kramers degeneracy at the Γ-point would be lifted. This behavior can be described by a Hamiltonian that includes the Bychkov-Rashba and exchange terms, $H(\mathbf{k}) = \frac{\hbar^2 k^2}{2m^*} + \alpha_R(\sigma_x k_y - \sigma_y k_x) + \lambda m_z \sigma_z$, where m*, α_R = 3.8 eVÅ, and λ are the effective electron mass, the Rashba coefficient for the undoped BiTeI 27, and the strength of the exchange interaction, respectively, while m_z is the magnetization provided by the V atoms. The resulting dispersion relation is then given by $E_\pm(k) = \frac{\hbar^2 k^2}{2m^*} \pm \sqrt{\alpha_R^2 k^2 + \Delta^2}$, with Δ = λm_z defining the halfwidth of the local gap, which appears between the inner and outer parts of the Rashba-split state, as shown in Fig. 1e. It is precisely this scenario that develops at the Bi0.985V0.015TeI surface after cooling down to 20 K, as is evidenced by the ARPES measurements reported in Fig. 1f, where the appearance of a giant gap of about 90 meV is clearly seen at the Γ-point. Raw ARPES spectra taken for different emission angles at 20 K are shown in Fig. 2a. Far from the Γ-point, an undisturbed Rashba-like behavior is observed. However, at the Γ-point the spectrum is seen to contain two peaks, corresponding to the exchange gap edges. Based on the comparative analysis of the fitted energy and momentum distribution curves we estimate the gap size as 90 ± 10 meV. It should be noted that, as we show in the Supplementary Information, the intensity within the gap is non-zero owing to the line width tails of the broadened intensity peaks. The full three-dimensional dispersion relation E(kx, ky) is shown in Fig. 2b, where the second derivative of N(E) is presented. Although the second derivative data cannot be used as evidence of the gap formation, the overall behavior of the bands is visualized better in this case.
Indeed, one can clearly see that within the gap the constant energy contour consists of only one circle, without any feature at/around the Γ-point. Thus, the results of our ARPES measurements unambiguously point at largely magnetic origin of the Kramers degeneracy lifting, which takes place only at low temperatures. This implies that, with a decrease of temperature, Bi 0.985 V 0.015 TeI experiences a transition from the TRS-preserving paramagnetic state to the magnetically-ordered state breaking it. To determine magnetic ordering of the V-doped BiTeI we have computed the exchange coupling parameters and magnon spectra using density functional theory (see Supplementary Information). Our calculations show that although each TL of the V-doped BiTeI is ferromagnetic on its own, the ordering between the adjacent TLs rapidly disappears with the increase of temperature. This, however, does not impede formation of the magnetically ordered state at the surface since the topmost TL is ferromagnetic. These results indicate that it is the topmost FM triple layer that is responsible for the TRS breaking and giant exchange gap opening at the V-doped BiTeI surface. In order to get deeper insight into the giant exchange gap formation we have performed first-principles surface electronic structure calculations. Since only the magnetism of the topmost TL is essential, we only introduce the dopant into the subsurface Bi layer (see Methods section), which allows us to perform calculations in the low concentration limit. Figure 3a shows the Rashba-split surface state of the Te-terminated undoped BiTeI(0001). It is characterized by a slightly smaller α R than is typically reported in DFT studies 30,32 , which is a consequence of a big in-plane supercell choice. However, this is not expected to affect the results and conclusions of the present work (see Supplementary Information). Then, we study the effect of vanadium doping on the Te-terminated BiTeI(0001) surface band structure. One might indeed expect the breakdown of the TRS and, consequently, lifting of the surface state degeneracy at the Γ-point since, being embedded in the Bi layer, a V atom features a magnetic moment of 2.88 μ B . Note, that the gapping of the surface states as a consequence of the TRS breaking often assumes induction of the spin polarization on the atoms of a nonmagnetic host by different magnetic agents 33,34 . In the case of V-doped BiTeI, the magnetic moments are mainly induced on Te and I atoms, which are located in the immediate vicinity of the V one and have their p-states hybridized with the d-states of the latter. This leads to a lifting of the Rashba-split surface state degeneracy at the 2D BZ center, i.e. to an opening of the exchange gap (Fig. 3b). Surprisingly, the magnitude of the calculated gap of 10 meV turns out to be significantly smaller than that of the measured one. This means that other factors must be contributing to the formation of the giant exchange gap in the Rashba-split surface state. One of these might be a presence of point defects in the V-doped BiTeI. As a recent scanning tunneling microscopy study shows, there is a large number of point defects seen at both the Te-and I-terminated (0001) surface of the pure BiTeI 35 . These defects were suggested to be either vacancies or antisites. We suppose that V-doped BiTeI demonstrates a similar situation as long as the dopant concentration is small, which is the case for our samples having rather low V content (0.5 at.%). 
Therefore, assuming the coexistence of V atoms with point defects, we have separately considered the magnetism of all possible antisites as well as the vacancies in each atomic layer, placed in the topmost Te-terminated TL of the surface of V-doped BiTeI. It turned out that among all the point defects considered, only a Bi vacancy directly induces the host magnetization (mainly on Te atoms) beyond that caused by the V moments. Moreover, even in the absence of V it can supply up to 1.55 μB per vacancy, a case that is 7 meV more favorable than the one with a suppressed magnetization around the vacant site. Our exchange coupling constant calculations show that the magnetic moments appearing on the Te atoms that surround Bi vacancies tend to align antiparallel to those of V atoms, rather insensitively to the dopant-vacancy distance (the coupling strength is distance-dependent, though). The Te atoms directly bound to the V one are magnetized antiparallel to its local moment as well. Thus, both V atoms and Bi vacancies lead to the enhancement of the overall magnetization of the Te layer. It should be noted, however, that the Bi vacancies do not create any magnetic order in the absence of vanadium. On the other hand, their presence somewhat increases the vanadium local moments and enhances the strength of the exchange interaction between them, slightly elevating the Curie temperature. Importantly, V doping can facilitate the appearance of Bi vacancies, since their formation energy gets reduced by approximately 0.23 eV in the dopant vicinity. To see how the additional magnetization induced by the presence of point defects influences the magnitude of the exchange gap, we have performed band structure calculations with both V and Bi vac located in the subsurface atomic layer of BiTeI(0001). However, the results obtained do not allow us to unambiguously determine the size of the exchange gap since its edges are broadened due to the Bi vac presence (see Supplementary Information). Therefore, to scrutinize the effect of the coexistence of the V dopants and point defects, we resort to a tight-binding approach (see Methods section), where instead of including defects directly, we rather take into account the magnetization they supply. This has been done by introducing additional Zeeman-like contributions to the Hamiltonian, whose values have been set to the exchange splittings of the Bi, Te and I p-states calculated ab initio in the presence of both the V dopant and the Bi vacancy. The validity of the approach employed has been verified against the DFT result shown in Fig. 3b, and the tight-binding description has yielded a gap of similar magnitude (11 meV). Figure 3c shows the result of a tight-binding calculation, performed for the case of the V-doped BiTeI in the presence of Bi vac. It can be seen that the exchange gap size increases appreciably upon accounting for the additional magnetization due to the vacant sites and amounts to 34 meV. This result shows the crucial role that point defects can play in the V-doped BiTeI. Note that in real samples the magnetic interplay of point defects and V dopants is probably more intricate than theoretically exemplified here. Indeed, in the ab initio calculations we have only considered simple point defects. For example, a Bi vacancy has been created by a complete removal of one Bi atom from the cell, and not by its relocation to an interstitial or a surface.
Similarly, an antisite atom A has been introduced in a layer B (A B ) without leaving behind a vacancy in the layer A or creating the B A antisite which would correspond to the site exchange between atoms A and B. However, in the experimental situation the defects may be complex, as, e.g., a vacancy-antisite pair, proposed for the undoped BiTeI(0001) surface 35 . Moreover it is not excluded that other complex defects, which have not been observed for the undoped BiTeI case are enabled to form due to V doping. Therefore, further specific and detailed studies, such as scanning tunneling microscopy measurements and simulations, combined with careful theoretical characterization of the observed dopant-defect complexes, are required to uncover missing factors contributing to the formation of the giant exchange gap in the Rashba-split surface state. There is, however, a fundamental reason why the calculated exchange gap size is smaller than the experimentally measured one. The stationary density functional theory is not designed to describe excited state properties and typically underestimates the band gap sizes by 30-70% of the experimental ones 36 . Lately, the issue of the Kramers degeneracy lifting observed for the surface states of different bulk-doped systems is under intense discussion 9, 24, 37-39 . Noteworthy, a 2DEG Rashba-split state with an exchange gap of 90 meV has recently been observed at the SrTiO 3 surface 38 , however the origin of the strong Rashba-like splitting in this system is not clear yet 40 . In the case of impurity-doped topological insulators, the Kramers point splitting was experimentally observed 9, 24 , even at room temperature 37,39 . The explanations involving appearance of the superparamagnetic clusters 37 or a strong resonant scattering due to the dopant in-gap states 39 have been put forward. The latter case is particularly interesting since it may lead to the surface state gapping without magnetism if a sufficiently strong potential is created by the impurities 41 . However, this effect would not depend on temperature and, besides, such a perturbation would gap the surface state not only in the Γ-point, but at finite k as well. The results of our ARPES measurements exclude such a scenario, since the gap is seen only in the Γ-point and disappears with the increase of temperature. Another contribution may possibly come from the photoemission process during ARPES measurements of Bi 0.985 V 0.015 TeI. Photoexcitation of electrons in V-doped BiTeI by ultraviolet light can induce the spin accumulation at the surface owing to giant spin-orbit interaction. Similar effects had been observed in BiTeI in Refs 31, 42 by laser excitation. Spin accumulation acts like effective magnetic field, and thus, can result in enhancement of the value of the exchange gap at the Γ point. Conclusions In summary, by means of ARPES we have shown that the giant-Rashba-split electronic state residing at the Te-terminated surface of the vanadium-doped polar semiconductor BiTeI exhibits a huge band gap at the Brillouin zone center. Our photoemission measurements, complemented by detailed theoretical calculations, allow to conclude that the origin of this gap largely lies in the time-reversal symmetry breaking, that stems from the magnetically ordered state, onsetting at the vanadium-doped BiTeI surface at low temperature. 
With this finding, magnetically doped BiTeI poses itself as a promising material to be combined with s-wave superconductors with the objective of Majorana fermion formation. Methods Single crystals of BiTeI + 0.5% V were grown using 99.999% pure powders of Bi, Te, I and V by the Bridgman method. ARPES experiments were carried out at Helmholtz-Zentrum Berlin (BESSYII) at beamlines UE112-SGM with the assistance of a Scienta R4000 energy analyzer using synchrotron radiation with the energy of 23 eV. This photon energy allows to obtain the photoelectrons mainly from the surface layers and to avoid the resonant effects appearing due to the Bi 5d levels excitation. The overall energy resolution for the ARPES measurements was 5 meV. Samples were cleaved insitu at the base pressure of 6 × 10 −11 mbar. In order to check the contamination of the surface, the cooling down to 20 K had been done both before and after the cleavages. Part of the experiments were performed out in the Research Resource Center "Physical methods of surface investigation" of Saint Petersburg State University. The crystalline order and cleanliness of the surface were verified by low energy electron diffraction (LEED) and X-ray photoelectron spectroscopy (XPS). Ab initio calculations were performed using the density functional theory in the generalized gradient approximation to the exchange-correlation potential 43 . The localized V 3d-states were described in the framework of the GGA + U approach 44 , with an effective U − J value of 3 eV (Dudarev's scheme 45 ). Surface electronic structure calculations were performed within the projector augmented-wave method 46 in the VASP implementation 47,48 . The Hamiltonian contained the scalar relativistic corrections and the spin-orbit coupling was taken into account by the second variation method 49 . In order describe the van der Waals interactions we made use of DFT-D2 approach proposed by Grimme 50 . We set the energy cutoff for the plane-wave expansion of wave functions to be equal to 250 eV and chose a Γ-centered k-point grid of 3 × 3 × 1 to sample the two-dimensional Brillouin zone. The (0001) surface of V-doped BiTeI was simulated by a 3TL-thick slab and (3 × 3) hexagonal in-plane supercell with 9 atoms per single layer. Vanadium atom and/or Bi vacancy were placed in the surface TL, which is terminated by Te atoms. The doping of the subsurface TL wasn't considered since the Rashba-split state at Te-terminated BiTeI(0001) resides predominantly in the topmost TL block. Thus the V concentration in our calculations was of about 1.23%. Upon introduction of V atom and Bi vacancy, a structural optimization was performed using a conjugate-gradient algorithm and a force tolerance criterion for convergence of 0.05 eV/Å. Vanadium's magnetic moment was directed out-of-plane, i.e. perpendicular to the surface. As far as the iodine surface is concerned, it was passivated by placing a hydrogen atom on top of each iodine one. All calculations were performed using a model of repeating slabs separated by a vacuum gap of a minimum of 10 Å. Magnetic ordering of the V-doped BiTeI was studied using the Korringa-Kohn-Rostoker method 51, 52 within a full potential approximation to the crystal potential 53 . Final concentration of the V atoms and Bi vacancies was accounted for using the coherent potential approach as it is implemented within the multiple scattering theory 54 . 
We took an angular momentum cutoff of l_max = 3 for the Green's function and a k-point mesh of 50 × 50 × 50 (50 × 50 × 1) for the 3D (2D) Brillouin zone integration. Ab-initio-based tight-binding calculations were performed using the VASP package with the WANNIER90 interface 55,56 . The Wannier basis chosen consists of six spinor p-type orbitals |p_x↑⟩, |p_y↑⟩, |p_z↑⟩, |p_x↓⟩, |p_y↓⟩, |p_z↓⟩ for each atom, while the low-lying s orbitals were not taken into consideration. The band-bending potential was obtained from previous DFT calculations 57 . A Zeeman term in Eq. (2) was set to the exchange splitting of the Bi, Te and I p-states, as calculated within a scalar-relativistic approach. The surface band structure was calculated for the semi-infinite system within the Green function approach 58, 59 .
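As a compact numerical cross-check of the two-band Rashba-plus-exchange model used above, the Python sketch below builds the 2×2 Hamiltonian and confirms that the splitting at the Γ-point equals 2Δ. Here α_R = 3.8 eVÅ is the value quoted for undoped BiTeI and Δ = 45 meV is chosen so that 2Δ matches the measured ~90 meV gap; the effective mass of 0.2 m_e is an illustrative assumption only, not a value taken from the paper.

```python
import numpy as np

HBAR2_OVER_2ME = 3.81  # eV·Å², value of ħ²/(2 m_e)

def rashba_exchange_hamiltonian(kx, ky, alpha_r=3.8, delta=0.045, m_eff=0.2):
    """2x2 Bychkov-Rashba + out-of-plane exchange Hamiltonian (energies in eV).

    H(k) = ħ²k²/(2m*) σ0 + α_R (σx ky − σy kx) + Δ σz,  with Δ = λ m_z.
    m_eff = 0.2 m_e is an illustrative assumption.
    """
    kinetic = (HBAR2_OVER_2ME / m_eff) * (kx**2 + ky**2)
    s0 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0])
    return kinetic * s0 + alpha_r * (sx * ky - sy * kx) + delta * sz

# Splitting at the Γ-point should equal 2Δ = 90 meV.
evals = np.linalg.eigvalsh(rashba_exchange_hamiltonian(0.0, 0.0))
print(f"gap at Γ = {1e3 * (evals[1] - evals[0]):.0f} meV")
```

Evaluating the same routine on a grid of (kx, ky) points reproduces the inner and outer branches of the gapped Rashba state discussed above.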
4,874.8
2017-06-13T00:00:00.000
[ "Physics" ]
The Mobile Version of the Predicted Energy Efficient Bee-Inspired Routing (PEEBR) In this paper, the previously proposed Predictive Energy Efficient Bee-inspired Routing (PEEBR) family of routing optimization algorithms based on the Artificial Bee Colony (ABC) Optimization model is extended from the static model employed by its first version (PEEBR-1) into a random mobility model in its second version (PEEBR-2). The random mobility model used by the PEEBR-2 algorithm is proposed and described. Then, PEEBR-2 was simulated in order to compare its performance with the first version (PEEBR-1) in terms of predicted optimal path energy consumption, nodes batteries residual power and fitness. The simulation results have shown that PEEBR-2's optimal path is predicted to consume less energy and to realize higher fitness. On the other hand, PEEBR-1's optimal path nodes possess higher battery residual power. Finally, the impact of mobile node speeds was studied for PEEBR-2 in terms of the optimal path's predicted energy consumption and path nodes batteries residual power, showing its performance stability relative to node mobility speed. Keywords—PEEBR; PEEBR-1; PEEBR-2; Energy Efficient Routing; Bee-inspired; Artificial Bee Colony (ABC) optimization; Random Mobility Model INTRODUCTION Swarm Intelligence (SI) is a computational intelligence approach, as described by Mayur Tokekar and Radhika D. Joshi (2011) in [7], that is based on the study of the collective behavior of social insects in decentralized, self-organized systems. SI involves a collective behavior of autonomous agents that locally interact with each other in a distributed environment to solve a given problem in the hope of finding a global solution to the problem, as defined by J. Wang et al. (2009) in [8]. Ant Colony Optimization (ACO), introduced by M. Dorigo et al. (2006) in [9], and Artificial Bee Colony (ABC) Optimization, by D. Karaboga and B. Basturk (2007) in [1][2][3], are among SI optimization techniques that are relatively more robust, reliable, and scalable than other conventional routing algorithms. The Artificial Bee Colony (ABC) Optimization model introduced by D. Karaboga and B. Basturk (2007) was a general purpose swarm intelligence (SI) optimization technique based on efficient labor employment and efficient energy consumption through a multi-agent distributed model. The Predictive Energy Efficient Bee-inspired Routing (PEEBR) was inspired by the honey bees' foraging behavior, based on the natural bees' food source search behavior that aims to discover, exploit and direct the swarm of working bees to the highest quality food source. It maps the ABC optimization model equations onto Mobile Ad-hoc wireless Network (MANET) routing parameters used to evaluate potential paths between a certain source and destination. Therefore, PEEBR could be considered a power-cost-efficient algorithm that uses two types of bee agents to collect information about every potential path from source to destination, then evaluate their efficiency and assign an equivalent fitness value and goodness ratio to each potential path based on the number of hops, the amount of energy to be consumed and the path nodes' residual battery energy.
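To make the path-scoring idea of this introduction concrete, the sketch below evaluates candidate routes by hop count, expected reception energy and residual battery power. Because expressions (1)-(5) of the paper are not legible in this extraction, the particular cost, fitness, selection-probability and battery-decay formulas used here are plausible stand-ins consistent with the surrounding prose, not the authors' exact equations.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Path:
    hops: int                # h(Rj): number of hops
    rx_energy: float         # Er(p): energy consumed per packet reception (J)
    battery: List[float]     # B(nji): residual battery power of each node (J)

def path_energy(p: Path) -> float:
    # One plausible reading of expression (1): reception energy summed over all hops.
    return p.hops * p.rx_energy

def path_cost(p: Path) -> float:
    # Plausible stand-in for expression (2): penalize energy use and hops, reward residual power.
    return path_energy(p) * p.hops / sum(p.battery)

def fitness_and_probability(paths: List[Path]):
    # Plausible stand-ins for expressions (3) and (4): fitness as inverse cost,
    # selection probability proportional to fitness (the "goodness ratio").
    fitness = [1.0 / path_cost(p) for p in paths]
    total = sum(fitness)
    return fitness, [f / total for f in fitness]

def decay_battery(b0: float, t: int, tau: float = 50.0) -> float:
    # Plausible stand-in for expression (5): exponential battery decay per iteration.
    return b0 * math.exp(-t / tau)
```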
The proposed Predictive Energy Efficient Bee-inspired Routing mobile version (PEEBR-2) is the second version of PEEBR considering nodes mobility while routing.The critical mobility parameter represented by the mobile nodes speed"s effect on average predicted energy consumption by the optimal path nodes and on the optimal path nodes battery residual power while routing using PEEBR-2 algorithm will be shown and analyzed in this chapter.This paper is organized as follows: the second section presents the Predictive Energy Efficient Bee-inspired Routing-1: PEEBR-1 algorithm and model.Then, PEEBR-1"s path selection mechanism is discussed by the third section.The fourth Section will propose the mobile version PEEBR-2 mobility model and algorithm.PEEBR-2"s simulation results are discussed by the fifth section and the paper"s contribution is summarized by the conclusion in the sixth section. II. PEEBR-1 ALGORITHM AND MODEL The proposed Predictive Energy-Efficient Bee Routing-1 (PEEBR-1), proposed by Imane M. A. Fahmy et al. 2012 [4] then improved and evaluated in [5][6], is assumed to be a reactive routing algorithm that enables a source node to discover the optimal path to a destination node based on the expected energy to be consumed during packets reception and the path nodes residual battery power. A. Mapping ABC Model onto PEEBR In the following table 1, the inspired ABC model"s elements are mapped to the PEEBR"s algorithm elements together with their optimization interpretation in order to clarify the inspired parts of the ABC model including: The fitness function and the probability associated with each potential path.www.ijarai.thesai.orgBy selecting the path that consumes less energy and hence reserve the nodes batteries along the path, this will certainly extend the network lifetime by extending the nodes batteries lifetime. B. PEEBR-1 Model In figure 1, there are three potential routing paths from the source node "A" towards the destination node "J": the first potential path R1: A, B, E, J, the second potential path R2: A, C, F, J, and finally, the third path R3: A, D, G, I, J. The selection of the optimal path among these three paths depends on the amount of energy expected to be consumed by the mobile nodes over that path during communication.This energy information will be collected by the forward bee agents during path discovery journey and includes mainly two essential metrics: the first is nodes battery power residual to determine their expected lifetime and hence their efficiency in relaying the data packets along the path.While the second is the total amount of energy expected to be consumed by all nodes along the path for each path represented by E(R j ) where j is the path index.It is noteworthy to mention that the first objective for routing optimization is minimizing the amount of energy consumed during the communication.In order to calculate the total amount of energy consumed by all the nodes along each path during reception is given by the general expression (1) that could be applied for the former example on three possible paths: R1, R2 and R3. 
) Where ( ) is the number of hops over a path and E r (p) is the amount of energy consumed during reception of a packet p.If ( )< ( )< ( ) then R1 is the routing path that achieves the least power consumption in case it meets all other threshold constraints such as: the nodes battery residual power P(n) that enables reliable data packets transmission during communication along the path and the then R1: A, B, E, J will be selected as the optimal path between the source node A and the destination node J. Therefore, the path goodness ratio g(R j ) could be a ratio assigned to each potential path from A to J reflecting the path quality combining: the total amount of energy expected to be consumed by the path"s nodes, the path nodes residual battery energy ( ) and the number of hops ( ). The second objective for routing optimization is maximizing the battery residual power of the selected path"s nodes B(R j ) In order to achieve both objectives: the least power consumption and selecting the path with the maximum nodes battery residual energy, these objectives could be combined in a cost function that aims to minimize energy consumption and maximize battery residual power as given by expression (2). Finally, the path fitness ( ) relative to all potential M paths is represented by the general expression (3): Each potential path between the source node and the destination node could be represented by a goodness ratio reflecting its energy consumption and its nodes battery residual energy.The probability of a path selection ( ) reflecting its goodness ratio could be computed by expression (4): In order to test PEEBR"s performance, it was run on T max =100 iterations.The nodes battery residual power ( ) was decreased after each iteration to reflect the real world"s battery power decay as given by expression (5): Where ( ) is the initial node battery residual power, t is the iteration number and τ is a time constant.Finally, PEEBR-1 termination conditions were: reaching the maximum number of iterations T max or a minimal predefined fitness value.The candidate path solution R c is the path with the highest goodness ratio deduced at the end of each iteration (It): R c = R o [It] www.ijarai.thesai.orgFinally the optimal path R o is the optimal path deduced before one of the stopping conditions (The maximum number of iterations T max or the Minimum Goodness Ratio MGR) is reached that is: The proposed predictive energy efficient model inspired by the BCO swarm intelligent model is an optimization model for the MANET. B. PEEBR-1 ALGORITHM PEEBR-1 algorithm for the optimal candidate path discovery process from source n s to the destination node n d could be summarized by algorithm 1. PEEBR-1(Net_Topology, n s , n d , MGR) //The Scout bee phase: 1. Source node n s, floods a "Scout packet" associated with a Time-To-Live TTL to all N j neighboring nodes on all M potential paths.2. Each "Scout" j flies over one of the M potential routes R j until it reaches destination node n d . If (TTL==0) //TTL packet expires over a neighboring path Then "Scout" bee agent packet will die indicating failure to reach destination to the source and the corresponding routing path will be avoided.4. When a bee agent reaches the destination node n d , it is sent back to its source n s through the same traveled route. 
5.The "Scout packet" collects the potential route"s routing information including: -Count number of hops h(R j ) -Collect each route nodes residual battery power B( nji ) where i=1 to N j nodes -Collect the amount of predicted receiving power ( ) //The Forager bee phase: 6.At the source node n s , the "forager" evaluation process starts by calculating the predicted amount of energy to be consumed for each "Backward Scout" discovered route using (1).www.ijarai.thesai.org 7.Each potential route cost ( ) is calculated for each route dependent on its hop count h(R j ), its nodes residual battery power B( nji ) and its expected amount of receiving power consumed ( ) using (2).8. Associate a fitness value reflecting the goodness of each route ( ) using (3).9. Assign each route a goodness ratio G( ) using (4) 10.If (G( ) < MGR) Then exit //stopping criteria for minimum goodness rati 11.Get the candidate path solution R c = R o [It] between n s and n d is the route with the maximum goodness as given by (5) Therefore, PEEBR-1 algorithm for the optimal candidate path discovery process from source n s to the destination node n d could be summarized by algorithm 2. Algorithm 2: The Iterative Predictive Energy-Efficient Bee Routing-1 (PEEBR-1) Optimization 1. Choose one of twenty randomly generated MANET topologies.//select Net_Topology 2. Generate randomly nodes batteries residual power from a pre-defined range.Update routes nodes batteries residual power for next iteration using (5-10) 9. End 10.Get the optimal route R o between n s and n d with the maximum goodness as given by (5-11) IV. THE PROPOSED PEEBR-2 MODEL AND ALGORITHM The proposed Predictive Energy Efficient Bee-inspired Routing (PEEBR-2) mobility model, flow chart and algorithm are described by the following sub-sections. A. The Proposed PEEBR-2 Mobility Model The random mobility model used by PEEBR-2 is based on MANET topologies variation.Each MANET topology Net (k) is formed according to expression (7): Where i is the mobile node ID on MANET topology k.Moreover, N is the number of nodes (e.g.N = 25 nodes in the studied mobility scenarios) and L is the number of random topologies generated for the mobility model (e.g.L = 20 random topologies). 
Then, the random topology variation from time interval to another could be computed as given by expression (8) as follows: The random mobility model used by PEEBR-2 is based on the mobile nodes positions random variation.Assuming that every mobile node is moving in a circle of the MANET area with radius r k with a random angle θ k , the initial nodes coordinates for the first MANET topology are given by equations ( 9) and (10): Where the random angle is within the range: θ k є [0,2π] and r k є [0, r max ] is the mobile node"s displacement distance limited by its maximum displacement distance r max .Then, the next random topologies, the new mobile nodes coordinates are deduced from equations ( 11) and ( 12): Therefore, the next MANET topology could be generated from the sum of the previous and its variation using expression (13): The pre-defined (constant or within a random range) mobile node"s displacement distance r in meters and the input mobility speed ν in meters/second are used to compute the time t in seconds for each random topology using equation ( 14): The mobility simulation parameters data are generated from mobility simulation software based on the former expressions.Then, each MANET mobility topology represents a time interval.Within the same time interval, PEEBR-2 runs for the pre-defined number of iterations.Afterwards, at end of each topology, the computed time t is used to update the potential paths nodes batteries residual power for the next time interval. B. PEEBR-2 Flow Chart and Algorithm The proposed Predictive Energy Efficient Bee-inspired Routing for mobility mode: PEEBR-2 for the MANET is based on applying the random mobility model.It could be summarized as shown by the flow chart in figure 3. Then PEEBR-2"s detailed algorithm will be discussed by algorithm (3).www.ijarai.thesai.orgThe experiment is based on optimal path selection between the arbitrary source and destination nodes for both protocols.PEEBR-2 is working on first random mobility scenario with 150 meters displacement distance (i.e., r = 150m). In the following sub-sections, PEEBR-1 is compared to PEEBR-2 for different MANET sizes (number of nodes) in order to evaluate the impact of mobility with number of nodes variation on PEEBR"s algorithm performance. A. Optimal Path Predicted Energy Consumption In figure 4, PEEBR-1"s and PEEBR-2 are compared in terms of optimal path predicted energy consumption (joules) for different MANET sizes. B. Optimal Path Nodes Batteries Residual Power In figure 5, PEEBR-1"s and PEEBR-2 are compared in terms of optimal path nodes batteries residual power (joules) for different MANET sizes.As shown by figure 5, PEEBR-1"s optimal path nodes batteries residual power is higher and outperforms PEEBR-2 for all different MANET sizes (7-nodes, 10-nodes, 15-nodes and 25-nodes) especially for medium size MANETs (15-nodes) since mobility results nodes batteries power exponential degradation. C. Optimal Path Fitness In figure 6, PEEBR-1"s and PEEBR-2 are compared in terms of optimal path fitness for different MANET sizes.Fig. 6.Impact of number of nodes on optimal path average fitness As deduced from figure 6, PEEBR-2"s optimal paths achieved higher optimal path fitness.The static PEEBR-1"s optimal path fitness is clearly lower since it is inversely proportional to the path cost.Hence, PEEBR-1 resulted in higher cost paths relative to the mobile PEEBR-2 for all different MANET sizes (7-nodes, 10-nodes, 15-nodes and 25nodes) especially for medium size MANETs (15-nodes). D. 
Impact of Mobile Nodes Speed Variation The impact of the mobile nodes speed variation is studied in terms of the optimal path energy consumption and batteries residual power. 1) Optimal Path Predicted Energy Consumption In figure 7, PEEBR-2"s predicted energy consumption firstly decreased with mobile nodes speeds from 10 to 20 m/sec, then, it maintained a stable consumption for speeds in the range from 30 to 50 m/sec. VI. CONCLUSION AND FUTURE WORK The previously proposed Predictive Energy Efficient Beeinspired Routing-1 (PEEBR-1) algorithm and flow chart were presented after model improvements.Then, the mobile version of the Predictive Energy Efficient Bee-inspired Routing-2 (PEEBR-2) algorithm and flow chart were introduced considering MANET nodes random mobility. Then, PEEBR-2 algorithm"s random mobility model was proposed.Its performance evaluation was performed using simulation in order to compare it with (PEEBR-1) in terms of the predicted optimal path predicted energy consumption, the optimal path nodes batteries residual power and the optimal path fitness. The simulation experiments showed that PEEBR-2 outperformed PEEBR-1 in terms of optimal path predicted energy consumption and fitness.However, PEEBR-1"s optimal paths attained higher batteries residual power. Finally, the impact of mobile nodes speeds was assessed by means of optimal path"s predicted energy consumption and path nodes batteries residual power revealing a stable performance. Fig. 1 . Fig. 1.An example of path discovery process using bee agents 3 . 4 . 5 . 6 . Define a Maximum Number of Iterations T max to run the following phases.Define a Minimum Goodness Ratio MGR threshold.Define a Time-To-Live TTL for the "Scout packet".Input source and destination nodes n s and n d 7.For (iteration e=1; e <= T max ; e++) PEEBR-1(Net_Topology, n s , n d , MGR) 8. Fig. 3 . Fig. 3. PEEBR-2 Iterative Optimization Algorithm Flow Chart Therefore, PEEBR-2 algorithm for the optimal candidate path discovery process from source n s to the destination node n d could be summarized by algorithm 3. Algorithm 3: The Predictive Energy Efficient Bee-inspired Routing-2: (PEEBR-2) PEEBR-2 (v, r k , θ k, n s , n d ) For (iteration It=1; It<= MNI; It++) //MNI is Maximum Number of Iterations -PEEBR-1(Net_Topology, n s , n d , MGR) -Deduce R C the candidate optimal path between n s and n d is the route with the maximum goodness as given by (5) -Calculate the topology mobility time t given the input mobile node displacement r k and speed ν using (14) -Update the potential routes nodes batteries residual power using (4) as function of the mobility time t.-Return the candidate path R c = R o [It] from n s to n d End EndHence, algorithm 4 presents PEEBR-2 iterative optimization algorithm for optimal path discovery process between source n s to the destination node n d . Fig. 4 . Fig. 4. Impact of number of nodes on optimal path average energy consumptionAs shown by figure4, the mobile PEEBR-2 outperforms the static PEEBR-1 for different MANET sizes from small to medium sizes since it results in less energy consumption. Fig. 5 . Fig. 5. Impact of number of nodes on optimal path average nodes batteries residual power Fig. 7 . Fig. 7. Impact of mobile nodes speed on optimal path predicted energy consumption2) Optimal Path Nodes Batteries Residual powerIn figure7, PEEBR-2"s optimal path nodes batteries residual power decreased with mobile nodes speeds from 10 to Fig. 8 . 
Fig. 8. Impact of mobile nodes speed on optimal path nodes batteries residual power.
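For completeness, the random mobility model of Section IV (each node is displaced by a random distance up to r_max at a random angle per topology, and the topology dwell time follows t = r/ν from equation (14)) can be simulated as below. Equations (7)-(13) are only partially legible in this extraction, so this is an approximate restatement; in particular, deriving the dwell time from the mean displacement is an assumption.

```python
import numpy as np

def next_topology(xy: np.ndarray, r_max: float, speed: float, rng=None):
    """One step of the random mobility model used by PEEBR-2.

    xy     : (N, 2) array of current node coordinates (m)
    r_max  : maximum displacement distance per topology (m)
    speed  : node mobility speed v (m/s)
    Returns the new coordinates and the dwell time t = r/v of the topology.
    """
    rng = rng or np.random.default_rng()
    n = xy.shape[0]
    theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random direction per node
    r = rng.uniform(0.0, r_max, n)             # random displacement, bounded by r_max
    delta = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    new_xy = xy + delta                        # new topology = previous + variation
    dwell = r.mean() / speed                   # t = r/v, here averaged over nodes (assumption)
    return new_xy, dwell
```

Within each generated topology, the PEEBR-1 path-discovery and evaluation step would be run for the predefined number of iterations before the node batteries are updated with the computed dwell time, as described in the flow chart of Fig. 3.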
4,278.4
2016-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
A novel MPPT design based on the seagull optimization algorithm for photovoltaic systems operating under partial shading The use of a maximum power point (MPP) tracking (MPPT) controller is required for photovoltaic (PV) systems to extract maximum power from PV panels. However, under partial shading conditions, the PV cells/panels do not receive uniform insolation, so several power maxima appear on the PV array's P–V characteristic: a global MPP (GMPP) and two or more local MPPs (LMPPs). In this scenario, conventional MPPT methods, including perturb and observe (P&O) and incremental conductance (INC), fail to differentiate between a GMPP and a LMPP, as they converge on the first MPP they encounter, which in most cases is one of the LMPPs. This results in considerable energy loss. To address this issue, this paper introduces a new MPPT method based on the Seagull Optimization Algorithm (SOA) to operate PV systems at the GMPP with high efficiency. The SOA is a new member of the family of bio-inspired algorithms. When compared to other evolutionary techniques, it uses fewer operators and modification parameters, which is advantageous when considering the rapid design process. In this paper, the SOA-based MPPT scheme is first proposed and then implemented for an 80 W PV system using the MATLAB/SIMULINK environment. The effectiveness of the SOA based MPPT method is verified by comparing its performance with P&O and PSO (particle swarm optimization) based MPPT methods under different shading scenarios. The results demonstrated that the SOA based MPPT method performs better in terms of tracking accuracy and efficiency. variable step to determine the ideal duty cycle value, resulting in a quicker time response and greater stability under varied operating circumstances. However, PV arrays are frequently subjected to partial shading conditions (PSCs), which are the root cause of the majority of output power decrease and mismatch 13 . When the PV array is operating under these conditions, the P-V curves are characterized by the appearance of many local peaks, which are caused by the activation of the bypass diodes that protect shaded cells 14 . In such partial shading conditions, standard MPPT algorithms may miss the target by converging to a local maximum rather than the global maximum, resulting in a large loss in output power and, as a result, a poor overall system yield. A variety of enhancements to traditional MPPT algorithms have been developed to deal with the impact of shading on the P-V curves. Some are topology-based and require extra power circuits to accomplish global MPPT (GMPPT) 15 . As a result, overall efficiency is lowered. Others are algorithm-based strategies such as fuzzy logic with a polar controller and sequential extremum searching control 16 . The effectiveness of soft computing methods in handling nonlinear problems, such as that encountered in PV array behaviour, and their implementation simplicity make them very attractive for solving the MPPT problem of PV systems, especially in the case of partial shading and module mismatches 17 . Artificial neural networks are one of the soft computing methods that have been used in MPPT techniques. Typically, they were used to estimate the MPP with respect to the randomly changing weather conditions 18 , and to improve the P&O and IC algorithms 19 . These approaches are expensive, time-consuming operations that necessitate the use of complicated technology. Moreover, this solution can increase the cost of the PV system due to the large number of sensors required.
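For context, the conventional P&O rule criticized above perturbs the converter duty cycle and keeps the perturbation direction whenever the measured power increases, reversing it otherwise; a minimal sketch follows. The fixed step size is a placeholder, and the hill-climbing nature of this rule is exactly what makes it settle on a local MPP under partial shading.

```python
def perturb_and_observe(duty, prev_power, power, step=0.01, direction=1):
    """One P&O update: keep the direction if power increased, otherwise reverse it.

    Returns the new duty cycle and perturbation direction. Under partial shading
    this hill-climbing rule can lock onto a local MPP, which is the failure mode
    discussed above.
    """
    if power < prev_power:
        direction = -direction
    duty = min(max(duty + direction * step, 0.0), 1.0)
    return duty, direction
```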
Evolutionary computation techniques, such as Differential Evolution (DE) 20 , have also been proposed to deal with the MPPT problem. However, EC techniques might present a poor convergence rate and slow convergence time 21,22 . Metaheuristic techniques have a better convergence rate and faster convergence compared to EC techniques. In addition, the application of metaheuristic algorithms for MPPT has attracted the interest of many researchers due to their ability to handle nonlinear functions without requiring derivative information. Since metaheuristic MPPT approaches are an efficient search and optimization method for real-valued multi-modal objective functions, they are envisaged to be very effective for dealing with MPPT problems. Various metaheuristic approaches are found in the literature, but the more popular ones are particle swarm optimization (PSO), grey wolf optimization (GWO), ant colony optimization (ACO), artificial bee colony (ABC), and the whale optimization algorithm (WOA) [23][24][25][26][27][28] . Sarvi et al. in 29 proposed a PSO-based MPPT for PV systems under PSC to find the GMPP. Nevertheless, this solution presented oscillations around the steady state. Hence, some researchers have attempted to improve the PSO to reduce oscillations 30,31 . However, their improved method cannot follow the dynamic GMPP under various shading patterns. Furthermore, Jang et al. in 32 proposed an ACO algorithm and showed that this method has a faster convergence speed compared to the basic PSO. ACO and PSO methods present a major disadvantage in terms of convergence linked to the initial placement of the agents in the search space. In addition, both PSO and ACO need the determination of many parameters, making them rigid and complicated. To overcome these complexities found in the PSO and ACO methods, the authors of 33 established a comprehensive bio-inspired approach for addressing computationally costly issues called the seagull optimization algorithm (SOA), which mimics the search and attack behaviors of seagulls in nature. This algorithm is one of the latest effective optimization methods; it is gradient-free and applicable to a wide range of real-life engineering optimization problems. Additionally, compared to other evolutionary algorithms, SOA requires fewer variables for adjustment and fewer operators, which is advantageous when considering a speedy design process 33 . This algorithm is divided into two phases: the exploration and exploitation phases. During the exploration phase, the search agents make larger update steps to the candidate solutions. On the other hand, during exploitation the search agents seek to make use of the search process's history and experience. In 34 , the authors present a Modified Seagull Optimization Algorithm (MSOA) based MPPT approach by incorporating the Levy Flight Mechanism (LFM) and the heat-exchange formula of Thermal Exchange Optimization (TEO) into the original Seagull Optimization Algorithm (SOA). However, in that article, the simulation results for the exploration phase, in which the fitness value of each search agent is calculated, are not clearly presented. Yet, to the best of the authors' knowledge, no research has been done on MPPT based on SOA so far, which motivates us to study this method and to enrich the scientific references with the developed version of the original SOA for MPPT controllers.
To this end, this work proposes an SOA-based metaheuristic MPPT method for tracking the GMPP to maximize the PV power output in PV systems operating under both uniform and partial shading conditions. This method is considered best suited for real engineering problems compared to another metaheuristic algorithm. The MPPT's speed and efficiency will be considerably improved. After the introduction, in "The effect of shading on PV array" section briefly presents the effect of partial shading on the PV array characteristics. ïn Section ″Selection the parameters of Boost converter″ introduces the SOA's fundamentals and mathematical model. in Section ″Seagull optimization algorithm (SOA)″ presents the proposed MPPT controller and how it developed based on SOA. in Section ″Results and discussion″ gives and discusses the simulations results of the proposed SOA-based MPPT method, along with a comparison of its performance with PSO and P&O based MPPT methods. Finally, "Processor In the Loop (PIL) testing" section summarizes the results and suggests some recommendations for further research. The effect of shading on PV array PV cell and module modeling. The electrical model of the PV cell, as illustrated in Fig. 1, consists of a current source, a diode, and a resistor Rsh linked in parallel, as well as a series resistor Rs. Where the current source is proportional to sun irradiation. The Rs is primarily determined by the metal base's contact resistance with the p semiconductor layer, the p and n bodies' resistances, the n layer's contact resistance with the top metal grid, and the grid's resistance. The Rsh resistance is mostly influenced by the leakage current of the p-n junction and is affected by the PV cell production procedure 35 Fig. 1, the following equation 35 gives the output current: where I ph , I, and I 0 are denoted the phοtocurrent, the οutput cell current, and the reverse saturatiοn current respectively. V represents the οutput cell vοltage. R s and R sh present a series resistance and parallel resistance respectively.A is the diοde ideality factor. The value of the termal voltage is given by (2). where K, q, and T are denoted the Bοltzmann cοnstant (1.38 × 10 −23 (J/K)), the charge of electrοn (1.6 × 10 −19 (C)), and the sοlar cell temperature (K) respectively. The photocurrent I ph,c of a solar cell depends on many material characteristics. However, it can be approximated as linear-dependent on irradiance and temperature with sufficient accuracy as follow 36 : where I sc,ref is solar cell short-circuit current at standard test conditions (STC): µ sc is the solar cell short-circuit temperature coefficient, normally provided by the manufacturer (A/K). G is the actual irradiance intensity (W/m 2 ); The well-known diode saturation current estimation equation is given by 36 : where, the nominal saturation current I 0,ref at STC is given by: To achieve the desired voltage and current levels, Ns cells are connected in series and NP cells are connected in parallel respectively, thus forming a PV module. There for the PV module parameters are scaled according to NS and NP as given bellow 37 : The characteristics of the PV panel used in this work are shown in Table 1. Effect of partial shading on PV array. When PV cells (or modules) are partially shaded, they function as a load on other cells/modules and become reverse biased. As a result, instead of generating energy, they will dissipate it, resulting in a rise in cell temperature. 
Effect of partial shading on PV array. When PV cells (or modules) are partially shaded, they act as a load on the other cells/modules and become reverse biased. As a result, instead of generating energy, they dissipate it, which leads to a rise in cell temperature. The cell/module can be damaged, and the entire PV module/array affected, if the temperature becomes too high; this is known as the hot-spot problem. One of the most common ways to avoid the hot-spot problem is to connect a bypass diode across a set of series-connected cells 38,39 , as illustrated in Fig. 3. To understand the direction of current flow in a PV array under PSC, consider the PV array in Fig. 3. It consists of four PV modules, of which two are unshaded and two are shaded, as illustrated in Fig. 3b. The operation of the PV array under PSC can be divided into two cases. Under uniform solar irradiance, the bypass diodes are reverse biased and therefore have no effect (Fig. 3a). Under PSCs, when the load current is higher than the current of a shaded PV module, its bypass diode becomes active; when the load current is lower than that of the shaded module, the bypass diode stays inactive, as can be seen in Fig. 3b.

Selection the parameters of boost converter

A boost converter (step-up converter) is a power converter whose output DC voltage is greater than its input DC voltage 41 . It belongs to the class of switching-mode power supplies (SMPS). A simple boost converter consists of an inductor L, a controlled switch S, and a diode D; capacitive filters are normally added at the output and the input of the converter to reduce voltage ripple (see Fig. 4). The critical inductance of the boost converter is given by Eq. (13), where: • V in is the input voltage; • V out is the desired output voltage; • f sw is the designed switching frequency; • ΔI L is the inductor ripple current.

Output capacitor selection of boost converter. The current delivered to the output circuit is discontinuous; therefore, a large filter capacitor is required to limit the output voltage ripple. When the diode is off, the filter capacitor must supply the DC output current to the load. In Eq. (14): • C out_min is the minimum output capacitance; • ΔV out is the output voltage ripple; • f sw is the switching frequency in kHz; • I out_Max is the maximum output current; • D is the duty cycle. The selected output capacitance must be larger than the calculated value to make sure that the converter's output voltage ripple remains within the specified range, and its equivalent series resistance (ESR) should be low. The ESR can be minimized by connecting several capacitors in parallel; therefore, it can be assumed that the ESR is as in Eq. (15).

Input capacitor selection of boost converter. As stated earlier, the PV output voltage has ripples due to the changes in temperature and irradiation. Therefore, it is necessary to place an input capacitor in parallel with the voltage supply to minimize the ripples produced by the solar panel. These ripples have an adverse effect on the output, as the input voltage is proportional to the output current. As with the output capacitor, the ESR of the input capacitor should be accounted for by selecting a capacitor value greater than the calculated one. Equation (16) computes the value of the input capacitor while respecting the ripple limit. The electrical parameters of the boost converter used are given in Table 2.
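As a companion to the component-selection rules above, the sketch below evaluates commonly used ripple-based sizing expressions for the boost inductor and the input/output capacitors. These are standard textbook approximations that may differ in detail from the paper's Eqs. (13)-(16), and the numerical ratings are assumptions rather than the values of Table 2.

def boost_sizing(v_in, v_out, i_out_max, f_sw,
                 ripple_il, ripple_vout, ripple_vin):
    """Ripple-based sizing of a boost converter (standard approximations)."""
    d = 1.0 - v_in / v_out                                   # ideal duty cycle
    # Critical inductance from the allowed inductor current ripple
    l_min = v_in * (v_out - v_in) / (ripple_il * f_sw * v_out)
    # Minimum output capacitance from the allowed output voltage ripple
    c_out_min = i_out_max * d / (f_sw * ripple_vout)
    # Maximum tolerable ESR of the output capacitor
    esr_max = ripple_vout / (i_out_max / (1.0 - d) + ripple_il / 2.0)
    # Input capacitance from the allowed input voltage ripple (triangular ripple current)
    c_in_min = ripple_il / (8.0 * f_sw * ripple_vin)
    return d, l_min, c_out_min, esr_max, c_in_min

# Example with assumed ratings (not the values of Table 2)
d, L, c_out, esr, c_in = boost_sizing(v_in=17.0, v_out=48.0, i_out_max=1.7,
                                      f_sw=20e3, ripple_il=0.4,
                                      ripple_vout=0.48, ripple_vin=0.17)
print(f"D={d:.2f}, L>={L*1e3:.2f} mH, Cout>={c_out*1e6:.1f} uF, "
      f"ESR<={esr*1e3:.0f} mOhm, Cin>={c_in*1e6:.1f} uF")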
Seagull optimization algorithm (SOA)

The basics of the SOA. Seagulls are a type of coastal bird that has been around for roughly thirty million years. Their wings are large, and their legs have evolved to enable them to move on water. Seagulls come in a range of sizes and shapes, and they can be found in practically every corner of the world. Seagulls are capable of drinking both fresh and salt water, which most animals cannot do: they have a special pair of salt glands above their eyes that they use to flush the excess salt out of their system through their beaks. Seagulls live in vast groups and use a variety of calls to communicate with one another. Using their intelligence, they can find and attack prey; one of their strangest behaviours is stealing food from other birds, animals, and even people. Seagulls eat mostly fish, although they also eat earthworms and insects. The most prominent characteristics of seagulls are their migration and attacking behaviours. The movement of a group of seagulls from one area to another, and their attack on prey, are described by mathematical models of migration and attack. The migration behaviour is described as follows; during migration, each seagull must satisfy the following requirements: • They move in groups when migrating, and to avoid collisions their starting locations differ from one another. • They use their swarm experience to their benefit; that is, they try to move in the direction of highest survival, i.e. towards the lowest cost value. Seagulls typically attack migrating prey over the sea; the attack follows the natural spiral-shaped movement of the birds in the air.

During exploration, the algorithm must satisfy three conditions (avoiding collisions, moving in the direction of the best neighbour, and staying close to the best search agent) in order to replicate how a group of seagulls moves from one place to another. The migration behaviour can be modelled by the following equation 33 , where the distance between the current search agent and the best-fit search agent is denoted by D s , X s (t) denotes the current position of the search agent, X bs (t) denotes the position of the best-fit search agent, t denotes the current iteration, A is a parameter that decreases linearly from fc to 0, and B is a random variable that ensures a proper balance between exploration and exploitation. During exploitation, seagulls make use of the history and experience of the search process. In this phase, seagulls use their wings and weight to maintain their altitude, and during the iterations the search agents update their positions with respect to the best search agent. As a result, the following equation is used to determine the search agent's updated position 33 , where X s (t + 1) represents the updated position of the search agent and X ′ , Y ′ and Z ′ describe the spiral movement produced in the air, defined as follows 33 . The radius of each spiral turn is r, k is a random value in [0, 2π], e is the base of the natural logarithm, and u and v are constants that determine the spiral shape. The detailed pseudo-code of the SOA algorithm is given below 33 , where d min and d max represent the limits of the search band mechanism. The distance D s can be calculated using the following equation, and the new seagull solution can then be generated using the following equation.
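A minimal sketch of one SOA iteration, as described above (migration followed by the spiral attack, with the positions interpreted as converter duty cycles for MPPT), is given below. The control-parameter schedule follows the usual SOA formulation 33; the constants, bounds, and function names are assumptions, not the paper's implementation.

import math
import random

def soa_step(positions, best, t, max_iter, fc=2.0, u=1.0, v=1.0,
             d_min=0.1, d_max=0.9):
    """One SOA iteration: migration (exploration) + spiral attack (exploitation)."""
    A = fc - t * (fc / max_iter)            # decreases linearly from fc to 0
    new_positions = []
    for x in positions:
        B = 2.0 * A * A * random.random()   # balances exploration and exploitation
        c_s = A * x                         # collision avoidance
        m_s = B * (best - x)                # move toward the best-fit search agent
        d_s = abs(c_s + m_s)                # distance to the best-fit search agent
        k = random.uniform(0.0, 2.0 * math.pi)
        r = u * math.exp(k * v)             # spiral radius
        spiral = (r * math.cos(k)) * (r * math.sin(k)) * (r * k)   # X' * Y' * Z'
        x_new = d_s * spiral + best         # attack step (position update)
        new_positions.append(min(max(x_new, d_min), d_max))        # clip to duty-cycle band
    return new_positions

# Usage: positions are duty cycles; the fitness of each agent is the measured PV power.
duties = [random.uniform(0.1, 0.9) for _ in range(4)]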
Since the GMPP changes continuously as the weather conditions change, the SOA-MPPT algorithm must be restarted to search for the new GMPP. Therefore, to detect whether a change in weather conditions has taken place and to restart the search, the following inequality is adopted in the algorithm. Whenever this inequality is met, the search for a new MPP is repeated, ensuring that the algorithm can always identify the GMPP regardless of the operating condition. Figure 6 depicts the working principle of the SOA-based MPPT algorithm.

Results and discussion

To examine the performance of the SOA-based MPPT method, the 80 W PV system depicted in Fig. 7 is considered, designed in the MATLAB/SIMULINK environment. This system comprises a PV array formed by four serially connected 20 W PV modules, a boost DC-DC converter, an MPPT controller, and a DC load. The DC-DC converter is controlled, through a PWM generator, by the duty cycle "α" generated by the SOA-based MPPT controller. Table 3 shows the irradiation values considered in the simulation tests. The P-V characteristic obtained in each test is presented in Fig. 8. As can be seen in this figure, the P-V curve shows multiple peaks under PSCs; each peak is characterized by its voltage and power, and the number of peaks depends on the number of shaded panels. The key parameters of the implemented MPPT methods, namely SOA, PSO, and P&O, are listed in Table 4. To verify the ability of the SOA-based MPPT method to track the GMPP, the PV system was simulated with various PSCs (PSC1, PSC2, and PSC3) in addition to the standard test condition (STC). The system was first tested under STC, and then each PSC was applied every 7.5 s over the interval from 0 to 30 s, as depicted in Fig. 12; the corresponding results are summarized in Table 5. The power obtained with the SOA-based optimization method is significantly higher than that of the PSO and P&O algorithms throughout the whole profile. Table 6 presents a comparison of the SOA-based MPPT against different MPPT methods existing in the literature according to the following criteria: converter used, sensors used, convergence speed, tracking efficiency, steady-state oscillation level, implementation complexity, and GMPP tracking ability. Figure 13 presents the tracking efficiency of the PV system output power under various PSCs using the P&O, PSO, and SOA methods. It can be concluded that the proposed SOA-MPPT guarantees tracking of the GMPP with higher efficiency than P&O and PSO. In all simulation tests, it can be observed that the proposed SOA-MPPT and PSO algorithms successfully converge to the GMPP corresponding to the different PSCs, with a noticeable superiority of the proposed SOA-MPPT in terms of tracking speed. Although the tracking speed of the P&O algorithm is higher than that of SOA and PSO, the P&O algorithm is not able to track the GMPP in most PSC cases and is trapped in a local MPP of the P-V curve (in the cases of PSC1 and PSC2). In addition, we note that the P&O and PSO algorithms are not able to track the true MPP when the PV system operates under weak uniform conditions (see Fig. 14), whereas the proposed SOA-MPPT successfully converges to the MPP in these conditions. Finally, it can be concluded that the SOA-based MPPT converges quickly and with zero oscillation around the GMPP compared with the PSO- and P&O-based MPPT methods.
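For reference, a minimal sketch of the two bookkeeping quantities used in this section, the re-initialization test on the PV power described at the beginning of the section and the tracking efficiency reported in Fig. 13, is given below. The 5% threshold and the function names are assumptions; the paper does not state them here.

def weather_changed(p_now, p_prev, threshold=0.05):
    """Restart test: re-run the SOA search when the PV power deviates from its
    last steady-state value by more than `threshold` (assumed 5%)."""
    return abs(p_now - p_prev) / max(p_prev, 1e-9) > threshold

def tracking_efficiency(p_tracked, p_gmpp):
    """Ratio of the average tracked power to the true GMPP power, in percent."""
    return 100.0 * sum(p_tracked) / (len(p_tracked) * p_gmpp)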
Processor In the Loop (PIL) testing

The goal of this part is to deploy the MPPT controller model onto a real embedded processor and run a closed-loop simulation with the simulated plant model; this is known as processor-in-the-loop (PIL) testing. To this end, the SOA-MPPT controller is replaced by a PIL block that runs the controller code on the hardware. The PIL test helps determine whether the processor is capable of executing the developed MPPT controller, and thereby validates the proposed MPPT control strategy on an actual embedded board. Figure 15 shows the embedded board used to perform the PIL experimentation, the Arduino MKRZERO board. The microcontroller integrated in this board is the ATMEL SAMD21 from Microchip Technology, which contains a 32-bit Arm® Cortex®-M0+ processor running at up to 48 MHz with 256 kB of flash memory and 32 kB of SRAM. As presented in Fig. 15, the PIL block is generated and connected to the plant model so as to acquire the PV output voltage and current; the PIL block then computes the required duty cycle using the proposed algorithm and sends it back to the plant model. During the PIL process, the generated code is tested in real time while the plant model runs on a computer, which allows possible errors to be detected and corrected. Figure 16 depicts the results of the PIL test. It can be observed that the results obtained using the PIL test are similar to the simulation results obtained in MATLAB/Simulink. Therefore, the MPPT control algorithm proposed in this work is verified on a real microcontroller (embedded board).

Conclusion

In this research, a new metaheuristic-based MPPT has been proposed. The latter is designed using the seagull optimization algorithm, and its performance is simulated and compared with that of PSO and P&O. The effectiveness of the suggested SOA-based MPPT method was verified for an 80 W PV system in the MATLAB/SIMULINK environment; the average tracking efficiency of the proposed method is higher than 98.32%. The simulations were performed under various partial shading scenarios and weak uniform conditions, and they demonstrate the superiority of the proposed method in terms of tracking efficiency and response time compared with the other methods (P&O and PSO). In this work, the design and implementation of a new MPPT method were thus carried out. Moreover, a Processor-in-the-Loop (PIL) test was performed, using the Arduino MKRZERO embedded board, to confirm the functionality of the proposed SOA-MPPT approach. A PV array consisting of four modules connected in series was considered to test the proposed MPPT algorithm; in future research, we aim to consider more complex partial shading conditions to further test and confirm the effectiveness of the MPPT approach based on the proposed SOA.

Data availability

Correspondence and requests for materials should be addressed to A.C.
Large deviations asymptotics and the spectral theory of multiplicatively regular Markov processes

We continue the investigation of the spectral theory and exponential asymptotics of Markov processes, following Kontoyiannis and Meyn (2003). We introduce a new family of nonlinear Lyapunov drift criteria, characterizing distinct subclasses of geometrically ergodic Markov processes in terms of inequalities for the nonlinear generator. We concentrate on the class of "multiplicatively regular" Markov processes, characterized via conditions similar to (but weaker than) those of Donsker-Varadhan. For any such process {Phi(t)} with transition kernel P on a general state space, the following are obtained. 1. SPECTRAL THEORY: For a large class of functionals F, the kernel Phat(x,dy) = e^{F(x)}P(x,dy) has a discrete spectrum in an appropriately defined Banach space. There exists a "maximal" solution to the "multiplicative Poisson equation," defined as the eigenvalue problem for Phat. Regularity properties are established for \Lambda(F) = \log(\lambda), where \lambda is the maximal eigenvalue, and for its convex dual. 2. MULTIPLICATIVE MEAN ERGODIC THEOREM: The normalized mean E_x[\exp(S_t)] of the exponential of the partial sums {S_t} of the process with respect to any one of the above functionals F converges to the maximal eigenfunction. 3. MULTIPLICATIVE REGULARITY: The drift criterion under which our results are derived is equivalent to the existence of regeneration times with finite exponential moments for {S_t}. 4. LARGE DEVIATIONS: The sequence of empirical measures of {Phi(t)} satisfies an LDP in a topology finer than the \tau-topology. The rate function is \Lambda^* and it coincides with the Donsker-Varadhan rate function. 5. EXACT LARGE DEVIATIONS: The partial sums {S_t} satisfy an exact LD expansion, analogous to that obtained for independent random variables.

Introduction and Main Results

Let Φ = {Φ(t) : t ∈ T} be a Markov process taking values in a Polish state space X, equipped with its associated Borel σ-field B. The time index T may be discrete, T = Z + , or continuous, T = R + , but we specialize to the discrete-parameter case after Section 1.1. The distribution of Φ is determined by its initial state Φ(0) = x ∈ X and the transition semigroup {P t : t ∈ T}, where in discrete time all kernels P t are powers of the 1-step transition kernel P . Throughout the paper we assume that Φ is ψ-irreducible and aperiodic. This means that there is a σ-finite measure ψ on (X, B) such that, for any A ∈ B satisfying ψ(A) > 0 and any initial condition x, P t (x, A) > 0 for all t sufficiently large. Moreover, we assume that ψ is maximal in the sense that any other such ψ ′ is absolutely continuous with respect to ψ (written ψ ′ ≺ ψ). For a ψ-irreducible Markov process it is known that ergodicity is equivalent to the existence of a solution to the Lyapunov drift criterion (V3) below [34,17]. Let V : X → (0, ∞] be an extended-real-valued function, with V (x 0 ) < ∞ for at least one x 0 ∈ X, and write A for the (extended) generator of the semigroup {P t : t ∈ T}. This is equal to A = (P − I) in discrete time (where I = I(x, dy) denotes the identity kernel δ x (dy)), and in continuous time we think of A as a generalization of the classical differential generator A = (d/dt) P t | t=0 . Recall that a function s : X → R + and a probability measure ν on (X, B) are called small if for some measure m on Z with finite mean we have Σ_{t≥0} P t (x, A) m(t) ≥ s(x)ν(A), x ∈ X, A ∈ B.
A set C is called small if s = ǫI C is a small function for some ǫ > 0. Also recall that an arbitrary kernel P = P (x, dy) acts linearly on functions f : X → C and measures ν on (X, B), via P f ( · ) = X P (·, dy)f (y) and ν P ( · ) = X ν(dx) P (x, · ), respectively. We say that the Lyapunov drift condition (V3) holds with respect to the Lyapunov function V [34], if: For a function W : X → [1, ∞), a small set C ⊂ X, and constants δ > 0, b < ∞, Condition (V3) implies that the set S V is absorbing (and hence full), so that V (x) < ∞ a.e. [ψ]; see [34,Proposition 4.2.3]. As in [34,32], a central role in our development will be played by weighted L ∞ spaces: For any function W : X → (0, ∞], define the Banach space of complex-valued functions, with associated norm g W := sup x |g(x)|/W (x). We write B + for the set of functions s : X → [0, ∞] satisfying ψ(s) := s(x) ψ(dx) > 0, and, with a slight abuse of notation, we write A ∈ B + if A ∈ B and ψ(A) > 0 (i.e., the indicator function I A is in B + ). Also, we let M W 1 denote the Banach space of signed and possibly complex-valued measures µ on (X, B) satisfying µ W := sup F ∈L W ∞ |µ|(F ) < ∞. The following consequences of (V3) may be found in [34,Theorem 14.0.1]. Theorem 1.1 (Ergodicity) Suppose that Φ is a ψ-irreducible and aperiodic discrete-time chain, and that condition (V3) is satisfied. Then the following properties hold: 1. (W -ergodicity) The process is positive recurrent with a unique invariant probability measure π ∈ M W 1 and for all x ∈ S V , where P x denotes the conditional distribution of Φ given Φ(0) = x. (W -regularity) For any where E x is the expectation with respect to P x , and the hitting times τ A are defined as, 3. (Fundamental Kernel) There exists a linear operator Z : L W ∞ → L V +1 ∞ , the fundamental kernel, such that AZF = −F + π(F ), F ∈ L W ∞ . That is, the function F := ZF solves the Poisson equation, A F = −F + π(F ) . Multiplicative Ergodic Theory The ergodic theory outlined in Theorem 1.1 is based upon consideration of the semigroup of linear operators {P t } acting on the Banach space L W ∞ . In particular, the ergodic behavior of the corresponding Markov process can be determined via the generator A of this semigroup. In this paper we show that the foundations of the multiplicative ergodic theory and of the large deviations behavior of Φ can be developed in analogy to the linear theory, by shifting attention from the semigroup of linear operators {P t } to the family of nonlinear, convex operators {W t } defined, for appropriate G, by x ∈ X , t ∈ T . Formally, we would like to define the 'generator' H associated with {W t } by letting H = (W − I) in discrete time and H = d dt W t | t=0 in continuous time. Observing that W t G = log(P t e G ), in discrete time we have HG = (W − I)G = log(P e G ) − G = log(e −G P e G ), and in continuous time we can similarly calculate, whenever all the above limits exist. Rather than assume differentiability, we use these expressions as motivation for the following rigorous definition of the nonlinear generator, when e G is in the domain of the extended generator. In continuous time, this is Fleming's nonlinear generator; see [22] for a starting point, and [20,21] for recent surveys. In this paper our main focus will be on the following 'multiplicative' analog of (V3), where the role of the generator is now played by the nonlinear generator H. 
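Before stating the multiplicative analog, note that the displayed inequality defining condition (V3), stated earlier in this section, appears to have been lost in extraction. In the standard formulation of [34], and consistent with the surrounding definitions, it presumably reads (a reconstruction, given in LaTeX):

% (V3): presumed form of the missing display, following [34].
\Delta V(x) \;:=\; PV(x) - V(x) \;\le\; -\,\delta\, W(x) \;+\; b\,\mathbb{I}_C(x),
\qquad x \in S_V := \{\, x \in \mathsf{X} : V(x) < \infty \,\}.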
We say that the Lyapunov drift criterion (DV3) holds with respect to the Lyapunov function V : X → (0, ∞], if: For a function W : X → [1, ∞), a small set C ⊂ X, and constants δ > 0, b < ∞, [This condition was introduced in [32], under the name (mV3).] Under either condition (V3) or (DV3), we let {C W (r)} denote the sublevel sets of W : C W (r) = {y : W (y) ≤ r}, r ∈ R. The main assumption in many of our results below will be that Φ satisfies (DV3), and also that the transition kernels satisfy a mild continuity condition: We require that they possess a density with respect to some reference measure, uniformly over all initial conditions x in the sublevel set C W (r) of W . These assumptions are formalized in condition (DV3+) below. Condition (DV3+) captures the essential ingredients of the large deviations conditions imposed by Donsker and Varadhan in their pioneering work [14,15,16], and is in fact somewhat weaker than those conditions. In Section 2 an extensive discussion of this assumption is given, its relation to several well-known conditions in the literature is described in detail. In particular, part (ii) of condition (DV3+) [to which we will often refer as the "density assumption" in (DV3+)] is generally the weaker of the two assumptions. In most of our results we assume that the function W in (DV3) is unbounded, W ∞ := sup x |W (x)| = ∞. When this is the case, we let W 0 : X → [1, ∞) be a fixed function in L W ∞ , whose growth at infinity is strictly slower than W in the sense that Below we collect, from various parts of the paper, the "multiplicative" ergodic results we derive from (DV3+), in analogy to the "linear" ergodic-theoretic results stated in Theorem 1.1. Under (DV3), the stochastic process m = {m(t)} defined below is a super-martingale with respect to F t = σ{Φ(s) : 0 ≤ s ≤ t}, t ≥ 0, From the super-martingale property and Jensen's inequality we obtain the bound, which gives the desired bound in (1.), where η := δη 0 . The multiplicative ergodic limit (7) follows from Theorem 3.1 (iii). The existence of an inverse G to H is given in Proposition 3.6, which establishes the boundF ∈ L V ∞ stated in (1.), as well as result (3.). Theorem 2.5 shows that (DV3) actually characterizes W -multiplicative regularity, and provides the bound in (2.). As in [32], central to our development is the observation that the multiplicative Poisson equation (8) can be written as an eigenvalue problem. In discrete-time with Λ = Λ(F ), (8) becomes (e F P )eF = e Λ eF , or, writing f = e F ,f = eF and λ = e Λ , we obtain the eigenvalue equation, The assumptions of Theorem 1.2 are most easily illustrated in continuous time. Consider the following diffusion model on R, sometimes referred to as the Smoluchowski equation. For a given potential u : R → R + , this is defined by the stochastic differential equation where u x := d dx u, and W = {W (t) : t ≥ 0} is a standard Brownian motion. On C 2 , the extended generator A of X = {X(t) : t ≥ 0} coincides with the differential generator given by, When σ > 0 this is an elliptic diffusion, so that the semigroup {P t } has a family of smooth, positive densities P t (x, dy) = p(x, y; t)dy, x, y ∈ R [33]. Hence the Markov process X is ψ-irreducible, with ψ equal to Lebesgue measure on R. A special case is the one-dimensional Ornstein-Uhlenbeck process, where the corresponding potential function is u(x) = 1 2 δx 2 , x ∈ R. 
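Two displays from this subsection also appear to have been dropped: the (DV3) inequality itself and the eigenvalue form of the multiplicative Poisson equation. Based on the definition of the nonlinear generator H(G) = log(e^{-G} P e^{G}) given above, their presumed forms are (a reconstruction):

% (DV3): presumed form of the missing display.
\mathcal{H}(V)(x) \;=\; \log\!\big( e^{-V} P\, e^{V} \big)(x)
\;\le\; -\,\delta\, W(x) \;+\; b\,\mathbb{I}_C(x), \qquad x \in S_V ;
% Eigenvalue form of the multiplicative Poisson equation, with f = e^{F}, \hat f = e^{\hat F}, \lambda = e^{\Lambda}:
P_f\, \hat f \;=\; \lambda\, \hat f, \qquad \text{where } P_f(x,dy) := f(x)\, P(x,dy).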
Proposition 1.3 The Smoluchowski equation satisfies (DV3+) with V = 1 + uσ −2 and W = 1 + u 2 x , provided the potential function u : R → R + is C 2 and satisfies: Proof. Let V = 1 + uσ −2 . We then have, It is thus clear that the desired drift conditions hold. The proof is complete since P t (x, dy) possesses a continuous density p(x, y; t) for each t > 0: We may take T 0 = 1, and for each r we take β r equal to a constant times Lebesgue measure on C W (r). Proposition 1.3 does not admit an exact generalization to discrete-time models. However, the discrete-time one-dimensional Ornstein-Uhlenbeck process, does satisfy the conclusions of the proposition, again with V = 1 + ǫ 0 x 2 for some ǫ 0 > 0, when δ > 0 and W is an i.i.d. Gaussian process with positive variance. Notation. Often in the transition from ergodic results to their multiplicative counterparts we have to take exponentials of the corresponding quantities. In order to make this correspondence transparent we have tried throughout the paper to follow, as consistently as possible, the convention that the exponential version of a quantity is written as the corresponding lower case letter. For example, above we already had f = e F ,f = eF and λ = e Λ . Large Deviations From now on we restrict attention to the discrete-time case. Part 1 of Theorem 1.2 extends the multiplicative mean ergodic theorem of [32] to the larger class of (possibly unbounded) functionals F ∈ L W 0 ∞ . In this section we assume that (DV3+) holds with an unbounded function W , and we let a function W 0 ∈ L W ∞ be chosen as in (6). For n ≥ 1, let L n denote the empirical measures induced by Φ on (X, B), and write ·, · for the usual inner product; for µ a measure and G a function, µ, G = µ(G) := G(y) µ(dy), whenever the integral exists. Then, from Theorem 3.1 it follows that for any real-valued F ∈ L W 0 ∞ and any a ∈ R we have the following version of the multiplicative mean ergodic theorem, wheref a := e G(aF ) is the eigenfunction constructed in part 3 of Theorem 1.2, corresponding to the function aF . In Section 5, strong large deviations results for the sequence of empirical measures {L n } are derived from the multiplicative mean ergodic theorem in (15), using standard techniques [9,7,12]. First we show that, for any initial condition x ∈ X, the sequence {L n } satisfies a large deviations principle (LDP) in the space M 1 of all probability measures on (X, B) equipped with the τ W 0 -topology, that is, the topology generated by the system of neighborhoods Moreover, the rate function I(ν) that governs this LDP is the same as the Donsker-Varadhan rate function, and can be characterized in terms of relative entropy, where the infimum is over all transition kernelsP for which ν is an invariant measure, ν ⊙P denotes the bivariate measure [ν ⊙P ](dx, dy) := ν(dx)P (x, dy) on (X × X, B × B), and H( · · ) denotes the relative entropy, [Throughout the paper we follow the usual convention that the infimum of the empty set is +∞.] As we discuss in Section 2.6 and Section 5, the density assumption in (DV3+) (ii) is weaker than the continuity assumptions of Donsker and Varadhan, but it cannot be removed entirely. Further, the precise convergence in (15) leads to exact large deviations expansions analogous to those obtained by Bahadur and Ranga Rao [1] for independent random variables, and to the local expansions established in [32] for geometrically ergodic chains. 
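The displays referred to in this passage, the definition (14) of the empirical measures and the relative-entropy characterization (16)-(17) of the rate function, also appear to be missing. Standard forms consistent with the surrounding text are (a reconstruction):

% Presumed forms of the missing displays (14), (16) and (17).
L_n \;:=\; \frac{1}{n}\sum_{k=0}^{n-1} \delta_{\Phi(k)}, \qquad n \ge 1;
\qquad
I(\nu) \;:=\; \inf_{\check P\,:\; \nu \check P = \nu}
      H\big( \nu \odot \check P \,\big\|\, \nu \odot P \big);
\qquad
H(\mu \,\|\, \nu) \;:=\; \int \log\!\Big(\frac{d\mu}{d\nu}\Big)\, d\mu,
\quad \text{taken to be } +\infty \text{ when } \mu \not\ll \nu .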
For real-valued, non-lattice functionals F ∈ L W 0 ∞ , in Theorem 5.3 we obtain the following: For c > π(F ) and x ∈ X, where a ∈ R is chosen such that d da Λ(aF ) = c,f a (x) is the eigenfunction appearing in the multiplicative mean ergodic theorem (15), σ 2 a = d 2 da 2 Λ(aF ), and the exponent J(c) is given in terms of I(ν) as A corresponding expansion is given for lattice functionals. These large deviations results extend the classical Donsker-Varadhan LDP [14,15] in several directions: First, our conditions are weaker. Second, when (DV3+) holds with an unbounded function W , the τ W 0 -topology is finer and hence stronger than either the topology of weak convergence, or the τ -topology, with respect to which the LDP for the empirical measures {L n } is usually established [24,4,13]. Third, apart from the LDP we also obtain precise large deviations expansions as in (18) for the partial sums with respect to (possibly unbounded) functionals F ∈ L W 0 ∞ . Following the Donsker-Varadhan papers, a large amount of work has been done in establishing large deviations properties of Markov chains under a variety of different assumptions; see [12,13] for detailed treatments. Under conditions similar to those in this paper, Ney and Nummelin have proved "pinned" large deviations principles in [37,38]. In a different vein, under much weaker assumptions (essentially under irreducibility alone) de Acosta [10] and Jain [28] have proved general large deviations lower bounds, but these are, in general, not tight. One of the first places where the Feller continuity assumption of Donsker and Varadhan was relaxed is Bolthausen's work [4]. There, a very stringent condition on the chain is imposed, often referred to in the literature as Stroock's uniform condition (U). In Section 2.5 we argue that (U) is much more restrictive than the conditions we impose in this paper. In particular, condition (U) implies Doeblin recurrence as well as the density assumption in (DV3+) (ii). More recently, Eichelsbacher and Schmock [19] proved an LDP for the empirical measures of Markov chains, again under the uniform condition (U). This LDP is proved in a strict subset of M 1 , and with respect to a topology finer than the usual τ -topology and similar in spirit to the τ W 0 topology introduced here. In addition to (U), the results of [19] require strong integrability conditions that are a priori hard to verify: In the above notation, in [19] it is assumed that for at least one unbounded function W 0 : X → R, we have E x [exp{a|W 0 (Φ(n))|}] < ∞, uniformly over n ≥ 1, for all real a > 0. This assumption is closely related to our condition (DV3), and, as we show in Section 3, (DV3) in particular provides a means for identifying a natural class of functions W 0 satisfying this bound. Structural Assumptions There is a wide range of interrelated tools that have been used to establish large deviations properties for Markov processes and to develop parts of the corresponding multiplicative ergodic theory. Most of these tools rely on a functional-analytic setting within which spectral properties of the process are examined. A brief survey of these approaches is given in [32], where the main results relied on the geometric ergodicity of the process. In this section we show how the assumptions used in prior work may be expressed in terms of the drift criteria introduced here and describe the operator-theoretic setting upon which all our subsequent results will be based. 
Drift Conditions Recall that the (extended) generator A of Φ is defined as follows: For a function g : X → C, we write Ag = h if for each initial condition Φ(0) = x ∈ X the process ℓ(t) := t−1 s=0 h(Φ(s)) − g(Φ(t)), t ≥ 1, is a local martingale with respect to the natural filtration {F t = σ(Φ(s), 0 ≤ s ≤ t) : t ≥ 1}. In discrete time, the extended generator is simply A = P − I, and its domain contains all measurable functions on X. The following drift conditions are considered in [34] in discrete time, where in each case C is small, V : X → (0, ∞] is finite a.e. [ψ], and b < ∞, δ > 0 are constants. We further assume that W is bounded below by unity in (V3), and that V is bounded from below by unity in (V4). It is easy to see that (V2)-(V4) are stated in order of increasing strength: (V4) ⇒ (V3) ⇒ (V2). Analogous multiplicative versions of these drift criteria are defined as follows, where H is the nonlinear generator defined in (4). The following implications follow easily from the definitions: Proof. We provide a proof only for k = 3 since all are similar. Under (DV3), P e V ≤ e V −W +bI C . Jensen's inequality gives e P V ≤ P e V , and taking logarithms gives (V3). We find that Proposition 2.1 gives a poor bound in general. Theorem 2.2 shows that (DV2) actually implies (V4). Its proof is given in the Appendix, after the proof of Theorem 2.5. Spectral Theory Without Reversibility The spectral theory described in this paper and in [32] is based on various operator semigroups { P n : n ∈ Z + }, where each P n is the nth composition of a possibly non-positive kernel P . Examples are the transition kernel P ; the multiplication kernel I G (x, dy) = G(x)δ x (dy). for a given function G; the scaled kernel defined by for any function F : X → C with f = e F ; and also the twisted kernel, defined for a given function h : X → (0, ∞) by This is a probabilistic kernel (i.e., a positive kernel withP h (x, X) = 1 for all x) provided P h (x) < ∞, x ∈ X. It is a generalization of the twisted kernel considered in [32], where the function h was taken as h =f for a specially constructedf . It may also be regarded as a version of Doob's h-transform [40]. The most common approach to spectral decompositions for probabilistic semigroups {P n } is to impose a reversibility condition [23,5,41]. The motivation for this assumption comes from the L 2 setting in which these problems are typically posed, and the well-known fact that the semigroup {P n } is then self-adjoint. We avoid a Hilbert space setting here and instead consider the weighted L ∞ function spaces defined in (2); cf. [30,31,25,35,32]. The weighting function is determined by the particular drift condition satisfied by the process. In particular, under (DV3) it follows from the convexity of H (see Proposition 4.4) that for any 0 < η ≤ 1 we have the bound, which may be equivalently expressed as ∞ is a bounded linear operator for any function f satisfying F + W ≤ ηδ (where F + := max(F, 0)), and any 0 ≤ η ≤ 1. Under any one of the above Lyapunov drift criteria, we will usually consider the function v defined in terms of the corresponding Lyapunov function V on X via v = e V . For any such function v : X → [1, ∞) and any linear operator P : L v ∞ → L v ∞ , we denote the induced operator norm by, The spectrum S( P ) ⊂ C of P is the set of z ∈ C such that the inverse [Iz − P ] −1 does not exist as a bounded linear operator on L v ∞ . 
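The displays listing the drift inequalities (V2)-(V4) and their multiplicative analogues (DV2)-(DV4), referred to at the beginning of this section, did not survive extraction. In the notation of [34] and of the nonlinear generator H they are customarily written as follows (a reconstruction; the original constants and normalizations may differ):

% Presumed forms of the missing drift conditions (discrete time, \Delta V := PV - V).
(V2):\;\; \Delta V \le -\delta + b\,\mathbb{I}_C, \qquad
(V3):\;\; \Delta V \le -\delta\, W + b\,\mathbb{I}_C, \qquad
(V4):\;\; \Delta V \le -\delta\, V + b\,\mathbb{I}_C ;
(DV2):\;\; \mathcal{H}(V) \le -\delta + b\,\mathbb{I}_C, \qquad
(DV3):\;\; \mathcal{H}(V) \le -\delta\, W + b\,\mathbb{I}_C, \qquad
(DV4):\;\; \mathcal{H}(V) \le -\delta\, V + b\,\mathbb{I}_C .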
We let ξ = ξ({ P n }) denote the spectral radius of the semigroup { P n }, In general, the quantities ||| P||| v and ξ depend upon the particular weighting function v. If P is a positive operator, then ξ is greater than or equal to the generalized principal eigenvalue, or g.p.e. (see e.g. [39]), and they are actually equal under suitable regularity assumptions (see [2,32], and Proposition 2.8 below). As in [32], we say that P admits a spectral gap if there exists ǫ 0 > 0 such that the set S( P ) ∩ {z : |z| ≥ ξ − ǫ 0 } is finite and contains only poles of finite multiplicity; recall that z 0 ∈ S( P ) is a pole of (finite) multiplicity n if: (i) z 0 is isolated in S( P ), i.e., for some ǫ 1 > 0 we have {z ∈ S( P ) : |z−z 0 | ≤ ǫ 1 } = {z 0 }; (ii) The associated projection operator can be expressed as a finite linear combination of some where [s ⊗ ν](x, dy) := s(x)ν(dy). See [32,Sec. 4] for more details. Moreover, we say that P is v-uniform if it admits a spectral gap and also there exists a unique pole λ • ∈ S( P ) of multiplicity one, satisfying |λ • | = ξ({ P t }). Recall that a Markov process Φ is called geometrically ergodic [32] or equivalently Vuniformly ergodic [34] if it is positive recurrent, and the semigroup converges in the induced operator norm, where 1 denotes the constant function 1(x) ≡ 1. It is known that this is characterized by condition (V4). Under this assumption, in [32] we proved that Φ satisfies a "local" large deviations principle. In this paper under the stronger condition (DV3+) we show that these local results can be extended to a full large deviations principle. The following result, taken from [32,Proposition 4.6], says that geometric ergodicity is equivalent to the existence of a spectral gap: (a) If Φ is geometrically ergodic with Lyapunov function V , then its transition kernel P admits a spectral gap in L V ∞ and it is V -uniform. Next we want to investigate the corresponding relationship between condition (DV3) and when the kernel P has a discrete spectrum in L v ∞ . First we establish an analogous 'near equivalence' between assumption (DV3) and the notion of v-separability, and in Theorem 3.5 we show that v-separability implies the discrete spectrum property. [ψ], we say that the linear operator P : it can be approximated uniformly by kernels with finite-rank. That is, for each ǫ > 0, there exists a finite-rank operator K ǫ such that ||| P − K ǫ ||| v ≤ ǫ. Since the kernel K ǫ has a finite-dimensional range space, we are assured of the existence of an integer n ≥ 1, functions Note that the eigenvalues of K ǫ may be interpreted as a pseudo-spectrum; see [8]. The following equivalence, established in the Appendix, illustrates the intimate relationship between the essential ingredients of the Donsker-Varadhan conditions, and the associated spectral theory as developed in this paper. Note that in Theorem 2.4 the density assumption from part (ii) of (DV3+) has been replaced by the more natural and weaker statement that I C W (r) P T 0 is v-separable for all r. 1 The fact that this is indeed weaker than the assumption in (DV3) (ii) follows from Lemma B.3 in the Appendix. Applications of Theorem 2.4 to diffusions on R n and refinements in this special case are developed in [26]. We say that a linear operator P : L v ∞ → L v ∞ has a discrete spectrum in L v ∞ if its spectrum S has the property that S ∩ K is finite, and contains only poles of finite multiplicity, for any compact set K ⊂ C \ {0}. 
It is shown in Theorem 3.5 that the spectrum of P is discrete under the conditions of (b) above. Taking a different operator-theoretic approach, Deuschel and Stroock [13] prove large deviations results for the empirical measures of stationary Markov chains under the condition of hypercontractivity (or hypermixing). In particular, their conditions imply that for some T 0 , the kernel P T 0 (x, dy) is a bounded linear operator from L 2 (π) to L 4 (π), with norm equal to 1. Multiplicative Regularity Recall the definition of the empirical measures in (14), and the hitting times {τ A } defined in (3). The next set of results characterize the drift criterion (DV3) in terms of the following regularity assumptions: The Markov process Φ is called geometrically regular if there exists a geometrically regular set C, and η > 0 such that (ii) A set C ∈ B is called H-multiplicatively regular (H-m.-regular) if for any A ∈ B + , there exists η = η(A) > 0 satisfying, The Markov process Φ is H-m.-regular if there exists an H-m.-regular set C ∈ B, and η > 0 such that In [34, Theorem 15.0.1] a precise equivalence is given between geometric regularity and the existence of a solution to the drift inequality (V4). The following analogous result shows that (DV3) characterizes multiplicative regularity. A proof of Theorem 2.5 is included in the Appendix. (ii) The drift inequality (DV3) holds for some V : X → (0, ∞) and with H ∈ L W ∞ . If either of these equivalent conditions hold, then for any A ∈ B + , there exists ǫ > 0, 1 ≥ η > 0, and B < ∞ satisfying, where V is the solution to (DV3) in (ii). In a similar vein, in [44] the following condition is imposed for a diffusion on X = R n : For any n ≥ 1 there exists K n ⊂ X compact, such that for any In [44,42] it is shown that this condition is closely related to the existence of a solution to (DV3), where the function W is further assumed to have compact sublevel sets. Under these assumptions, and under continuity assumptions similar to those imposed in [43], it is possible to show that the operator P n is compact for all n > 0 [42, Theorem 2.1], or [11,Lemma 3.4]. We show in Proposition 2.6 that the bound assumed in [44] always holds under (DV3+). We say that G : X → R + is coercive if the sublevel set {x : G(x) ≤ n} is precompact for each n ≥ 1. Coercive functions exist only when X is σ-compact. Proposition 2.6 Let Φ be a ψ-irreducible and aperiodic Markov chain on X. Assume moreover that X = R n ; that condition (DV3+) holds with V : X → [1, ∞) continuous; W unbounded; and the kernels {I C W (r) P T 0 : r ≥ 1} are v-separable for some T 0 ≥ 1. Then, there exists a sequence of compact sets {K n : n ≥ 1} satisfying (27). Proof. Lemma B.2 combined with Proposition C.7 implies that we may construct functions (V 1 , W 1 ) from X to [1, ∞), and a constant b 1 satisfying the following: sup{V (x) : Lemma C.8 combined with continuity of V then implies that (27) also holds, with K r = closure of C W 1 (n r ) for some sequence of positive integers {n r }. Proposition 2.6 has a partial converse: Proposition 2.7 Suppose the chain Φ is ψ-irreducible and aperiodic. Suppose moreover that X = R n ; that the support of ψ has non-empty interior; that P has the Feller property; and that there exists a sequence of compact sets {K n : n ≥ 1} satisfying (27). Then Condition (DV3) holds with V, W : X → [1, ∞) continuous and coercive. Proof. 
Proposition A.2 asserts that there exists a solution to the inequality H(V ) ≤ − 1 2 W + bI C with (V, W ) continuous and coercive, C compact, and b < ∞. Under the assumptions of the proposition, compact sets are small (combine Proposition 6.2.8 with Theorem 5.5.7 of [34]). We may conclude that C is small, and hence that (DV3) holds. Perron-Frobenius Theory As in [32] we find strong connections between the theory developed in this paper, and the Perron-Frobenius theory of positive semigroups, as developed in [39]. Suppose that { P n : n ∈ Z + } is a semigroup of positive operators. We assume that { P n } has finite spectral radiusξ in L v ∞ . Then, the resolvent kernel defined by R λ := [Iλ − P ] −1 is a bounded linear operator on L v ∞ for each λ >ξ. We assume moreover that the semigroup is ψ-irreducible, that is, whenever A ∈ B satisfies ψ(A) > 0, then ∞ k=0 P k (x, A) > 0, for all x ∈ X. If Φ is a ψ-irreducible Markov chain, then for any measurable function F : X → R, the kernel P = P f generates a ψ-irreducible semigroup. In general, under ψ-irreducibility of the semigroup, one may find many solutions to the minorization condition, with λ > 0, s ∈ B + , and ν ∈ M + , that is, s : X → R + is measurable with ψ(s) > 0, and ν is a positive measure on (X, B) satisfying ν(X) > 0. The pair (s, ν) is then called small, just as in the probabilistic setting. Theorem 3.2 of [39] states that there exists a constantλ ∈ (0, ∞], the generalized principal eigenvalue, or g.p.e., such that, for any small function s ∈ B + , The semigroup is said to beλ-transient if for one, and then all small pairs (s, ν), satisfying s ∈ B + , ν ∈ M + , we have ∞ k=0λ −k−1 ν P k s < ∞; otherwise it is calledλ-recurrent. Proposition 2.8 shows that the generalized principal eigenvalue coincides with the spectral radius when considering positive semigroups that admit a spectral gap. Related results may be found in Theorem 4.4 and Proposition 4.5 of [32]. Proposition 2.8 Suppose that { P n : n ∈ Z + } is a ψ-irreducible, positive semigroup. Suppose moreover that the semigroup admits a spectral gap in L v ∞ , with finite spectral radiusξ. Then: (iv) For any λ >ξ, and any (s, ν) that solve (28) Proof. Suppose that either (i) or (ii) is false. In either case, for all small pairs (s, ν), It then follows that the projection operator Q defined in (25) satisfies ν Qs = 0 for all small s ∈ L v ∞ , ν ∈ M v 1 . This is only possible if Q = 0, which is impossible under our assumption that the semigroup admits a spectral gap. To complete the proof, observe that the semigroup generated by the kernel R λ also admits a spectral gap, with spectral radiusγ = (λ −ξ) −1 . It follows that there is a closed ball D ⊂ C containingγ such that the two kernels below are bounded linear operators on From (i) and (ii) we know that R λ isγ-recurrent, which implies that νYγs = 1, and that P h =ξh (see [39,Theorem 5.1]). Moreover, again from (i), (ii), since νYγs < ∞ it follows that the spectral radius of ( R λ − s ⊗ ν) is strictly less thanγ, which implies (iii). Finally, since |||Yγ||| v < ∞ we may conclude that h ∈ L v ∞ , and this establishes (iv). On specializing to the kernels {P f : F ∈ L W 0 ∞ } we obtain the following corollary. Define for any measurable function F : X → (−∞, ∞]: (i) Λ(F ) = log(λ(F )) = the logarithm of the g.p.e. for P f . Lemma 2.9 Consider a ψ-irreducible Markov chain, and a measurable function G : Proof. We have |||P n g ||| v < ∞ for some n ≥ 1 when Ξ(G) < ∞. 
Consequently, since G and V are assumed positive, we have g( Proposition 2.10 Under (DV3+) the functional Ξ is finite-valued and convex on L W 0 ∞ , and may be identified as the logarithm of the generalized principal eigenvalue: Proof. Theorem 2.4 implies that P f is v-separable, and Proposition 2.8 then gives the desired equivalence. Convexity is established in Lemma C.1. The spectral radius of the twisted kernel given in (21) also has a simple representation, when the function h is chosen as a solution to the multiplicative Poisson equation: ∞ , the twisted kernelPf satisfies (DV3+) with Lyapunov functionV := V −F + c for c ≥ 0 sufficiently large. Consequently, the semigroup generated by the twisted kernel has a discrete spectrum in Lv ∞ , and its log-spectral radius has the representation,Ξ Proof. The kernels P f andPf are related by a scaling and a similarity transformation, It follows that (DV3+) (i) is satisfied with the Lyapunov functionV , and we haveV ≥ 1 for sufficiently large c sincef ∈ L v ∞ . The representation ofΞ also follows from the above relationship betweenPf and P f . Since the set C W (r) is small for the semigroup {P t f : t ≥ 0}, there exists ǫ > 0, T 1 < ∞, and a probability distribution ν such that Consequently,f −1 is bounded on C W (r). Doeblin and Uniform Conditions The uniform upper bound in condition (DV3+) (ii) is easily verified in many models. Consider first the special case of a discrete time chain Φ with a countable state space X, and with W such that C W (r) is finite for all r < W ∞ . In this case we may take T 0 = 1 in (DV3+) (ii), and set This is the starting point for the bounds obtained in [2]. A common assumption for general state space models is the following: See [13,12], as well as [43,27,29]. It is obvious that (31) implies the validity of the upper bound in our assumption (DV3+) (ii). Somewhat surprisingly, Condition (U) also implies a corresponding lower bound, and moreover we may take the bounding measure equal to the invariant measure π: Proposition 2.12 Suppose that Φ is an aperiodic, ψ-irreducible chain. Then, condition (U) holds if and only if there is a probability measure π on (X, B), a constant N 0 ≥ 1, and a sequence of non-negative numbers {δ n : n ≥ N 0 }, satisfying, Proof. It is enough to show that condition (U) implies the sequence of bounds given in (32). Condition (U) implies the following minorization, Since the chain is assumed aperiodic and ψ-irreducible, it follows that the chain is uniformly ergodic, a property somewhat stronger than Doeblin's condition [34,Theorem 16.2.2]. Consequently, there exists an invariant probability measure π, and constants B 0 < ∞, b 0 > 0 such that, Condition (U) then gives the following upper bound: On multiplying (31) by π(dy), and integrating over y ∈ X, we obtain, Let Γ denote the bivariate measure given by, Γ(dx, dy) = π(dx)P T 1 (x, dy), for x, y ∈ X. The previous bound implies that Γ has a density p(x, y; T 1 ) with respect to π×π, where p( · , · ; T 1 ) is jointly measurable, and may be chosen so that it satisfies the strict upper bound, p(x, y; The probability measure Γ has common one-dimensional marginals (equal to π). Consequently, we must have p(x, y; T 1 )π(dx) = 1 a.e. y ∈ X [π]. For n ≥ 2T 1 we define the density p(x, y; n) via, p(x, y; n) := P n−T 1 (x, dz)p(z, y; T 1 ), x, y ∈ X. We have the upper bound sup x,y p(x, y; n) ≤ b 0 for all n ≥ T 1 since P k is an L ∞ -contraction for any k ≥ 0. Combining this bound with (33) gives the strict bound, This easily implies the result. 
Note that, for the special case of reflected Brownian motion on a compact domain, a similar result is established in [3]. We have already noted in the above proof that the lower bound in (32) implies the Doeblin condition, which is known to be equivalent to (V4) with V bounded for a ψ-irreducible chain [34,Theorem 16.2.2]. Consequently, condition (U) frequently holds for models on compact state spaces but it rarely holds for models on R n . We summarize this and related correspondences with drift criteria here. where r > 1 is arbitrary, and ǫ > 0 is to be determined. The functions V and V 0 are equivalent when ǫ ≤ T 1 −1 r −T 1 +1 since then by Hölder's inequality, x ∈ X, and the right hand side is in under the assumptions of (ii). Moreover, we have V ≥ ǫV 0 by considering only the first term in the definition of V . Hence V ∈ L V 0 ∞ and V 0 ∈ L V ∞ , which shows that V and V 0 are equivalent. We assume henceforth that this bound holds on ǫ. Hölder's inequality also gives the bound, This implies the result since the state space is small. Donsker-Varadhan Theory In Donsker and Varadhan's classic papers [14,15,16] there are two distinct sets of assumptions that are imposed for ensuring the existence of a large deviations principle, roughly corresponding to parts (i) and (ii) of our condition (DV3+). Lyapunov criteria. The Lyapunov function criterion of [16,43] is essentially equivalent to (DV3), with the additional constraint that the function W has compact sublevel sets; see conditions (1)-(5) on [43, p. 34]. In the general case (when X is not compact) this implies that (DV3) holds with an unbounded W . It is worth noting that the nonlinear generator is implicitly already present in the Donsker-Varadhan work, visible both in the form of the rate function, and in the assumptions imposed in [15,16,43]. Continuity and density assumptions. In [43] two additional conditions are imposed on Φ. It is assumed that the chain satisfies a strong version of the Feller property, and that for each x, P (x, dy) has a continuous density p x (y) with respect to some reference measure α(dy) which is independent of x. These rather strong assumptions are easily seen to imply condition (DV3+) (ii) when W is coercive, so that the sets C W (r) are pre-compact. Multiplicative Mean Ergodic Theorems The main results of this section are summarized in the following two theorems. In particular, the multiplicative mean ergodic theorem given in (35) will play a central role in the proofs of the large deviations limit theorems in Section 5. For all these results we will assume that Φ satisfies (DV3) with an unbounded function W . As above, we let B + denote the set of functions h : X → [0, ∞] with ψ(h) > 0; for A ∈ B we write A ∈ B + if ψ(A) > 0; and let M + denote the set of positive measures on B satisfying µ(X) > 0. As in (6) in the Introduction, we choose an arbitrary measurable function W 0 : X → [1, ∞) in L W ∞ , whose growth at infinity is strictly slower than W . This may be expressed in terms of the weighted L ∞ norm via, lim where {C W (r)} are the sublevel sets of W defined in (5). The function W 0 is fixed throughout this section. Given F ∈ L W 0 ∞ and an arbitrary α ∈ C, we recall from [32] the notation P α := e αF P , and where v := e V and V is the Lyapunov function in (DV3+). Next, we collect the main results of this section in the following theorem. Recall the definition of the empirical measures {L n } from (14). (i) There is a maximal, isolated eigenvalue λ(αF ) ∈ S α satisfying |λ(αF )| = ξ(αF ). 
Furthermore, Λ(αF ) := log(λ(αF )) is analytic as a function of α ∈ Ω, and for real α it coincides with the log-generalized principal eigenvalue of Section 2.4. (ii) Corresponding to each eigenvalue λ(αF ), there is an eigenfunctionf α ∈ L v ∞ and an eigenmeasureμ α ∈ M v 1 , where v := e V , normalized so thatμ α (f α ) =μ α (X) = 1. The functionf α solves the multiplicative Poisson equation, and the measureμ α is a corresponding eigenmeasure: Proof. Lemma B.3 in the Appendix shows that (P f 0 ) 2T 0 +2 is v η -separable for any F 0 ∈ L W 0 ∞ , and Theorem 3.5 then implies that the spectrum of P f 0 is discrete. It follows that solutions to the eigenvalue problem for Theorem 3.4 establishes the limit (iii) for α ∈ C in a neighborhood of the origin. Consider then the twisted kernelP =Pf a , where a is real. Proposition 2.11 states that this satisfies (DV3+) with Lyapunov functionV := V /f a . An application of Theorem 3.4 to this kernel then implies a uniform bound of the form (iii) for α in a neighborhood of a. For any given a > 0 we may appeal to compactness of the line-segment {a ∈ R : |a| ≤ a} to construct ω > 0 such that (35) holds for α ∈ Ω. We note that this result has many immediate extensions. In particular, if condition (DV3+) is satisfied, then this condition also holds with (V, W ) replaced by (1 − η + ηV, W ) for any 0 < η < 1. Consequently,f ∈ L vη ∞ for any 0 < η ≤ 1 when F ∈ L W 0 ∞ . Part (iii) of the theorem is at the heart of the proof of all the large deviations properties we establish in Section 5. For example, from (35) we easily obtain that, for any F ∈ L W 0 ∞ , the log-moment generating functions of the partial sums converge uniformly and exponentially fast: We therefore think of Λ(αF ) as the limiting log-moment generating function of the partial sums {S n } corresponding to the function F , and much of our effort in the following two section will be devoted to examining the regularity properties of Λ and its convex dual Λ * . Following [32], next we give a weaker multiplicative mean ergodic theorem for α in a neighborhood of the imaginary axis. Recall the following terminology: The asymptotic variance σ 2 (F ) of a function F : X → R is defined to be variance obtained in the corresponding Central Limit Theorem for the partial sums of F (Φ(n)), assuming it exists. For a V -uniformly ergodic (or, equivalently, a geometrically ergodic) chain, the asymptotic variance is finite for any function F satisfying F 2 ∈ L V ∞ , and [34, Theorem17.0.1] gives the representation, The minimal h for which this holds is called the span of F . If the function F can be written as a sum, F = F 0 + F ℓ , where F ℓ is lattice with span h and F 0 has zero asymptotic variance then F is called almost-lattice (and h is its span). Otherwise, F is called strongly non-lattice. The lattice condition is discussed in more detail in [32]. The proof of the following result follows from Theorem 3.1 and the arguments used in the proof of [32,Theorem 4.2]. Theorem 3.2 (Bounds Around the iω-Axis) Assume that the Markov chain Φ satisfies condition (DV3+) with an unbounded W , and that F ∈ L W 0 ∞ is real-valued. Spectral Theory of v-Separable Operators The following continuity result allows perturbation analysis to establish a spectral gap under (DV3). Recall that we set v η := e ηV ; for any real-valued F ∈ L W ∞ we define f := e F ; and we let P f denote the kernel P f (x, dy) := f (x)P (x, dy). Lemma 3.3 Suppose that Φ is ψ-irreducible and aperiodic, and that condition Proof. 
We have from the definition of the induced operator norm, Also, we have the elementary bounds, for all x ∈ X, Combining these bounds gives, The supremum is bounded under the assumptions of the proposition, which establishes the desired bound. We now show that, for any given h ∈ L vη ∞ , F ∈ L W 0 ∞ , the map G → I G−F P f h represents the Frechet derivative of P f h. We begin with the mean value theorem, where F θ = θF + (1 − θ)G for some θ : X → (0, 1). The bounds leading up to (39) then lead to the following bound, for all x ∈ X, It follows that there exists b 1 < ∞ such that which establishes Frechet differentiability. Next we present a local result, in the sense that it holds for all F with sufficiently small L W ∞norm, where the precise bound on F W is not explicit. Although a value can be computed as in [32], it is not of a very attractive form. Note that Theorem 3.4 does not require the density condition used in (DV3+). The definition of the empirical measures {L n } is given in (14). (ii) There exist positive constants B 0 and b 0 such that, for all g ∈ L vη ∞ , x ∈ X, n ≥ 1, we have withf ,μ, λ(F ) given as in (i). (iii) If V is bounded on the set C used in (DV3) then we may take η 0 = 1. Proof. Assumption (DV3) combined with Theorem 2.2 implies that P is v η -uniform for all η > 0 sufficiently small (when V is bounded on C then (DV3) implies v-uniformity, so we may take η = 1). It follows that the inverse [I − P + 1 ⊗ π] −1 exists as a bounded linear operator on L vη ∞ [34, Theorem 16.0.1]. An application of Lemma 3.3 implies that the kernels P f converge to P in norm We have the explicit representation, writing ∆ : The first term on the right hand side exists as a power series in H −1 ∆, provided Moreover, in this case we obtain the bound, For any F ∈ L W ∞ we have the upper bound, |F | ≤ [|||F||| W δ −1 ]δW , where δ > 0 is given in (DV3). Recalling the definition of the log-generalized principal eigenvalue functional Λ from Section 2.4, and assuming that θ := |||F||| W δ −1 < 1, we may apply the convexity of Λ (see Lemma C.1) to obtain the upper bound, where b is given in (DV3). From (44) we conclude that there is a constant ǫ 0 > 0 such that ǫ 0 < 1 2 ǫ 1 , and (42) together with the bound |λ(F ) − 1| < 1 2 ǫ 1 hold whenever |||F||| W < ǫ 0 . For such F , it follows that (43) holds, and hence P f is v η -uniform. SettingȞ := [Iλ(F ) − P f + 1 ⊗ π] we may express the eigenfunction and eigenmeasure explicitly as: The remaining results follow as in [32,Theorem 4.1]. In order to extend Theorem 3.4 to a non-local result we invoke the density condition in (DV3+) (ii). In fact, any such extension seems to require some sort of a density assumption. Recall that, in the notation of Section 2.2 and Section 2.4, we say that the spectrum S in L v ∞ of a linear operator P : L v ∞ → L v ∞ is discrete, if for any compact set K ⊂ C \ {0}, S ∩ K is finite and contains only poles of finite multiplicity. We saw earlier that condition (DV3+) implies that P 2T 0 +2 is v-separable. Next we show in turn that any v-separable linear operator P has a discrete spectrum in L v ∞ . Proof. Assume first that T 0 = 1. For a given ǫ > 0, set P = K + ∆ with |||∆||| v < ǫ, and with K a finite-rank operator. Write K = n i=1 s i ⊗ ν i , and for each z ∈ C define the complex numbers {m ij (z)} via Let M (z) denote the corresponding n × n matrix, and set γ(z) = det(I − M (z)). 
The function γ is analytic on {|z| > |||∆||| v } because on this domain we have Moreover, this function satisfies γ(z) → 1 as |z| → ∞, from which we may conclude that the equation γ(z) = 0 has at most a finite number of solutions in any compact subset of {|z| > |||∆||| v }. As argued in the proof of Theorem 3.4, if γ(z) = 0, then we have, Conversely, this inverse does not exist when γ(z) = 0. Recalling that ǫ ≥ |||∆||| v , we conclude that S( P ) ∩ {z : |z| > ǫ} = {z : γ(z) = 0}. The right hand side denotes a finite set, and ǫ > 0 is arbitrary. Consequently, it follows that the spectrum of P is discrete. If T 0 > 1 then from the foregoing we may conclude that the spectrum of P T 0 is discrete. The conclusion then follows from the identity For each n ≥ 1, we define the nonlinear operators Λ n and G n the space of real-valued functions F ∈ L W 0 ∞ , via, Λ n (F ) := 1 n log E x exp(n L n , F ) The following result implies that both sequences of operators {G n } and {Λ n } are convergent. Smoothness properties of the limiting nonlinear operators are established in Propositions 4.3 and 4.5. Proposition 3.6 Suppose that (DV3+) holds with an unbounded function W . Then there exists a nonlinear operator G : Proof. Note that the second bound follows from the first. So, let δ 0 > 0 and F 0 ∈ L W 0 ∞ be given, and consider an arbitrary F ∈ L W 0 ∞ satisfying F − F 0 W 0 ≤ δ 0 . We defineF n := G n (F ) for n ≥ 0, andF = G(F ) := log(f ), withf given in Theorem 3.1. We show below that for any η > 0, there exists b(η) < ∞ such that for all such F , Taking this for granted for the moment, observe that we then have, for any r ≥ 1, n ≥ 1, Moreover, Theorem 3.1 implies that for any r ≥ 1, provided we have the uniform bound (45). Putting these two conclusions together, and letting r → ∞ then gives, lim sup This then proves the desired uniform convergence, since η > 0 is arbitrary. We now prove the uniform bound (45). We begin with consideration of the functions {F : F − F 0 W 0 ≤ δ 0 }, since the corresponding bounds on {F n } then follow relatively easily. Let τ = min{k ≥ 1 : |F (Φ(k))| ≤ r}, with r ≥ 1 chosen so that {x : |F (x)| ≤ r} ∈ B + . The stochastic process below is a positive local martingale, The local martingale property combined with Fatou's Lemma then gives the bound, and then by Jensen's inequality and the definition of τ , The right hand side is bounded below by −k 0 (V + 1) for some finite k 0 by (V3) and [34,Theorem 14.0.1]. However, this bound can be improved. Since F ∈ L W 0 ∞ , and since W ∈ L V ∞ with (W 0 , W ) satisfying (6), we can find, for any η 0 > 0, a constant b 0 (η 0 ) and a small set Small sets are special (see [39]), which implies that Moreover, it follows from [34, Theorem 14.0.1] that for some b 0 < ∞, Combining the bounds (46-49) establishes (45) forF . From (35) in Theorem 3.1 we have, for any η > 0, constants From the forgoing we see that the right hand side is bounded by 2ηV +b(2η) for some b(2η) < ∞ and all n. To complete the proof, we show that a corresponding lower bound holds: By definition of f n and an application of Jensen's inequality we have for all n ≥ 0, where the expectation is with respect to the process with transition kernelPf . On taking logarithms, and appealing to the mean ergodic limit for the twisted process, for constants This together with the bounds obtained onF shows that (45) does hold. Entropy, Duality and Convexity In this section we consider structural properties of the operators G, H and the functional Λ. 
As above, we assume throughout that Φ satisfies (DV3+) with an unbounded function W , and we choose and fix an arbitrary function W 0 ∈ L W ∞ as in (34). Also, throughout this section we restrict attention to real-valued functions in L W 0 ∞ and real-valued measures in M W 0 1 since one of our goals is to establish convexity and present Taylor series expansions of G, H, and Λ acting on L W 0 ∞ . Recall from Proposition 2.8 that the log-generalized principle eigenvalue Λ coincides with the log-spectral radius Ξ on this domain. The convex dual of the functional Λ : A probability measure µ ∈ M W 0 1 and a function F ∈ L W 0 ∞ form a dual pair if the above supremum is attained, so that Λ(F ) + Λ * (µ) = µ, F . The main result of this section is a proof that Λ * can be expressed in terms of relative entropy (recall (17)) provided that we extend the definition to include bivariate measures on (X × X, B × B). Throughout this section we let M denote a generic function on X × X, and Γ a generic measure on (X × X, B × B). The definitions of L W ∞ and M W 1 are extended as follows: The following proposition shows that consideration of the bivariate chain Ψ, For any univariate measure µ and transition kernelP , we write µ ⊙P for the bivariate measure µ⊙P (dx, dy) := µ(dx)P (x, dy). In particular, Proposition 4.1 shows that if Φ satisfies (DV3+) with an unbounded W , then so does Ψ. Proposition 4.1 The following implications hold for any Markov chain Φ, with corresponding bivariate chain Ψ: (ii) If C is a small set for Φ, then X × C is small for Ψ; (iii) If C ∈ B, µ, and T 0 ≥ 1 satisfy P T 0 (y, A) ≤ µ(A) for y ∈ C, A ∈ B, then on setting C 2 = X × C and µ 2 = µ ⊙ P we have, where P 2 denotes the transition kernel for Ψ; (iv) If ν ∈ M + is small for Φ then ν 2 := ν ⊙ P is small for Ψ; (v) Suppose that Φ satisfies the drift condition (DV3). Then Ψ also satisfies the following version of (DV3), where H 2 is the nonlinear generator for Ψ, C 2 = X × C, and Proof. To prove (i) consider any set A 2 ∈ B × B with ψ 2 (A 2 ) > 0. Define Then we have ψ(g) > 0, and hence by ψ-irreducibility of Φ, ∞ k=0 P k g (x) > 0, for all x ∈ X. It follows immediately that ∞ k=0 P k 2 I A 2 (x, y) > 0, for all x, y ∈ X, from which we deduce that Ψ is ψ 2 -irreducible. This proves (i), and (ii)-(iv) are similar. Proof. Any probability measure Γ on (X × X, B × B) can be decomposed as Γ(dx, dy) = π(dx)P (x, dy), whereπ is the first marginal for Γ. We show in Lemma 4.11 that the marginals of Γ must agree when Λ * (Γ) < ∞, and this establishes (i). Finiteness of Λ * (Γ) also implies that Γ is absolutely continuous with respect to π ⊙ P . This follows from Proposition 4.6 (iv) below, applied to the bivariate chain Ψ. Consequently, the transition kernel can be expressed,P (x, dy) = m(x, y)P (x, dy), for x, y ∈ X, for some measurable function m : With M = log m, Proposition C.10 gives the upper bound, We apply Proposition C.4 to obtain a corresponding lower bound: There is a sequence {M k : as k → ∞. Moreover, we have Λ(M ) = 0 sinceP (x, dy) = m(x, y)P (x, dy) is transition kernel for a positive recurrent Markov chain, and hence '1-recurrent [39]. Consequently, We thus obtain the identity Λ * (Γ) = Γ, M , which is precisely (ii). Finally, part (iii) follows from Proposition 4.6 (iii) combined with Proposition 4.10. Convexity and Taylor Expansions We now return to consideration of the univariate chain Φ, and establish some regularity and smoothness properties for the (univariate) functional Λ and the nonlinear operators H and G. 
We recall the definition of the twisted kernelP h from (21), and for any h : X → (0, ∞) we define the bilinear and quadratic forms, When h ≡ 1 we remove the subscript so that F, G := P (F G) − (P F )(P G), and Q(F ) := P (F 2 ) − (P F ) 2 . It is well-known that σ 2 (F ) := π(Q(ZF )) is equal to the asymptotic variance given in (37) (i) Λ is strongly continuous: For each F 0 ∈ L W 0 ∞ there exists B < ∞, such that for all F ∈ L W 0 ∞ satisfying F W 0 < 1, is analytic as a function of a. Moreover, we have the second-order Taylor expansion, where g =f 0 := e G(F 0 ) , and π g is the invariant probability measure ofP g . Proof. Part (i) follows from Proposition 2.10 combined with Lemma 3.3. To establish (ii) we note that Λ n (F 0 + aF ) is an analytic function of a for each initial x, and F 0 , F ∈ L W 0 ∞ . Proposition 3.6 states that this converges to Λ(F 0 + aF ), which is convex and hence also continuous on R, and the convergence is uniform for a in compact subsets of R. This implies that the limit is an analytic function of a. The second-order Taylor series expansion follows as in the proof of property P4 in the Appendix of [32]. where inequalities between functions are interpreted pointwise. (ii) H is smooth: We have the second-order Taylor expansions, for any F, F 0 ∈ L W 0 ∞ , where g =f 0 := e G(F 0 ) and A g is the generator ofP g . Proof. We first show that H : which shows that H(F ) ∈ L V ∞ . Given these bounds, the smoothness result (ii) is a consequence of elementary calculus. To establish convexity, we let H i = H(F i ) and f i = e F i , so that P f i = e H i f i , i = 1, 2. An application of Hölder's inequality gives the bound, We can also obtain a Taylor-series approximation for G, but it is convenient to consider a re-normalization to avoid additive constants. Define, We have the Taylor series expansion, where Zf 0 is the fundamental kernel forPf 0 , normalized so that πZf 0 F = 0, F ∈ L W 0 ∞ . Proof. The strong continuity follows from strong continuity of P g given in Lemma 3.3. The Taylor-series expansion is established first with F 0 = 0. Given F ∈ L W 0 ∞ , a ∈ R, we let f a = exp(aF ), and letf a be the solution to the eigenfunction equation given by Under assumption (DV3) alone we have seen in Theorem 3.4 that this is an eigenfunction in L v ∞ for small |a|. We also haveF a = log(f a ) = G 0 (F a ) + k(a), with k(a) = π(F a ). In the analysis that follow, our consideration will focus onF a rather than G 0 (F a ) since constant terms will be eliminated through our normalization. We note that the first derivative may be written explicitly as, Observe that the derivative is in L v ∞ since both I F P fa and [I − P fa + 1 ⊗ π] −1 are bounded linear operators on L v ∞ . Similar conclusions hold for all higher-order derivatives. We define the twisted kernel as above, As in [32] we may verify that the function F a = d daF a is a solution to Poisson's equation, where π a is invariant forP a . Setting a = 0 gives the first term in the Taylor series expansion for G 0 . 
To obtain an expression for the second term we differentiate Poisson's equation: We wish to compute the second derivative, F (2) a = d 2 da 2 log(f a ), which requires a formula for the derivative ofP a : For any G ∈ L V ∞ , d da Letting H a = F a , F a f a , the identities (58) and (59) then give, Letting Z a denote the fundamental kernel forP a we conclude that Evaluating all derivatives at the origin provides the quadratic approximation for G 0 , where Z is the fundamental kernel for P , normalized so that πZ = 0, and F = ZF . To establish the Taylor-series expansion at arbitrary a 0 ∈ R we repeat the above arguments, applied to the Markov chain with transition kernelP a 0 . This satisfies (DV3+) withV = c + V −F a 0 for sufficiently large c > 0, by Proposition 2.11. Representations of the Univariate Convex Dual The following result provides bounds on the (univariate) convex dual functional Λ * , and gives some alternative representations: Proposition 4.6 Suppose that (DV3+) holds with an unbounded function W . Then, for any probability measure µ ∈ M W 0 1 : (iii) There exists ǫ 0 > 0, independent of µ ∈ M W 0 1 , such that (iv) If µ is not absolutely continuous with respect to π, then Λ * (µ) = ∞. The proof is provided after the following bound. Lemma 4.7 Suppose that (DV3+) holds with an unbounded function W . Then,F ∈ L ∞ provided the following conditions hold: F ∈ L V ∞ ; Λ(F ) = 0; and F = F I C V (r) for some r ≥ 1. Proof. From the local martingale property we have, This then gives the bound, Proof of Proposition 4.6. For any F ∈ L W 0 ∞ , and any r ≥ 1 we write, F r = I C V (r) [F − γ r ], where γ r ∈ R is chosen so that Λ(F r ) = 0. Its existence follows from Proposition 4.3. To prove (iv), write µ = pµ 0 + (1 − p)µ 1 where µ 0 , µ 1 are probability measures on (X, B) such that µ 1 ≺ π is absolutely continuous and µ 0 is singular with respect to π. Let S denote the support of µ 0 . We have Λ(F ) = 0 whenever F ∈ L ∞ is supported on S, and hence which is infinite, as claimed. Proof. (sketch) LetP a =Pǧ a and let π a denote the invariant distribution for given G W 0 ≤ 1, and a ∈ [0, 1]. We let Z a the fundamental kernel forP a , normalized so that π a (Z a G) = π a (G), and we let G a = Z a G. Proposition 4.3 then gives the representation, The proof is completed on showing that where the supremum is over all a and G in this class. This follows from the arguments above -see in particular (45) and the surrounding arguments. In the following proposition we give another characterization of dual pairs (µ, G) for Λ * . (iii) Suppose that µ ∈ M W 0 1 , and that there exists G ∈ L W 0 ∞ satisfying, Then µ is invariant under the twisted kernelP g . Proof. The first result is simply Jensen's inequality: If equality holds, it then follows that e H is constant a.e. [π]. Characterization of the Bivariate Convex Dual We now turn to the case of bivariate functions and measures. Given any function of two variables M : X × X → R, we let m = e M and extend the definition of the scaled kernel in (20) via, P m (x, dy) := m(x, y)P (x, dy) , x, y ∈ X . The following result shows that the spectral radius of this kernel coincides with that defined for the bivariate chain Ψ. The proof is routine. Proposition 4.10 Suppose that P m has finite spectral radius λ m in v η -norm for all sufficiently small η > 0. Let P 2 denote the transition kernel for the bivariate chain Ψ. (i) I m P 2 has the same spectral radius in v η2 -norm for sufficiently small η > 0, with v η2 (x, y) = exp(η[V (y) + 1 2 δW (x)]). 
(ii) If P m has an eigenfunctionf , then I m P 2 also possesses an eigenfunction given by, For a Markov process with transition kernel P satisfying (DV3+), we say that M and M are similar if there exists H ∈ L V ∞ such that The function M is called degenerate if it is similar to M ≡ 0. The log-generalized principal eigenvalues agree (Λ(M ) = Λ( M )) whenever M, M are similar. This is the basis of the following two lemmas. Proof. The conclusion that Γ ≺ π ⊙ P follows from Proposition 4.6 (iv). where Γ 1 and Γ 2 denote the two marginals. If Γ 1 = Γ 2 it is obvious that the right hand side cannot be bounded in H. Lemma 4.12 Suppose that (DV3+) holds with an unbounded function W . Suppose moreover that M ∈ L V ∞,2 , and that the asymptotic variance of the partial sums n−1 k=0 M (Φ(k), Φ(k + 1)), n ≥ 1, is equal to zero. Then the function M is degenerate. Proof. Applying [32,Proposition 2.4] to the bivariate chain Ψ with transition kernel P 2 , we can find M such that where π 2 = π ⊙ P is the invariant probability measure for P 2 . Since Φ(k + 1) is conditionally independent of Φ(k − 1) given Φ(k), it follows that M does not depend on its first variable. Thus we can find F ∈ L V ∞ satisfying whereπ is a marginal of Γ (see Lemma 4.11). Then, the function M 0 is similar to whereF = log(f ), withf equal to an eigenfunction for P m , with eigenvalue λ(M ). (ii) Conversely, suppose that Γ ∈ M W 0 1,2 is given, satisfying Γ ≺ [π ⊙ P ], and suppose that its one-dimensional marginals agree. Consider the decomposition, Γ(dx, dy) = [π ⊙P ](dx, dy), whereπ := Γ 1 = Γ 2 is the (common) first marginal of Γ on (X, B), andP is a transition kernel. Let Proof. Part (i) is a bivariate version of Proposition 4.9: We know that Γ is an invariant measure for a bivariate process, whose one-dimensional transition kernel is of the form, Invariance may be expressed as follows: Γ(dy, dz) = x∈X Γ(dx, dy)P m (y, dz) , y, z ∈ X. To prove (ii), letΛ( · ) denote the functional defining the log-generalized principal eigenvalue for the transition kernelP = P m . Proposition 2. Large Deviations Asymptotics In this section we use the multiplicative mean ergodic theorems of Section 3 and the structural results of Section 4 to study the large deviations properties of the empirical measures {L n } induced by the Markov chain Φ on (X, B); recall the definition of {L n } in (14). As in the previous section, we also assume throughout this section that the Markov chain Φ satisfies (DV3+) with an unbounded function W , and we choose and fix a function W 0 : (34). Our first result, the large deviations principle (LDP) for the sequence of measures {L n }, will be established in a topology finer (and hence stronger) than either the topology of weak convergence, or the τ -topology. As described in the Introduction, we consider the τ W 0 -topology on the space M 1 of probability measures on (X, B), defined by the system of neighborhoods (16). Since the map (x 1 , . . . , x n ) → 1 n n i=1 δ x i from X n to M 1 may not be measurable with respect to the natural Borel σ-field induced by the τ W 0 -topology on M 1 , we will instead consider the (smaller) σ-field F, defined as the smallest σ-field that makes all the maps below measurable: where E o andĒ denote the interior and the closure of E in the τ W 0 topology, respectively. The proof is based on an application of the Dawson-Gärtner projective limit theorem along the same lines as the proof of Theorem 6.2.10 in [12]. 
The main two technical ingredients are provided by, first, the multiplicative mean ergodic theorem Theorem 3.1 (iii) which, as noted in (36), shows that the log-moment generating functions converge to Λ. And second, by the regularity properties of Λ and the identification of Λ * in terms of relative entropy, established in Section 4 and Section C of the Appendix. As in Section 4, in order to identify the rate function for the LDP we find it easier to consider the bivariate chain Ψ. Recall the bivariate extensions of our earlier definitions from equations (51), (52), (53) and (54). Proof of Theorem 5.1. We begin by establishing an LDP for Φ with rate function given by Λ * . Recall that Proposition 3.6 gives In order to apply the projective limit theorem we need to extend the domain of the convex dual functional Λ * as follows. For probability measures ν ∈ M W 0 1 , Λ * (ν) is defined in (50), and the same definition applies when ν is a probability measure not necessarily in M W 0 1 . More generally, let L ′ denote the algebraic dual of the space L = L W 0 ∞ , consisting of all linear functionals Θ : L → R, and equipped with the weakest topology that makes the functional Therefore, we can identify the space of probability measures M 1 with the corresponding subset of L ′ , and observe that the induced topology on M 1 is simply the τ W 0 -topology. Next, extend the definition of Λ * to all Θ ∈ L ′ via and observe that [12, Assumption 4.6.8] is satisfied by construction (with W = L = L W 0 ∞ , X = L ′ and B = F), and that by Proposition 4.3 the function Λ(F 0 + αF ) is Gateaux differentiable. Therefore, we can apply the Dawson-Gärtner projective limit theorem [12, Corollary 4.6.11 (a)] to obtain that the sequence of empirical measures {L n } satisfy the LDP in the space L ′ with respect to the convex, good rate function Λ * . Moreover, since by Proposition C.9 we know that Λ * (Θ) = ∞ for Θ ∈ M 1 , we obtain the same LDP in the space (M 1 , F), with respect to the induced topology, namely, the τ W 0 -topology; see, e.g., [12,Lemma 4.1.5]. Next note that, in view of Proposition 4.1, the bivariate chain Ψ also satisfies the same LDP. But in this case, we claim that can express Λ * (Γ) for any bivariate probability measure Γ as follows: if the two marginals Γ 1 and Γ 2 of Γ agree; ∞ , otherwise. Finally, an application of the contraction principle [12, Theorem 4.2.1] implies that the univariate convex dual Λ * (ν) coincides with I(ν) in (62). Simply note that the τ W 0 -topology on the space of probability measures is Hausdorff, and that the map Γ → Γ 1 is continuous in that topology. Theorem 5.1 strengthens the "local" large deviations of [32] to a full LDP. The assumptions under which this LDP is proved are more restrictive that those in [32], but apparently they cannot be significantly relaxed. In particular, the density assumption of (DV3+) (ii) cannot be removed, as illustrated by the counter-example given in [18]. This example is of an irreducible, aperiodic Markov chain with state space X = [0, 1], satisfying Doeblin's condition. It can be easily seen that this Markov chain satisfies condition (DV3) with Lyapunov function V (x) = − 1 2 log x, x ∈ [0, 1], and with W given by Taking δ = 1, C = [0, 1] and b = 2 yields a solution to (DV3), with the Lyapunov function V and the unbounded function W as above. But for this Markov chain the density assumption in (DV3+) (ii) is not satisfied, and as shown in [18], it satisfies the LDP with a rate function different from the one in Theorem 5.1. 
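The objects appearing in Theorem 5.1 can be made concrete in the simplest possible setting of a finite state space, where L^v_∞ is just R^d and the generalized principal eigenvalue of the scaled kernel is its Perron-Frobenius eigenvalue. The sketch below is an illustration only and is not taken from the paper: the three-state kernel and the function F are arbitrary, but the computation of Λ(αF), of the finite-n log-moment generating functions Λ_n, and of the convex dual on a grid follows the definitions above.

```python
import numpy as np

# A small irreducible, aperiodic chain; in the finite-state case (DV3+) holds
# trivially and Theorem 3.1 reduces to Perron-Frobenius theory for the scaled
# kernel P_f(x, y) = exp(alpha*F(x)) * P(x, y). Both P and F are toy choices.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
F = np.array([1.0, -0.5, 0.25])

def Lambda(alpha):
    """Lambda(alpha*F): log of the Perron-Frobenius eigenvalue of I_f P."""
    Pf = np.diag(np.exp(alpha * F)) @ P
    return float(np.log(np.max(np.abs(np.linalg.eigvals(Pf)))))

def Lambda_n(alpha, n, x=0):
    """(1/n) log E_x[exp(alpha * S_n)], with S_n the sum of F over the first n
    states, computed by iterating the scaled kernel on the constant function 1."""
    Pf = np.diag(np.exp(alpha * F)) @ P
    h = np.ones_like(F)
    for _ in range(n):
        h = Pf @ h
    return float(np.log(h[x]) / n)

def rate(c, a_grid=np.linspace(-8.0, 8.0, 4001)):
    """Convex dual J(c) = sup_a [a*c - Lambda(a*F)], evaluated on a crude grid."""
    return max(a * c - Lambda(a) for a in a_grid)

if __name__ == "__main__":
    for n in (5, 50, 500):
        print(n, round(Lambda_n(0.7, n), 6))       # converges to the limit below
    print("Lambda(0.7 F) =", round(Lambda(0.7), 6))
    # The rate function vanishes at the stationary mean pi(F)
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
    print("J(pi(F)) =", round(rate(float(pi @ F)), 6))   # ~ 0
```

For this toy chain the printed values of Λ_n(0.7F) approach Λ(0.7F) as n grows, and the rate function vanishes at the stationary mean π(F), as the convexity and duality results of Section 4 require.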
The LDP of Theorem 5.1 can easily be extended to the sequence of empirical measures of k-tuples L n,k , defined for each k ≥ 2 by We write M 1,k for the space of all probability measures on (X k , B k ), and we let F k denote the σ-field of subsets of M 1,k defined analogously to F in (61), with X k in place of X, and with real-valued functions F in the space Similarly, the τ W 0 k -topology on M 1,k is defined by the system of neighborhoods A straightforward generalization of the argument in the above proof yields the following corollary. The proof is omitted. Corollary 5.2 Under the assumptions of Theorem 5.1, for any initial condition Φ(0) = x, the sequence of empirical measures {L n,k } satisfies the LDP in the space (M 1,k , F k ) equipped with the τ W 0 k -topology, with the good, convex rate function where ν k−1 denotes the first (k − 1)-dimensional marginal of ν k . Next we show that under the assumptions of Theorem 5.1 it is possible to obtain exact large deviations results for the partial sums S n , of a real-valued functional F ∈ L W 0 ∞ . In the next two theorems we prove analogs of the corresponding expansions of Bahadur and Ranga Rao for the partial sums of independent random variables [1]. Our results generalize those obtained by Miller [36] for finite state Markov chains, and those in [32] proved for geometrically ergodic Markov processes but only in a neighborhood of the mean; see [32] for further bibliographical references. First we note that, since for any F ∈ L W 0 ∞ the map ν → ν, F from M 1 to R, is continuous under the τ W 0 topology, we can apply the contraction principle to obtain an LDP for the partial sums {S n } in (66): Their laws satisfy the LDP on R with respect to the good, convex rate function J(c) as in (19), Alternatively, based on (the weak version of) the multiplicative mean ergodic theorem in (63), we can apply the Gärtner-Ellis theorem [12,Theorem 2.3.6] to conclude that the laws of the partials sums {S n } satisfy the LDP on R with respect to the good rate function J * (c), so that, in particular, J(c) = J * (c) for all c. Now suppose for simplicity that the function F has zero mean π(F ) = 0 and nontrivial central limit theorem variance σ 2 (F ) > 0; recall the definition of σ 2 (F ) from Section 3.1. To evaluate the supremum in (67), we recall from Lemma 2.10 that Λ(aF ) is convex in a ∈ R, and since by Theorem 3.1 it is also analytic, it is strictly convex. Therefore, if we define then J * (c) = ∞ for values of c larger than F max , and the probabilities of the large deviations events {S n ≥ nc} decay to zero super-exponentially fast. Therefore, from now on we concentrate on the interesting range of values 0 < c < F max . Note that, although in the case of independent and identically distributed random variables it is easy to identify F max as the right endpoint of the support of F , for Markov chains this need not be the case, as illustrated by the following example. Example. Let Φ = {Φ(n) : n ≥ 0} be a discrete-time version of the Ornstein-Uhlenbeck process in R 2 , with Φ(0) = x ∈ R 2 and, Φ(n + 1) = Φ 1 (n + 1) where {N (k)} is a sequence of independent and identically distributed N (0, 1) random variables. Let A denote the above 2-by-2 matrix, and assume that the roots of the quadratic equation z 2 + a 1 z + a 2 = 0 lie within the open unit disk in C. Note that there exists γ < 1 and a positive definite matrix P satisfying, A T P A ≤ γI. One may take P = ∞ 0 γ −k (A k ) T A k , where γ < 1 is chosen so that the sum is convergent. 
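The matrix construction in the last display is easy to check numerically. The sketch below is illustrative only: the AR(2) coefficients a1, a2 are arbitrary (not taken from the example above), A is the usual companion matrix of the recursion, and the series defining P is truncated once its terms are negligible. The code verifies that the series gives A^T P A = γ(P − I), and in particular A^T P A ≤ γP, which is the contraction property behind the quadratic Lyapunov function used in the example.

```python
import numpy as np

# Companion-matrix form of the AR(2) recursion x(n+1) = -a1*x(n) - a2*x(n-1) + N(n+1).
# The coefficients are illustrative, chosen so that both roots of
# z^2 + a1*z + a2 = 0 lie inside the open unit disk.
a1, a2 = -0.9, 0.2
A = np.array([[-a1, -a2],
              [1.0,  0.0]])
rho = float(np.max(np.abs(np.linalg.eigvals(A))))   # spectral radius of A (< 1 here)
gamma = 0.5 * (rho**2 + 1.0)                        # any gamma in (rho^2, 1) makes the sum converge

# P = sum_{k>=0} gamma^{-k} (A^k)^T A^k, truncated once the remaining terms are negligible
P = np.zeros((2, 2))
k, term = 0, np.eye(2)
while np.linalg.norm(term) > 1e-14:
    P += term
    k += 1
    Ak = np.linalg.matrix_power(A, k)
    term = gamma ** (-k) * (Ak.T @ Ak)

# The series gives A^T P A = gamma * (P - I), hence in particular A^T P A <= gamma * P
lhs = A.T @ P @ A
print(np.allclose(lhs, gamma * (P - np.eye(2)), atol=1e-10))    # True
print(np.all(np.linalg.eigvalsh(gamma * P - lhs) > 0.0))        # True: the difference is gamma * I
```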
Then Φ satisfies (DV3+) (i) with Lyapunov function V (x) = 1 + ǫx T P x, and W = V , for suitably small ǫ > 0 (hence, the drift condition (DV4) also holds). Condition (DV3+) (ii) holds with T 0 = 2 since P 2 (x, · ) has a Gaussian distribution with full-rank covariance. Consider the functions The asymptotic variance of F 0 is zero, and for any initial condition we have We conclude that F max = (F + ) max = 1, although π{F > c} > 0 for each c ≥ 0 under the invariant distribution π. Recall form Section 3.1 the definitions of lattice and non-lattice functionals. Theorem 5.3 (Exact Large Deviations for Non-Lattice Functionals) Suppose that Φ satisfies (DV3+) with an unbounded function W , and that F ∈ L W 0 ∞ is a real-valued, strongly-non-lattice functional, with π(F ) = 0 and σ 2 (F ) = 0. Then, for any 0 < c < F max and all x ∈ X, where a > 0 is the unique solution of the equation d da Λ(aF ) = c, σ 2 a := d 2 da 2 Λ(aF ) > 0,f a (x) is the eigenfunction constructed in Theorem 3.1, and J(c) is defined in (19). A corresponding result holds for the lower tail. The proof of Theorem 5.3 is identical to that of the corresponding result in [32], based on the following simple properties of a Markov chain satisfying (DV3+). We omit properties P5 and P6 since they are not needed here. Properties. Suppose Φ satisfies (DV3+) with an unbounded function W , and choose and fix an arbitrary x ∈ X and a function F ∈ L W 0 ∞ with zero asymptotic mean π(F ) = 0 and nontrivial asymptotic variance σ 2 = σ 2 (F ) = 0. Let S n denote the partial sums in (66) and write m n (α) for the moment generating functions The proofs of the following properties are exactly as those of the corresponding results in [32], and are based primarily on the multiplicative mean ergodic theorem Theorem 3.1, and the Taylor expansion of Λ(F ) given in Proposition 4.3. Observe that by Theorem 2.2 we have that the Lyapunov function V in (DV3+) satisfies π(V 2 ) < ∞. An analogous asymptotic expansion for lattice functionals is given in the next theorem; again, its proof is omitted as it is identical to that of the corresponding result in [32]. Theorem 5.4 (Exact Large Deviations for Lattice Functionals) Suppose Φ satisfies (DV3+) with an unbounded function W , and that F ∈ L W 0 ∞ is a real-valued, lattice functional with span h > 0, π(F ) = 0 and σ 2 (F ) = 0. Let {c n } be a sequence of real numbers in (ǫ, ∞) for some ǫ > 0, and assume (without loss of generality) that, for each n, c n is in the support of S n . Then, for all x ∈ X, where Λ n (a) is the log-moment generating function of S n , Λ n (a) := log E x e aSn , n ≥ 1, a ∈ R , each a n > 0 is the unique solution of the equation d da Λ n (a) = c n , and J n (c) is the convex dual of Λ n (a), J n (c) := Λ * n (c) := sup A corresponding result holds for the lower tail. Observe that the expansion (69) in the lattice case is slightly more general than the one in Theorem 5.3. If the sequence {c n } converges to some c > ǫ as n → ∞, then, as in [32], the a n also converge to some a > 0, and have, for all x ∈ S, where the last bound uses Cauchy-Schwartz. This gives an upper bound for x ∈ S, and the same bound also holds for all x since g n (x) ≤ sup y∈S g n (y). Choosing η > 0 so small that ρ := e η2T B 1 (2η)(1 − ǫ) Proof of Theorem 2.5. Proof of Theorem 2.2. The construction of a Lyapunov function V * follows from the bounds given above, beginning with (72) (note however that W ≡ 1 under (DV2)). Assume that the set A ∈ B + is fixed, with V bounded on A. 
We assume moreover that A is small -this is without loss of generality by [34,Proposition 5.2.4 (ii)]. Fix k ≥ 0, and define, Consideration of this stopping time in (72) gives the upper bound, for some b 1 < ∞, and on summing both sides we obtain the pair of bounds, We now demonstrate that this function satisfies the desired drift condition: We have, . This is indeed a version of (V4). Proposition A.2 Suppose that X is σ-compact and locally compact; that P has the Feller property; and that there exists a sequence of compact sets {K n : n ≥ 1} satisfying (27): For Then, there exists a solution to the inequality, such that V, W : X → [1, ∞) are continuous, their sublevel sets are precompact, C ∈ B is compact, and b < ∞. Proof. Let {O n : n ≥ 1} denote a sequence of open, precompact sets satisfying O n ↑ X, and K n ⊂ closure of O n ⊂ O n+1 , n ≥ 1. For each n ≥ 1 we consider a continuous function s n : X → [0, 1] satisfying s n (x) = 1 for x ∈ O n , and s n (x) = 0 for x ∈ O c n+1 . We then define a stopping time τ n ≥ 1 through the conditional distributions, From the conditions imposed on s n we may conclude that τ Kn ≥ τ On ≥ τ n ≥ τ O n+1 for each n ≥ 1. For n ≥ 1, m ≥ 1 we define V n,m : X → R + by, Continuity of this function is established as follows: First, observe that under the Feller property we can infer that P x {τ n = k} is a continuous function of x ∈ X for any k ≥ 1. The bound τ n ≤ τ Kn , n ≥ 1, combined with (27) then establishes a form of uniform integrability sufficient to infer the desired continuity. Moreover, by the dominated convergence theorem we have V n,m (x) ↓ 0, m → ∞, for each x ∈ X. Continuity implies that this convergence is uniform on compacta. We choose {m n : n ≥ 1} so that V n,mn (x) ≤ 1 on O n+1 , and we define V n = V n,mn . Letting W n = n − 1 1 − s m , we obtain the bound H(V n ) ≤ −W n + 1. Let {p n } ⊂ R + satisfy n≥1 p n = 1, p n n = ∞, and define, W := 1 + B v-Separable Kernels The following result is immediate from the definition (24). Lemma B.1 Suppose that { P n : n ∈ Z + } is a positive semigroup, with finite spectral radiuŝ ξ > 0. Then the inverse [Iz − P ] −1 admits the power series representation, where the sum converges in norm. Lemma B.2 (i) is a simple corollary: Lemma B.2 Consider a positive semigroup { P n : n ∈ Z + } that is ψ-irreducible. Then: (i) The spectral radiusξ in L v ∞ of { P t } satisfiesξ < b 0 for a given b 0 < ∞ if and only if there is a b < b 0 , and a function v 1 : X → [1, ∞) such that v 1 equivalent to v, and P v 1 ≤ bv 1 . Then v 1 ∈ L v ∞ by Lemma B.1, and v ≤ v 1 by construction. Moreover, it is easy to see that v 1 satisfies the desired inequality. Conversely, if the inequality holds then for any 0 < η < 1, n ≥ 1, It follows thatξ ≤ η −1 b since v and v 1 are equivalent. Since η < 1 is arbitrary, this shows that b ≥ξ, and completes the proof. The following result will be used below to construct v-separable kernels. Lemma B.3 Suppose that P is a positive kernel, and that there is a measure µ ∈ M v 1 satisfying Then P 2 is v-separable. We then define P ǫ (x, dy) = r ǫ (x, y)v −1 (y)µ(dy) , x, y ∈ X , and P ǫ2 := P P ǫ . The latter kernel may be expressed P ǫ2 = s i ⊗ ν i , with We have s i ∈ L v ∞ and ν i ∈ M v 1 for each i. For any g ∈ L v ∞ , x ∈ X, we then have, Lemma B.4 Suppose that (DV3) holds with W unbounded. Fix 0 < η ≤ 1, and consider any measurable function F satisfying We then have |||I C W (r) c P f ||| vη → 0, exponentially fast, as r → ∞. Proof of Theorem 2.4. (a) ⇒ (b). 
When (DV3) holds we can conclude from Lemma B.4 that |||P − I C W (r) P||| v 0 → 0 as r → ∞. It follows that |||P T − I C W (r) P T ||| v 0 → 0 as r → ∞ for any T ≥ 1. In particular, this holds for T = T 0 . Under the separability assumption on {I C W (r) P T 0 : r ≥ 1} it then follows that P T 0 is v-separable. (b) ⇒ (a). We first show that each of the sets {C v 0 (r) : r ≥ 1} is small. Under the assumptions of (b) we may find, for each ǫ > 0, an integer N ≥ 1, functions This gives for any r ≥ 1, Let A ∈ B be a small set with ν i (A c ) < ǫ for each i. From the bound above and using similar arguments, It follows that for any r ≥ 1, we may find a small set A(r) such that P T 0 (x, A(r)) ≥ 1 2 , for x ∈ C v 0 (r). It then follows from [34, Proposition 5.2.4] that C v 0 (r) is small. We now construct a solution to the drift inequality in (DV3). Using finite approximations as in (74), we may construct, for each n ≥ 1, an integer r n ≥ n such that Since the norm is submultiplicative, this then gives the bound, We then define for each n ≥ 1, From the previous bound on |||(P I C c rn ) k ||| v 0 we have the pair of bounds, Finally, we set where C = C v (r) for some r, and the constants b and r are chosen so that W (x) ≥ 1 for all x ∈ X. The bounds (75) together with the lower bound v n ≥ v 0 e n I C c rn imply that which implies the existence of r and b satisfying these requirements. In much of the remainder of the appendix we replace (DV3+) with the following more general condition: (ii) There exists T 0 > 0 such that I C W (r) P T 0 is v-separable for for each r < ∞. Theorem 2.4 states that this is roughly equivalent to (DV3+) with an unbounded function W . In fact, we do have an analogous upper bound for P T 0 : Lemma B.6 Suppose that the conditions of (76) hold. Then, for each r ≥ 1, ǫ > 0, there is a positive measure β r,ǫ ∈ M v 1 such that Proof. We apply the approximation (74) used in the proof of Theorem 2.4, where {s i : 1 ≤ i ≤ N } ⊂ L v 0 ∞ are non-negative valued, and {ν i : 1 ≤ i ≤ N } ⊂ M v 0 1 are probability measures. We may assume that the {s i } satisfy the bound 1 = P T 0 (x, X) ≥ s i (x) − 1, x ∈ C W (r), and it follows that we may take β r,ǫ = 2 N i=1 ν i . The following result is proven exactly as Lemma B.5, using Lemma B.6. C Properties of Λ and Λ * In this section we obtain additional properties of Λ and Λ * . One of the main goals is to establish approximations of Λ(G) through bounded functions when G is possibly unbounded. Similar issues are treated in [13,Chapter 5] where a tightness condition is used to provide related approximations. Lemma C.1 For a ψ-irreducible Markov chain: (i) The log-generalized principal eigenvalue Λ is convex on the space of measurable functions F : X → (−∞, ∞]. (ii) The log-spectral radius Ξ is convex on the space of measurable functions F : X → (−∞, ∞]. Proof. The proofs of (i) and (ii) are similar, and both proofs are based on Lemma B.2. We provide a proof of (ii) only. Fix F 1 , F 2 ∈ L W 0 ∞ , η, θ ∈ (0, 1), and let b i = η −1 ξ(F i ), i = 1, 2. Lemma B.2 implies that there exists functions {v 1 , v 2 } equivalent to v, and satisfying We then define so that by Hölder's inequality, The function v θ is equivalent to v. Consequently, we may apply Lemma B.2 once more to obtain that ξ( Taking logarithms then gives, This completes the proof since 0 < η < 1 is arbitrary. The following result establishes a form of upper semi-continuity for the functional Λ. 
Lemma C.2 Suppose that Φ is ψ-irreducible, and consider a sequence {F n } of measurable, real-valued functions on X. Suppose there exists a measurable function F : X → R such that F n ↑ F , as n ↑ ∞. Then the corresponding generalized principal eigenvalues converge: Λ(F n ) → Λ(F ), as n ↑ ∞. Proof. It is obvious that lim sup n→∞ Λ(F n ) ≤ Λ(F ). To complete the proof we establish a bound on the limit infimum. Under the assumptions of the proposition we have P T fn ≥ P T f 1 , for any T ≥ 1, n ≥ 1. It follows that we can find an integer T 0 ≥ 1, a function s : X → [0, 1], and a probability ν on B satisfying ψ(s) > 0 and P T 0 fn ≥ s ⊗ ν , 1 ≤ n ≤ ∞. Let (f n , λ n ) denote the Perron-Frobenius eigenfunction and generalized principal eigenvalue for P fn , normalized so that ν(h n ) = 1 for each n. For each n ≥ 1 we have the upper bound, P fnfn ≤ λ nfn . This gives a lower bound on the {f n }: f n ≥ λ −T 0 n P T 0 fnf n ≥ λ −T 0 n ν(f n )s = λ −T 0 n s. In applying Lemma C.2 we typically assume that suitable regularity conditions hold so that Ξ(F ) = Λ(F ). Under a finiteness assumption alone we obtain a complementary continuity result for certain classes of decreasing sequences of functions. One such result is given here: Lemma C.3 Suppose that |||P||| v < ∞, and that F : X → R is measurable, with Ξ(F + ) < ∞. Then, with F n := max(F, −n) we have, Ξ(F n ) ↓ Ξ(F ), as n ↑ ∞. To establish a tight approximation for Λ(M ), where M = log m is as in the proof Theorem 4.2, we will approximate M by bounded functions. Proposition C.4 Suppose that |||P||| v < ∞, and that F : X → R is measurable, with Ξ(F ) < ∞, and Λ(F ) = Ξ(F ). Then, there exists a sequence {n k : k ≥ 1} such that with F k := F I{−n k ≤ F ≤ k} we have: Proof. Let F 0 k := F I{F ≤ k}. From Lemma C.2 we have Λ(F 0 k ) ↑ Λ(F ), k → ∞. It follows that we also have Ξ(F 0 k ) ↑ Ξ(F ), k → ∞, since Ξ dominates Λ. We now apply Lemma C.3: For each k ≥ 1 we may find n k ≥ 1 such that with F k := F I{−n k ≤ F ≤ k}, The following proposition implies that Λ is tight in a strong sense under (DV3+): Proposition C.5 Suppose that the conditions of (76) hold. Then, for any increasing sequence of measurable sets K n ↑ X, and any G ∈ L W 0 ∞ , The proof is postponed until after the following lemma. Lemma C.6 Suppose that the conditions of (76) hold, and consider any increasing sequence of measurable sets K n ↑ X, and any G ∈ L W 0 ∞ . Then, on letting g n = exp(I K c n G), n ≥ 1, we have |||P T 0 P gn − P T 0 +1 ||| v → 0 , n → ∞. Proof. We may assume without loss of generality that G ≥ 0. As usual, we set g = e G . Under (76) we have |||P gn ||| v ≤ |||P g ||| v < ∞, n ≥ 1. Consequently, given Lemma B.4, it is enough to show that for any r ≥ 1, To see this, observe that for any h ∈ L v ∞ , x ∈ X, I C W (r) [P T 0 P gn − P T 0 +1 ]h (x) = I C W (r) [P T 0 I K n c [P g − P ]]h (x) , where the measure β r,ǫ ∈ M v 1 is given in Lemma B.6. Consequently, This proves the result since ǫ > 0 is arbitrary. Proof of Proposition C.5. To see (i), consider any G ∈ L W 0 ∞ , and any sequence of measurable sets K n ↑ X. We assume without loss of generality that G ≥ 0. Fix any b > 1, and define for n ≥ 1, G n = (T 0 + 1)bI K c n G. In view of Lemma C.6, given any Λ > 0, we may find n ≥ 1 such that the spectral radius of the semigroup generated by the kernel P n := P T 0 P gn satisfies ξ n < e Λ . With n, Λ fixed, we then have for some b n < ∞, P k n v ≤ b n e kΛ v for k ≥ 1. This has the sample path representation, x ∈ X, k ≥ 1. 
Denote by h 0,k (x) the expectation on the left hand side. We then have, for each j ≥ 1, h j,k (x) := P j h 0,k (x) ≤ b n e kΛ (|||P||| v ) j v(x) , x ∈ X. We then obtain the following bound using Hölder's inequality, x ∈ X, k ≥ 1. This shows that Λ I Kn G → Λ G as claimed. Proposition C.5 allows us to broaden the class of functions for which Ξ is finite-valued. If the state space X is σ-compact, then we may assume that W 1 is also coercive. Proof. Fix a sequence of measurable sets satisfying K n ↑ X, with sup x∈Kn V (x) < ∞ for each n. Proposition C.5 implies that we may find, for each k ≥ 1, an integer n k ≥ 1 such that Ξ(2 k+1 I K c n k W 0 ) ≤ 1. We then define The functional Ξ is convex by Lemma C.1, which gives the bound, To see that W 1 ∈ L V ∞ we apply Lemma 2.9. Finally, if X is σ-compact, then the {K n } may be taken to be compact sets, which then implies the coercive property for W 1 . We have the following useful corollary. The proof is routine, given Proposition C.7 and Proposition B.2 (i); see also [2,Theorem 2.4]. We now turn to properties of the dual functional Λ * defined in (64). The continuity results stated in Proposition C.5 lead to the following representation. Proposition C.9 Suppose that the conditions of (76) hold. Let Θ be a linear functional on L W 0 ∞,2 satisfying Λ * (Θ) < ∞. Then Θ may be represented as, where ν ∈ M W 0 1 is a probability measure. Proof. We proceed in several steps, making repeated use of the bound, First note that on considering constant functions in (77) It is clear that finiteness of Λ * implies that Θ, 1 = 1. Next, consider any G : X → R + with G ∈ L W 0 ∞ . Then, since Λ(cG) ≤ 0 for c ≤ 0, We conclude that Θ, G ≥ 0 for G ≥ 0. Consider now a set A ∈ B of ψ-measure zero. Then Λ(cI A ) = 0 for any c ≥ 0, and we can argue as above using (77) that ∞ > Λ * (Θ) ≥ sup c>0 Θ, I A c, which shows that Θ, I A = 0. It follows that lim sup n→∞ Θ(G n ) = 0, which implies that Θ defines a countably additive set function, so that Θ is in fact a probability measure. More generally, we define Λ * for bivariate probability measures Γ not necessarily in M W 0 1,2 using the same definition as in (54). Recall from Lemma 4.11 that the two marginals of Γ agree whenever Λ * (Γ) < ∞. Proposition C.10 provides further structure. Lemma B.5 shows that Λ(ǫW ) < ∞ for ǫ > 0 sufficiently small, and this gives (79). DefineP through the decomposition Γ =π ⊙P , and letĚ denote the expectation for the Markov chain with transition kernelP . We assume thatP is of the form P (x, dy) = m(x, y)P (x, dy), x, y ∈ X, and set M = log(m), since otherwise the relative entropy is infinite and there is nothing to prove. We then have, for any G ∈ L W 0 ∞,2 , Λ(G) = lim T →∞ Taking the supremum over all G ∈ L W 0 ∞,2 gives (78).
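For a finite state space the entropy representation behind (78) can be verified directly, since Λ(M) for a bivariate function M is just the logarithm of the Perron-Frobenius eigenvalue of P_m(x, y) = e^{M(x,y)}P(x, y). The following check is illustrative only (the two kernels are arbitrary toy examples): it confirms that no test function exceeds the relative-entropy value, and that M = log(dP̌/dP) attains it with Λ(M) = 0, mirroring the argument used in the proof of Theorem 4.2.

```python
import numpy as np
rng = np.random.default_rng(0)

def log_spectral_radius(K):
    return float(np.log(np.max(np.abs(np.linalg.eigvals(K)))))

def stationary(P):
    w, v = np.linalg.eig(P.T)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

d = 4
P  = rng.random((d, d)) + 0.1; P  /= P.sum(axis=1, keepdims=True)   # reference kernel
Pc = rng.random((d, d)) + 0.1; Pc /= Pc.sum(axis=1, keepdims=True)  # alternative kernel "P-check"
pic = stationary(Pc)
Gamma = pic[:, None] * Pc                                           # bivariate measure pi-check (x) P-check

# Relative-entropy value: sum_x pi-check(x) * D( P-check(x, .) || P(x, .) )
R = float(np.sum(Gamma * np.log(Pc / P)))

# Duality: Lambda*(Gamma) = sup_G [ <Gamma, G> - Lambda(G) ], with
# Lambda(G) the log Perron-Frobenius eigenvalue of P_m(x,y) = exp(G(x,y)) P(x,y).
def dual_value(G):
    return float(np.sum(Gamma * G)) - log_spectral_radius(np.exp(G) * P)

# Random test functions never beat R, and G = log(Pc/P) attains it (Lambda(M) = 0).
print(max(dual_value(rng.normal(size=(d, d))) for _ in range(2000)) <= R + 1e-9)
print(np.isclose(dual_value(np.log(Pc / P)), R))
```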
24,519.6
2005-02-24T00:00:00.000
[ "Mathematics" ]
Enhanced control of self-doping in halide perovskites for improved thermoelectric performance

Metal halide perovskites have emerged as promising photovoltaic materials, but, despite ultralow thermal conductivity, progress on developing them for thermoelectrics has been limited. Here, we report the thermoelectric properties of all-inorganic tin-based perovskites with enhanced air stability. Fine-tuning of the thermoelectric properties of the films is achieved by self-doping through the oxidation of tin(II) to tin(IV) in a thin surface layer that transfers charge to the bulk. This separates the doping defects from the transport region, enabling enhanced electrical conductivity. We show that this arises due to a chlorine-rich surface layer that acts simultaneously as the source of free charges and a sacrificial layer protecting the bulk from oxidation. Moreover, we achieve a figure-of-merit (ZT) of 0.14 ± 0.01 when chlorine doping and the degree of oxidation are optimised in tandem.

Optical images, Supplementary Figure 1 (A) to (E), show that all the films formed by vacuum deposition present a "mirror-like" surface. Films formed from the co-evaporation method directly show a perovskite black phase, whereas the sequentially deposited and SLS films are red-brown (due to the coexistence of SnI2 and CsI layers in the film) but convert to a black phase upon baking. The B-γ-CsSnI3 undergoes a significant phase transition to the Y-phase (yellow phase) when exposed to air. The Y-phase will continuously change to a dark-green phase (Cs2SnI6) where Sn2+ is totally oxidised to Sn4+. The Cs2SnI6 films are semi-transparent and stable in air.

CsSnI3-xClx thin films. We performed X-ray diffraction analysis of SLS 5% SnCl2 mixed halide CsSnI3-xClx thin films, which showed (Supplementary Figure 2 (a)) peaks at 25.02° and 29.15°, corresponding to the (220) and (202) planes, respectively, of the orthorhombic B-γ-CsSnI3 crystal structure. In fact, the mixed halide films processed by SLS show a similar crystal structure to undoped SLS CsSnI3 films (XRD presented in the main text). To investigate the difference between the pure CsSnI3 and mixed CsSnI3-xClx, we performed a slow X-ray diffraction scan from 20° to 30° at a rate of 1°/minute (Supplementary Figure 2 (b)). A peak at 23.00° is clearly observed, which is different to the typical peak at 22.80° of the CsSnCl3 perovskite (011) plane and is also shifted slightly with respect to the weak CsSnI3 perovskite (213) plane at 22.94°. 6 Supplementary Figure 3 (a-b) shows that the main change to the spectrum after exposure to air for 30 minutes is the (103) peak at 32.9°. The other peaks are reasonably unaffected.

Quantitative analysis of Cl states in mixed halide perovskite by X-ray photoelectron spectroscopy (XPS). To investigate the Cl bonding environment in the films, we performed X-ray photoelectron spectroscopy (XPS) of Cl 2p in 0.5%, 1%, 3% and 5% SnCl2 mixed halide CsSnI3-xClx perovskite films. The percentage we use refers to the mass of SnCl2 relative to SnI2 in our thin films before the baking step. The final atomic % of Cl in the film will be much lower. As shown in Supplementary Figure

Supplementary Note 4: UV-vis absorption spectra for air stability analysis. We quantified the film stability in ambient air by following the quenching of the optical absorbance. The sequentially deposited films presented poor stability, where the intensity of a degradation peak at 680 nm gradually increases after 100 minutes of air exposure.
For the co-evaporated films, there was no clear peak at 680 nm after 500 minutes of air exposure, though the absorbance at 420 nm reduced to 32% of its original value. In the SLS films, the degradation peak at 680 nm presented from ~380 minutes, and, after 500 minutes, the absorbance at 420 nm had reduced to 41% of its original value. When SnCl2 was introduced to form the mixed halide CsSnI3-xClx, the large improvement in film stability is evident from UV-vis absorption spectra that show no emergence of the degradation peak at 680 nm even after 500 minutes (Supplementary Figure 8 g and h). In Supplementary Figure 8(i), sequentially deposited films (red) show poor stability, with a 60% reduction in absorption at 420 nm after 100 minutes of air exposure. Mixed halide CsSnI3-xClx perovskite films with 5% SnCl2 show the best stability of all, with just 3% quenching of the 420 nm peak after 100 minutes of air exposure.

Supplementary Note 5: Sn oxidation states in CsSnI3-xClx perovskite films. For the pristine CsSnI3 perovskite films, the binding energy of Sn 3d5/2 was observed at 485.8 eV. However, the reported Sn 3d core binding energy shifts between Sn0, Sn2+ and Sn4+ oxidation states are in the range 1-1.8 eV (Sn0 to Sn2+) and 1.8-2.5 eV (Sn0 to Sn4+), respectively. 8,9 The small shift of 0.4 eV from Sn2+ to Sn4+ makes quantitative analysis of oxidation in our films challenging, so AES is employed to distinguish Sn2+ and Sn4+ states due to its substantial spectral shifts. Supplementary Table 1 shows the AES peaks of Sn metal. 10,11 Upon oxidation of metallic Sn samples, Kövér found Sn MNN kinetic energy shifts of ~3.8 eV and 5.8 eV for SnO and SnO2, respectively. 9 Lee observed small shifts of ~2 eV for 0.4 monolayer (ML) and 3.4 ML Sn oxides, attributed to the formation of SnO rather than SnO2. 12 Another indicator of the Sn chemical state, the modified Auger parameter, is shown in the Supplementary Information. Compared with the values of Sn0 (915 eV) 9,12 and Sn4+ (911.2 eV), 8 it is clear that our films contain oxidised forms of Sn in a mix of Sn2+ and Sn4+ states. As the depth goes down to 10 nm, there is a reduction in the Sn4+ character of the perovskite until something resembling pure Sn2+ is reached. Combining this with the spectral fitting in the main paper (Figure 4a,b), we can conclude that, on the timescale of our thermoelectric measurements, the oxidation process only proceeds in the outer surface layer of our films, leaving the bulk in a pristine state.

Wiedemann-Franz law in CsSnI3-xClx perovskite semiconductor thin films and extraction of the Lorenz number. The thermal conductivity can therefore be divided into two parts, the lattice and electronic contributions, as follows: κ = κlattice + κe, where the electronic part obeys the Wiedemann-Franz law, κe = LσT. We can extract L and the lattice thermal conductivity, κlattice, by plotting thermal conductivity versus electrical conductivity for a number of experimental measurements. To do this, after the first measurement of a sample, it was exposed to air for 3 minutes before re-measuring the electrical and thermal conductivity (σ3mins and κ3mins). Then the same sample was exposed to air for an additional 3 minutes, before measuring the electrical and thermal conductivity again (σ6mins and κ6mins). Thus, we obtained several values (σtime and κtime) and plot σtime vs. κtime. From the equation κ = (LT)σ + κlattice, we see that the slope gives the Lorenz number times temperature (L·T) and the intercept is the lattice thermal conductivity. In this situation, we must approximate that the lattice thermal conductivity does not change as the dopants are introduced.
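A short sketch of the fit just described may be useful. The (σ, κ) pairs below are placeholders rather than measured values; the point is only that, under the approximation of a doping-independent lattice term, a straight-line fit of κ against σ returns L·T as the slope and κlattice as the intercept.

```python
import numpy as np

# Linear fit of total thermal conductivity against electrical conductivity to
# separate the electronic (Wiedemann-Franz) and lattice contributions.
# The (sigma, kappa) pairs below are placeholders, not measured data.
T = 300.0                                          # measurement temperature, K
sigma = np.array([2000., 4000., 6000., 8000.])     # electrical conductivity, S/m
kappa = np.array([0.175, 0.190, 0.205, 0.220])     # total thermal conductivity, W/(m K)

# kappa = kappa_lattice + (L*T)*sigma  =>  slope = L*T, intercept = kappa_lattice
slope, kappa_lattice = np.polyfit(sigma, kappa, 1)
L = slope / T
print(f"Lorenz number L = {L:.2e} W Ohm K^-2")
print(f"Lattice thermal conductivity = {kappa_lattice:.3f} W m^-1 K^-1")
```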
In fact, dopants are defects, and do influence the lattice structure and disorder. Consequently, quantification of the Lorenz number is challenging in many thermoelectric materials. In our case, CsSnI3 perovskites support an effective way to extract L because of the self-doping process, which does not introduce extrinsic dopants. The Sn4+ sites that act as self-dopants are shown in the main manuscript to be located at the top surface of the film only (in a layer < 10 nm thick), but provide free charges to the bulk (~300 nm thick). The lattice thermal conductivity in > 95% of the film thickness is therefore unaffected by the doping process, as shown in Supplementary Figure 11.

Thermoelectric property measurement details. The electrical conductivity measurement is based on the van der Pauw method with four needle-like contact pads at the four corners of the material under test, as shown in Supplementary Figure 16 (a). The electrical conductivity can be calculated by the following equation, in which the film thickness and the resistance R of the film measured in the vertical and horizontal directions (as pictured) enter. The thin-film heater and thermometer are located along the centre-line of the membrane, and the temperature gradient can be adjusted with the current in the heater. The cold-side temperature is taken as the chip temperature (measured underneath in the base holder), allowing the thermovoltage to be obtained and a Seebeck coefficient calculated. In-plane thermal conductivity is measured by a transient 3-ω method. An alternating current I(t) = I0 × cos(ωt) is used to heat the stripe. Joule heating occurs at frequency 2ω due to the heating power P(t) = I0²R(1 + cos(2ωt))/2. As a result, the temperature oscillates at 2ω, and the temperature-dependent electrical resistance also has a component at 2ω (R2ω). The temperature change of the heating stripe is measured with a lock-in amplifier, which can extract the third harmonic of the voltage drop across the heating stripe (V3ω). The third-harmonic voltage captures the second-harmonic temperature rise (ΔT2ω) in the heater, which is a function of the thermal properties of the underlying materials. Considering in-plane thermal conductivity, by solving the two-dimensional partial differential heat equation across the membrane with the given boundary conditions, the general expression for the amplitude of the 3ω oscillation V3ω can be written in terms of the temperature coefficient of resistance, the total thickness of the sample plus 100 nm of Si3N4 membrane and 33 nm of Al2O3, the length of the heater, the width of the membrane, and the thermal relaxation time, which is defined via the mass density and the specific heat capacity. 16 At frequencies where the ω-dependent term becomes negligible, V3ω becomes constant, and Equation 3 can be written as Equation (4). To get the thermal conductivity of the sample, the contribution of the membrane thermal conductance should be removed. The sample thermal conductivity is then given by an expression involving the membrane thermal conductivity, the membrane thickness, and the sample thickness. The errors in the electrical and thermal conductivity are dominated by the measurement of the film thickness. The errors on the Seebeck coefficient come from the fitting error of the thermal voltage vs. temperature gradient data. Because the film thicknesses used in the electrical and thermal conductivity are identical, they cancel out in the calculation of ZT, limiting the error on the final value.
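The measurement pipeline described above can be summarised in a few lines. The sketch below is a hedged illustration, not the authors' analysis code: the van der Pauw expression used is the standard symmetric-square form (the specific equation referred to in the text is not reproduced there), the input values are placeholders, and the Seebeck uncertainty is taken from the covariance of the linear fit of thermovoltage against temperature gradient, as described.

```python
import numpy as np

def sigma_van_der_pauw(R_vert, R_horiz, t):
    """Electrical conductivity (S/m) from the textbook van der Pauw relation for
    a symmetric square sample with corner contacts; t is the film thickness (m).
    Used here as a stand-in for the equation referred to in the text."""
    R_sheet = np.pi * 0.5 * (R_vert + R_horiz) / np.log(2)   # sheet resistance, Ohm/sq
    return 1.0 / (R_sheet * t)

def seebeck_fit(delta_T, thermo_V):
    """Slope of thermovoltage vs. temperature gradient and its fit error; the
    quoted Seebeck uncertainty comes from exactly this kind of fit."""
    (slope, _), cov = np.polyfit(delta_T, thermo_V, 1, cov=True)
    return slope, float(np.sqrt(cov[0, 0]))

def zt(S, sigma, kappa, T):
    """ZT = S^2 * sigma * T / kappa; the film thickness cancels between sigma
    and kappa, which is why it drops out of the error on ZT."""
    return S**2 * sigma * T / kappa

# Placeholder inputs, for illustration only
sigma = sigma_van_der_pauw(R_vert=120.0, R_horiz=130.0, t=300e-9)
S, dS = seebeck_fit(np.array([1.0, 2.0, 3.0, 4.0]),              # temperature gradient, K
                    np.array([100e-6, 205e-6, 298e-6, 402e-6]))  # thermovoltage, V
print(f"sigma = {sigma:.0f} S/m, S = {S*1e6:.0f} +/- {dS*1e6:.1f} uV/K, "
      f"ZT = {zt(S, sigma, kappa=0.38, T=300.0):.3f}")
```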
2,388.4
2019-12-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Structural Damage Characteristics of a Layer-to-Layer 3-D Angle-Interlock Woven Composite Subjected to Drop-Weight Impact

The most attractive structural feature of the three-dimensional (3D) angle-interlock woven structure is that the straight weft yarns are bundled by the undulated warp yarns, which induces the overall good structural stability and a stable fabric structure. Thus the 3-D angle-interlock woven composite (3DAWC) prepared by the vacuum-assisted resin transfer molding (VARTM) curing process has excellent mechanical properties by using the fabric and epoxy resin as the reinforcement and matrix, respectively. The low-velocity impact damage properties of the composites under different drop-weight energies (70, 80, and 100 J) were tested experimentally. The load-displacement curves, energy-time curves, and the ultimate failure modes were obtained to analyze the performance of resistance to low-velocity impact, as well as the impact energy absorption effect and failure mechanism, especially the structural damage characteristics of the 3DAWC subjected to the low-velocity impact of drop weight. By analyzing the obtained experimental results, it is found that the fabric reinforcement is the primary energy absorption component and the impact energy mainly propagates along the longitudinal direction of the yarns, especially the weft yarn system, which is arranged in a straight way. In addition, as the impact energy increases, the energy absorbed and dissipated by the composite increases simultaneously. This phenomenon is manifested in the severity of deformation and damage of the material, i.e., the amount of deformation and size of the damaged area.

Keywords: 3-D angle-interlock woven composite; drop weight; low-velocity impact; energy absorption; structural failure characteristics

Introduction

Resin-based fiber-reinforced composite materials have been widely used in aerospace, transportation, construction engineering, sports equipment, personal protection, and other fields due to their significant advantages, such as lightweight, high strength, diversified structure, and good design ability. Among them, the three-dimensional (3-D) textile structural composites (3DTSCs), such as 3-D woven, knitted, and braided structures, exhibit better mechanical properties than the traditional two-dimensional (2-D) unidirectional composite materials. Due to the existence of the reinforcing fiber system in the thickness direction, the 3DTSCs can effectively avoid the delamination phenomenon and increase the inter-laminar shear strength; thus they have more obvious advantages under external loading conditions such as bending, impact, and fatigue [1,2]. Among the 3-D woven composites, two typical types are 3-D orthogonal and 3-D angle-interlock woven composites (3DOWCs and 3DAWCs), and each has its own advantages due to the specific structural characteristics [3][4][5][6]. The most prominent feature of the 3-D orthogonal woven structure is that the warp and weft yarns are arranged in a straight way and interlaced at 90°. Besides, they are bundled by the Z-yarn system in the thickness direction to form a tight and stable structure. The most prominent feature of the 3-D angle-interlock woven structure is that the straight-aligned yarn system and the yarns laid in the thickness direction are interwoven at a certain angle. The undulated warp yarn system is used to bind the weft yarn system that is arranged in a straight way, thereby effectively enhancing the connection strength between the layers and imparting good structural stability.
Therefore, according to the count of interlaced layers, 3-D angle-interlock woven structures can be classified into layer-to-layer and through-thickness types [7]. In engineering applications, the mechanical advantages of the corresponding fabric structure should be fully utilized to choose the most suitable fabric structure. Taking the 3-D angle-interlock woven fabric (3DAWF), or its reinforced composite material, i.e., the 3DAWC, subjected to impact-type loads (including high and low velocities) into consideration, the mechanical contributions from the structure should be highlighted. On the one hand, due to the existence of the undulated yarn system along the thickness direction, the material is given a better performance of resistance to interlayer damage than the 2-D fiber-reinforced composite materials [8][9][10]. On the other hand, the straight-lined yarn system helps to rapidly propagate the impact energy to a large area of the structure with an ultrahigh stress wave velocity. The material is fully loaded to effectively improve the capacity of energy absorption and impact resistance. Such types of materials have great potential for applications in impact loading conditions [11][12][13]. In practical engineering applications, compared with high-velocity impact conditions with severe material damage, for which the high strain rate effect should be taken into consideration, the influence of low-velocity impact loading on the mechanical properties of the 3DAWC is relatively small, but it cannot be ignored. At present, research on the failure behavior of the 3DAWC under low-velocity impact has considered room temperature and normal pressure as well as various special working conditions, such as high temperature, thermal oxidation, and aging [14,15].
However, the structural failure mechanism of the 3DAWC under drop-weight low-velocity impact still needs further study. In this article, an experimental investigation of a typical layer-to-layer 3DAWC subjected to drop-weight low-velocity impact loading at different impact energies (70, 80, and 100 J) is carried out to obtain the load-displacement curves, energy-time curves, and ultimate failure modes, in order to analyze the resistance to low-velocity impact, the impact energy absorption effect, and the failure mechanism, and especially the structural damage characteristics of the 3DAWC under drop-weight low-velocity impact, thereby guiding the structural optimization design of low-velocity impact-resistant composites. Materials The composite testing specimens used in the drop-weight impact tests consist of the 3DAWF reinforcement and a resin matrix. The reinforcing phase and matrix phase materials are carbon fiber (warp and weft fiber tows) and epoxy resin, respectively. The specifications of the 3DAWF are listed in Table 1. (In Table 1, T300 and T700 are two types of carbon fiber tows supplied by Toray Inc. (Japan); 6K and 12K indicate approximately 6,000 and 12,000 fibers per fiber tow (yarn), respectively; and "ends/cm" denotes the number of warp/weft yarns per unit length of the fabric.) In this study, a typical layer-to-layer 3DAWF is used, composed of two yarn systems: the undulated warp yarn system in the longitudinal direction and the weft yarn system arranged in a straight way. Relying on the binding effect provided by the warp yarn system, the weft yarn system forms a stable 3-D woven fabric structure with excellent mechanical properties. The composite specimens were manufactured using the vacuum-assisted resin transfer molding (VARTM) technique. According to ASTM D7136, the size of each composite specimen for drop-weight impact testing is 100 mm × 100 mm × 6 mm, and the fiber volume fraction is approximately 45%. The surface and cross section of a testing specimen are shown in Figure 1. Low-velocity impact tests The drop-weight impact tests were carried out on a WANCE (Shenzhen, China) DIT302E-Z drop-weight impact tester, as shown in Figure 2. The composite specimen was fixed in a specific fixture and placed in the working area of the tester, with the axis of the drop weight aligned with the center point of the specimen. The impact energy was set to 70, 80, and 100 J by controlling the drop height. All tests were performed at room temperature, approximately 25°C. Each test at a given impact energy was repeated three times and the average value was taken. Load-displacement curves The load-displacement curves of the tested 3DAWC specimens under the various impact energies are shown in Figure 3. The trends of the curves are similar for the different impact energies, and the impact process can be divided into three stages, as follows. In the initial stage, the curves show a significant linear rise, indicating that the composite specimens are being deformed; the deformation of the tested specimens in this stage is approximately 0.5-1 mm. In the second stage, the curves begin to show slight fluctuations, indicating that the initial damage of the composite specimens is mainly caused by cracking of the resin and the resin-fiber interface after the failure threshold is reached; the cumulative deformation at this stage is approximately 0.5-1.5 mm. In the third stage, a series of continuous, relatively larger fluctuations appears on the curves.
The fiber reinforcement is the main load-carrying component of the composite, indicating that the fracture failure of the fiber reinforcement plays an important role in this stage. Furthermore, considering that the 3DAWC is reinforced by multiple layers of warp and weft yarns, a similar large-amplitude fluctuation occurs whenever a layer reaches its failure threshold and breaks. Consistent with the number of layers in the direction of impact, a total of approximately six or seven fluctuations can be found. This process continues until the breakage of the underlying yarns, which induces the ultimate puncture failure of the composite. The cumulative deformation range is approximately 1.5-3.5 mm for this stage, which is also the primary failure stage of the 3DAWC specimen under low-velocity impact loading. Energy-time curves In addition, the curves of the energy absorbed and dissipated by the 3DAWC specimens under the various impact energies are shown in Figure 4. The absorbed and dissipated energy increases with the impact energy: the absorbed and dissipated energy values corresponding to initial impact energies of 70, 80, and 100 J are 25.24, 28.76, and 31.96 J, respectively. This is reflected in the severity of deformation and damage of the 3DAWC specimens. The impact-induced deformation in the impact direction increases with the initial impact energy, i.e., the normal impact deformation amounts corresponding to impact energies of 70, 80, and 100 J are 2.64, 2.85, and 3.46 mm, respectively. Likewise, the size of the impact-induced damaged area becomes larger as the initial impact energy increases. It is worth pointing out that only a relatively small part of the energy (approximately 30-35%) is absorbed and dissipated by the 3DAWC specimens during the impact process, resulting in a certain degree of damage, while most of the energy (approximately 65-70%) is carried away via the rebound of the drop weight due to the reaction force from the 3DAWC specimens. The ultimate damage mode During the drop-weight low-velocity impact of the layer-to-layer 3DAWC specimen, the impact energy is mainly absorbed and dissipated via damage modes such as resin cracking, fiber breakage, and de-bonding of the resin-fiber interface. As shown in Figure 5, a similar damage mode occurs for the three impact energies. Clear impact-induced pits are generated in the central region of the composite surface, where the specimens are directly struck by the drop-weight impactor, accompanied by cracking of fibers and resin. On the back side of each 3DAWC specimen, in addition to matrix cracking and fiber breakage, cracks parallel to the longitudinal direction of the fibers can also be found, accompanied by a significant protrusion. It is worth mentioning that the damage in resin and fibers mainly propagates along the longitudinal direction of the warp and weft yarns on both sides of the 3DAWC specimen, especially of the weft yarns that are arranged in a straight way.
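Returning briefly to the curves of Figures 3 and 4: the absorbed energy is typically obtained as the area under the measured load-displacement curve, and the drop heights behind the nominal impact energies follow from E = mgh. The short Python sketch below illustrates both calculations; the load-displacement arrays are placeholder values (not the measured curves of Figure 3) and the impactor mass is a hypothetical assumption, since it is not stated in the text.

```python
import numpy as np

# --- Absorbed energy as the area under a load-displacement curve ---
# Placeholder record (NOT the measured curves of Figure 3): displacement in mm, load in kN.
displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
load_kN = np.array([0.0, 4.0, 7.5, 9.0, 8.0, 6.5, 3.0, 0.5])
absorbed_from_curve = np.trapz(load_kN, displacement_mm)   # 1 kN*mm = 1 J
print(f"area under synthetic curve: {absorbed_from_curve:.1f} J")

# --- Quick check on the reported energy figures ---
impact_J = [70.0, 80.0, 100.0]
absorbed_J = [25.24, 28.76, 31.96]        # absorbed/dissipated energies reported above
MASS_KG = 16.0                            # hypothetical impactor mass (not given in the paper)
G = 9.81                                  # gravitational acceleration, m/s^2

for E, Ea in zip(impact_J, absorbed_J):
    height = E / (MASS_KG * G)            # E = m*g*h  ->  drop height in m
    velocity = (2.0 * G * height) ** 0.5  # impact velocity in m/s
    print(f"{E:5.0f} J: absorbed {Ea / E:5.1%}, rebound {1 - Ea / E:5.1%}, "
          f"drop height ~{height:.2f} m, impact velocity ~{velocity:.2f} m/s")
```

With the assumed 16 kg mass, the three energies would correspond to drop heights of roughly 0.45-0.64 m; a different mass simply rescales the heights and velocities without changing the impact energies or the absorbed fractions.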
In addition, it can be found that the greater the energy absorbed and dissipated by the 3DAWC specimen, the larger the area of damage, which is mutually confirmed by the phenomenon shown in Figure 5. The impact failure mechanism According to the earlier description, three main aspects on the damage of resin, fiber tows, and resin-fiber interface are involved in the drop-weight low-velocity impact process of the layer-to-layer 3DAWC. Therefore, the impact damage mechanism is divided into three parts and they are summarized as follows. Resin When the drop-weight impactor contacts with the resin on the surface of the 3DAWC, the composite enters the stage of deformation and progressive damage. The resin is influenced by the continuously strengthened stress and strain. Cracking is generated to absorb the impact energy when the stress and strain exceed their limit. Since the matrix absorbs only a small portion of the impact energy, a tiny fluctuation occurs in the load-displacement curves. Resin-fiber interface The impact energy propagates at a certain stress wave velocity during the impact process, resulting in a certain bending deformation and damage to absorb and dissipate the impact energy, as well as improving the load-carrying capacity of 3DAWC. Since the stress wave velocity is much greater than the dropweight velocity, stress wave is generated as the impactor contacts the composite surface, i.e., compression wave. It propagates rapidly along the lateral and longitudinal directions of the material, and then the reflected wave is generated when a free surface or interface is encountered, i.e., tensile wave. Under the joint action of compression wave and tensile wave, the onset of de-bonding phenomenon can be found at resinfiber interface in the composite structure. Moreover, it mainly propagates into the interior of the composite structure along the longitudinal direction of the fiber tows. Similarly, the tiny fluctuations in the load-displacement curves occur since relatively small portion of impact energy is absorbed by the interface de-bonding. Fiber reinforcement With the continuation of the impact process, due to the distribution of a certain amount of yarn reinforcement on the path of the drop-weight impactor, the breakage of fiber tows occurs after reaching the failure threshold. Since the fiber tows can no longer carry loads after breakage, the number of co-load-carrying fiber tows during the penetration process is constantly changing, and the load applied to the composite material is also constantly changing. Furthermore, since the fiber reinforcement absorbs most of the energy during the impact process, which causes a huge change in the force acting on the impactor, the continuous fluctuations with large amplitude occur on the load-displacement curves. Conclusions In this article, the energy absorption mechanism and structural failure characteristics of a layer-to-layer 3DAWC under drop-weight low-velocity impact loading are studied. The damage performance for different impact energy cases (70, 80, and 100 J) is tested. By analyzing the experimental results, the following conclusions have been obtained. 1. In view of the 3-D angle-interlock woven structure, the impact energy mainly propagates along the longitudinal direction of the straightly arranged weft yarns, instead of the undulated warp yarns, thus the impact energy propagates at a certain stress wave velocity. The capacity of impact resistance for the 3DAWC has been effectively improved. 2. 
According to the variation characteristics of the loaddisplacement curves, the drop-weight impact process can be divided into three stages: the initial stage of deformation; the second stage of initial damage, which is mainly due to the cracking or de-bonding of resin and resin-fiber interface; and the third stage of layer-by-layer style fracture failure of fiber reinforcement. 3. As the impact energy increases, the energy absorbed and dissipated by the composite increases simultaneously. This phenomenon is manifested in the severity of deformation and damage of the 3DAWC specimens. For the impactinduced deformation, the deformation in the impact direction increases simultaneously with the increase in the initial impact energy. As for the size of the impact-induced damaged area, it becomes larger with the increase in the initial impact energy.
Migfilin Interacts with Vasodilator-stimulated Phosphoprotein (VASP) and Regulates VASP Localization to Cell-Matrix Adhesions and Migration* Cell migration is a complex process that is coordinately regulated by cell-matrix adhesion and actin cytoskeleton. We report here that migfilin, a recently identified component of cell-matrix adhesions, is a biphasic regulator of cell migration. Loss of migfilin impairs cell migration. Surprisingly, overexpression of migfilin also reduces cell migration. Molecularly, we have identified vasodilator-stimulated phosphoprotein (VASP) as a new migfilin-binding protein. The interaction is mediated by the VASP EVH1 domain and a single L104PPPPP site located within the migfilin proline-rich domain. Migfilin and VASP form a complex in both suspended and adhered cells, and in the latter, they co-localize in cell-matrix adhesions. Functionally, migfilin facilitates VASP localization to cell-matrix adhesions. Using two different approaches (VASP-binding defective migfilin mutants and small interfering RNA-mediated VASP knockdown), we show that the interaction with VASP is crucially involved in migfilin-mediated regulation of cell migration. Our results identify migfilin as an important regulator of cell migration and provide new information on the mechanism by which migfilin regulates this process. Cell migration is a tightly controlled process that is crucially involved in embryonic development, wound repair, and other physiological processes (1)(2)(3)(4). Abnormal cell migration is an important causal factor for the pathogenesis and/or progression of a wide variety of diseases. Understanding how cells control their motility therefore is an important topic in molecular biology. Migfilin is a recently identified widely expressed focal adhesion protein that provides a link between cell-matrix adhesions and the actin cytoskeleton (5). Migfilin consists of three structurally distinct regions. The C-terminal region of migfilin is composed of three LIM domains, which mediates the interaction with Mig-2, a focal adhesion protein that is critically involved in cell-matrix adhesion and shape modulation (5,6). The N-terminal region of migfilin interacts with filamin (5), an actin-binding protein whose deficiency or mutations cause defects in cell migration (7,8). Between the N-terminal and the C-terminal regions lies a proline-rich domain. Unlike the N-and C-terminal domains, however, neither the binding partners nor the functions of the migfilin proline-rich domain were known. Interestingly, some human cells express not only migfilin but also a splicing variant (termed as migfilin(s)) lacking the prolinerich domain (5). In this study, we have sought to identify proteins that interact with the proline-rich domain of migfilin and investigated the roles of migfilin in regulation of cell migration. Our results show that vasodilator-stimulated phosphoprotein (VASP), 2 an actin cytoskeletal regulatory protein (reviewed in Refs. 9 -11), interacts with the proline-rich domain of migfilin. Depletion of migfilin diminished VASP localization to cell-matrix adhesions and impairs cell migration. Surprisingly, overexpression of migfilin also reduces cell migration. Using two different approaches (overexpression of VASP-binding defective migfilin mutants and siRNAmediated VASP knockdown), we show that the interaction with VASP is crucially involved in migfilin regulation of cell migration. 
Yeast Two-hybrid Assays-A cDNA fragment encoding human migfilin residues 1-189 was inserted into the pGBKT7 vector (Clontech). The construct was used as bait to screen a human keratinocyte MATCHMAKER cDNA library following the manufacturer's protocol (Clontech). Sixteen positive clones were obtained from the library screening. Five of them were sequenced, among which three were found to encode VASP. To identify the VASP domain that mediates migfilin binding, cDNA fragments encoding human VASP sequences were inserted into the pGADT7 vector. The cDNA fragments encoding migfilin sequences were inserted into the pGBKT7 vector. The pGADT7 and pGBKT7 vectors containing VASP or migfilin sequences were introduced into yeast cells (Saccharomyces cerevisiae strain AH109) and the interaction was analyzed following the manufacturer's protocol (Clontech). DNA Constructs, Transfection, and Immunoprecipitation-DNA vectors encoding FLAG-migfilin or FLAG-migfilin(s) were described (5,12). Deletion or substitution mutations (as specified in each experiment) were introduced into the migfilin coding sequence by PCR. To generate the vector encoding the GFP-tagged migfilin proline-rich domain, a cDNA fragment encoding migfilin residues 84 -180 was ligated into the pEGFP-C2 vector (Clontech). DNA vectors encoding wild type or mutant forms of migfilin were generated by ligating the wild type or mutant forms of migfilin cDNAs (including the stop codon) into the p3xFLAG-CMV-14 vector (Sigma). The migfilin coding sequences were confirmed by DNA sequencing. Cells were transfected with the vectors encoding various forms of migfilin using Lipofectamine reagents (Invitrogen). For immunoprecipitation * This work was supported by National Institutes of Health Grants GM65188 and DK54639 (to C. W.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1 analyses, the transfectants were lysed with the lysis buffer (1% Triton X-100 in 50 mM Tris-HCl, pH 7.4, containing 150 mM NaCl, 2 mM Na 3 VO 4 , 100 mM NaF, and protease inhibitors). The cell lysates were mixed with agarose beads conjugated with anti-FLAG mAb M2 (Sigma) or a rabbit anti-GFP Ab (Santa Cruz Biotechnology) and UltraLink Immobilized Protein G (Pierce). The beads were washed four times and the immunoprecipitates were analyzed by Western blotting with Abs as specified. Co-precipitation of Endogenous Migfilin and VASP-WI-38 cells were grown in DMEM containing 10% FBS in culture plates for 1 day. The cell monolayers (adhered) were lysed with the lysis buffer. In parallel, WI-38 cells were harvested by trypsinization and washed with DMEM containing 10% FBS. The cells were re-suspended in DMEM containing 10% FBS, and maintained in suspension on a rocking platform for 45 min (suspension). The cells were then washed with phosphate-buffered saline and lysed with the lysis buffer. Migfilin was precipitated from the lysates with GST-FLN21 fusion protein (containing filamin A residues 2282-2395) as described (5). siRNA and Transfection-The migfilin siRNA and control RNA were described (5). A second migfilin Stealth TM siRNA (target sequence: 5Ј-gaagaggguggcaucgucugucuuu-3Ј) was generated by Invitrogen. The migfilin Stealth siRNA was used in some experiments as specified. Two different VASP siRNAs were used. 
The first one (termed VASP siRNA1 herein) was purchased from Santa Cruz Biotechnology (catalog number: sc-29516) and the second one (termed VASP siRNA2 herein, target sequence: 5Ј-aaagaggaaatcattgaagcc-3Ј) was purchased from Invitrogen. Cells were transfected with the siRNA or the control RNA using Oligofectamine or Lipofectamine 2000 (Invitrogen) following the manufacturer's protocols. The cells were analyzed 2 days after transfection. In double DNA/RNA transfection experiments, MDA-MB-231 cells were transfected with the DNA vector encoding migfilin or a control vector using Lipofectamine Plus (Invitrogen). One day after DNA transfection, the cells were transfected with VASP siRNA or a control RNA using Lipofectamine 2000. The double transfectants were analyzed 2 days after the RNA transfection. Immunofluorescent Staining-Immunofluorescent staining was performed as described (13). Briefly, cells were plated on fibronectincoated coverslips, fixed, and dually stained with primary mouse mAbs and rabbit polyclonal Abs as specified. The primary mouse and rabbit Abs were detected with secondary fluorescein isothiocyanate-conjugated anti-mouse IgG and Rhodamine Red TX -conjugated anti-rabbit IgG Abs, respectively. Preparation of Triton X-100-soluble and -insoluble Fractions-Cells were plated on collagen I-coated 60-mm plates and incubated at 37°C under a 5% CO 2 , 95% air atmosphere for 2 h to allow formation of cell-matrix adhesions. The cells were rinsed once with a PIPES buffer (100 mM PIPES, pH 6.9, 0.1 mM EDTA, 0.5 mM MgCl 2 , 4 M glycerol) containing protease inhibitors, and then extracted with the same buffer supplemented with 0.75% Triton X-100. The cell extracts were collected, vortexed, and centrifuged at 20,800 ϫ g at 4°C for 5 min. Protein concentrations in the Triton X-100-soluble (cytosol) fractions were determined using a BCA protein assay (Pierce). The pellets (the Triton X-100-insoluble fractions) were extracted with 1% SDS in phosphatebuffered saline. The Triton X-100-soluble and -insoluble fractions were mixed with SDS-PAGE sample buffer and analyzed by Western blotting with anti-migfilin and anti-VASP Abs. Cell Migration-The cells (HeLa, HT-1080, or MDA-MB-231) were transfected with the siRNA duplexes or DNA vectors as specified in each experiment. Two days after the transfection, cell migration was analyzed using Transwell motility chambers as described (14). Briefly, the undersurfaces of the 8-mm pore diameter Transwell motility chambers (Costar) were coated with 20 g/ml fibronectin. The cells were suspended in 0.1 ml of DMEM containing 5 mg/ml bovine serum albumin and added to the upper chambers (2 ϫ 10 4 cells/chamber). After incubation at 37°C for the indicated period of time, the cells on the upper surface of the membrane were removed. The membranes were fixed and the cells on the undersurface were stained with Gills III hematoxylin. The cells from five randomly selected microscopic fields were counted. Madin-Darby canine kidney (MDCK) cell migration was assessed by the ability of the cells to migrate into a cell-free area as described (15). Briefly, the cells were plated in DMEM containing 10% FBS in plates that were pre-coated with 30 g/ml collagen I. The cell monolayers were wounded by scratching with a plastic pipette tip. After washing, the cells were incubated in DMEM containing 1% FBS and 0.5 g/ml mitomycin C for the indicated periods of time. 
Images of three different segments of the cell-free area were recorded, and the distances traveled by the cells at the front in three different segments of the wound were measured. For statistical analyses, a paired t test was used to compare two groups of data. For comparisons among three or more groups of data, one-way analysis of variance followed by Tukey's post-test was used. All values are presented as the mean ± S.D. from n experiments as indicated. p values <0.05 were considered statistically significant. Cell Adhesion-MDCK cells were transfected with a control vector lacking the migfilin sequence or vectors encoding different forms of migfilin as specified in the experiments. Two days after the transfection, the cells (5 × 10^4/well) were seeded in quadruplicate in 96-well plates that were pre-coated with 30 µg/ml collagen I. After incubation at 37°C for 40 min, the wells were washed three times with phosphate-buffered saline. The cells were quantified using crystal violet as described (16). The percentage of adhered cells was calculated as the absorbance at 570 nm from the adhered cells divided by the absorbance at 570 nm from the total seeded cells. The data from two experiments were analyzed by one-way analysis of variance followed by Tukey's post-test. p values <0.05 were considered statistically significant. Identification of VASP as a Binding Protein of Migfilin-We used two approaches to identify proteins that interact with the migfilin proline-rich domain. One is a structure-based "candidate" approach. VASP was a strong candidate because of the presence of multiple LPPPP motifs, potential docking sites for VASP (9,10), in the migfilin proline-rich domain. To test whether VASP interacts with migfilin, we transfected HeLa cells with a vector encoding FLAG-migfilin and a vector lacking the migfilin sequence as a control, respectively. Expression of FLAG-migfilin in the FLAG-migfilin transfectants (Fig. 1A, lane 2) but not the control transfectants (Fig. 1A, lane 1) was confirmed by Western blotting. FLAG-migfilin was immunoprecipitated with anti-FLAG mAb M2 (Fig. 1A, lane 4) and the immunoprecipitates were probed by Western blotting with an anti-VASP Ab. VASP was readily detected in the FLAG-migfilin immunoprecipitates (Fig. 1B, lane 4). In control experiments, no VASP (Fig. 1B, lane 3) was detected in control precipitates lacking FLAG-migfilin (Fig. 1A, lane 3). In the second approach, we screened a human yeast two-hybrid cDNA library (containing 2.5 × 10^6 independent clones) with a migfilin fragment containing the proline-rich domain as bait. Five positive clones were sequenced, among which three encoded VASP. To further analyze this, we performed yeast two-hybrid binding assays using vectors encoding different domains of VASP and migfilin. The results showed that the VASP EVH1 domain is both necessary and sufficient for interacting with migfilin (Fig. 1C). On the other hand, no interaction was detected between VASP and the C-terminal LIM region of migfilin (Fig. 1C). These results identify VASP as a migfilin-binding protein and map the migfilin-binding site to the EVH1 domain of VASP. Endogenous Migfilin and VASP Form a Complex in Cells-We next tested whether endogenous migfilin and VASP form a complex in cells. To do this, we precipitated migfilin from human cells with a GST fusion protein containing filamin repeat 21 (GST-FLN21), which recognizes the N-terminal region of migfilin (5). As expected, migfilin was precipitated from the lysates, and VASP was co-precipitated with migfilin from both suspended and adhered cells (Fig. 2), indicating that the migfilin-VASP complex forms prior to the formation of cell-matrix adhesions.
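The statistical treatment described above (paired t tests for two groups, one-way analysis of variance followed by Tukey's post-test for three or more groups, values reported as mean ± S.D., significance at p < 0.05) can be carried out with standard scientific Python tools; the sketch below uses invented migrated-cell counts purely to illustrate the workflow, not data from the paper.

```python
import numpy as np
from scipy import stats

# Made-up counts of migrated cells per microscopic field (five fields per condition),
# standing in for Transwell assay data; these are illustrative values only.
control     = np.array([52, 48, 55, 50, 47])
migfilin_kd = np.array([30, 27, 33, 29, 31])   # hypothetical knockdown condition
migfilin_oe = np.array([28, 35, 31, 26, 30])   # hypothetical overexpression condition

# Two groups: paired t test.
t_stat, p_two = stats.ttest_rel(control, migfilin_kd)
print(f"paired t test: t = {t_stat:.2f}, p = {p_two:.4f}")

# Three or more groups: one-way ANOVA followed by Tukey's post-test.
f_stat, p_anova = stats.f_oneway(control, migfilin_kd, migfilin_oe)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
tukey = stats.tukey_hsd(control, migfilin_kd, migfilin_oe)
print(tukey)   # pairwise comparisons; p < 0.05 considered significant

# Values reported as mean +/- S.D., e.g. for the control condition:
print(f"control: {control.mean():.1f} +/- {control.std(ddof=1):.1f} cells/field")
```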
VASP is phosphorylated by protein kinase A on its first phosphorylation site upon cell detachment (17), which causes a characteristic shift of VASP from 46 to approximately 50 kDa on SDS-PAGE gels. Consistent with the finding that the formation of the migfilin-VASP complex occurs prior to the formation of cell-matrix adhesions, the 50-kDa phospho-VASP also formed a complex with migfilin (Fig. 2B, lane 4). Mapping the VASP-binding Site to a Single LPPPP Site within the Proline-rich Domain of Migfilin-To facilitate structure-function analyses, we sought to define the VASP-binding site on migfilin. Toward this end, we first tested whether the migfilin proline-rich domain is necessary for the interaction with VASP. To do this, we transfected HeLa cells with vectors encoding FLAG-migfilin(s), which lacks the proline-rich domain (5), and FLAG-migfilin, respectively. The expression of FLAG-migfilin (Fig. 3A, lane 2) and FLAG-migfilin(s) (Fig. 3A, lane 3) was confirmed by Western blotting. FLAG-migfilin (Fig. 3A, lane 5) and FLAG-migfilin(s) (Fig. 3A, lane 6) were immunoprecipitated from the corresponding transfectants but not the control transfectants (Fig. 3A, lane 4) with an anti-FLAG mAb. As expected, VASP was readily co-immunoprecipitated with FLAG-migfilin (Fig. 3B, lane 5). By marked contrast, no VASP was co-immunoprecipitated with FLAG-migfilin(s) lacking the proline-rich domain (Fig. 3B, lane 6) or with the anti-FLAG mAb in the absence of migfilin (Fig. 3B, lane 3). Thus, the proline-rich domain of migfilin is required for the interaction with VASP. Although migfilin contains several polyproline motifs that could potentially serve as binding sites for profilin, profilin was not detected in the migfilin immunoprecipitates (Fig. 3C, lane 5), suggesting that the proline-rich domain of migfilin is preferentially recognized by VASP. The migfilin proline-rich domain contains multiple polyproline sites, including two LPPPP sites (starting from Leu residues at positions 104 and 166, respectively). To map the VASP-binding site, we introduced deletion mutations (Δ84-112 and Δ142-170) into the migfilin proline-rich domain (Fig. 3F). Cells were transfected with vectors encoding FLAG-tagged Δ84-112 or Δ142-170, and VASP binding was analyzed by co-immunoprecipitation as described in Fig. 3, A and B. Deletion of residues 84-112, but not that of residues 142-170, eliminated VASP binding (Fig. 3F). To define the VASP-binding site further, we introduced Ala substitution mutations into the Leu104 polyproline site. Cells were transfected with vectors encoding the FLAG-tagged Leu104 polyproline substitution mutant or FLAG-migfilin (as a positive control), or a vector lacking the migfilin sequence as a negative control. Expression of the FLAG-Leu104 polyproline mutant (Fig. 3G, lane 3) and FLAG-migfilin (Fig. 3G, lane 2) was confirmed by Western blotting. Co-immunoprecipitation assays showed that the Leu104 polyproline mutant (Fig. 3G, lane 6), unlike wild type migfilin (Fig. 3G, lane 5), failed to bind VASP (Fig. 3H, compare lanes 5 and 6). These results demonstrate that VASP binding is mediated by a single LPPPPP site located at positions 104-109 within the proline-rich domain. Migfilin Facilitates VASP Localization to Cell-Matrix Adhesions-Consistent with the interaction between migfilin and VASP, migfilin and VASP proteins co-localized in cell-matrix adhesions (Fig. 4, compare A and B).
To test whether migfilin plays a role in VASP localization to cell-matrix adhesions, we suppressed migfilin expression by transfection with the migfilin siRNA (5). Immunofluorescent staining with an anti-migfilin mAb revealed that the level of migfilin at cell-matrix adhesions was substantially reduced in the migfilin siRNA transfectants compared with that in the control transfectants (compare Fig. 4, C with A). As expected, clusters of VASP were readily detected in focal adhesions in control cells (Fig. 4, A and B). To analyze the effect of migfilin knockdown on VASP localization, we randomly selected 50 migfilin knockdown cells and found that all of them exhibited diminished focal adhesion localization of VASP (a representative migfilin knockdown cell is shown in Fig. 4, C and D). To confirm this, we transfected HeLa cells with a second migfilin siRNA (the migfilin Stealth siRNA, Invitrogen). Transfection of the cells with the migfilin Stealth siRNA effectively knocked down migfilin and diminished the localization of VASP to cellmatrix adhesions (data not shown). Collectively, these results suggest an important role of migfilin in facilitating VASP localization to cell-matrix adhesions. In contrast to VASP, we found that focal adhesion kinase was clustered at cell-matrix adhesions in both the control (Fig. 4, E and F) and migfilin knockdown cells (Fig. 4, G and H). To further analyze this, we dually stained the migfilin knockdown cells and control cells with mouse anti-vinculin and rabbit anti-VASP Abs. Clusters of vinculin and VASP were readily detected in control cells (Fig. 4, I and J). As expected, VASP was largely diffusely distributed in migfilin knockdown cells. We randomly selected 30 migfilin knockdown cells. Clusters of vinculin were detected at cell-matrix adhesions in all these cells, despite the diminished level of VASP in these adhesion structures (a representative cell is shown in Fig. 4, K and L). The vinculin clusters, however, appeared somewhat smaller in migfilin knockdown cells compared to those in the control cells (Fig. 4, compare I and K). Protein clusters at cell-matrix adhesions and cytoskeleton are often resistant to nonionic detergent (e.g. Triton X-100) extraction (12). To further analyze the effect of migfilin on VASP distribution, we extracted the migfilin siRNA transfectants and the control transfectants with a buffer containing 0.75% Triton X-100. As expected, transfection of the cells with the migfilin siRNA reduced the level of migfilin in the Triton X-100-soluble (cytosol) as well as the Triton X-100-insoluble (cytoskeletal) fractions (Fig. 4M). VASP was detected in both the Triton X-100soluble (Fig. 4N, lane 3) and Triton X-100-insoluble (Fig. 4N, lane 1) fractions. Knockdown of migfilin reduced the level of VASP in the Triton X-100-insoluble fraction (Fig. 4N, compare lane 1 with lane 2) and concomitantly increased the level of VASP in the Triton X-100-soluble fraction (Fig. 4N, compare lane 3 with lane 4). Re-probing the same membrane with an anti-vinculin Ab showed that the level of vinculin in these fractions was not significantly altered (Fig. 4O). The biochemical extraction results, together with those of the immunofluorescent staining, suggest that depletion of migfilin selectively attenuates the localization and clustering of VASP at cell-matrix adhesions. Migfilin Is Not Required for VASP Localization to Lamellipodia-We next tested whether migfilin is required for VASP localization to lamellipodia. 
Human HT-1080 fibrosarcoma cells were used in these studies as a high percentage of them form extensive VASP-positive lamellipodia that were often present in a polarized fashion (Fig. 5, B and F). Among 53 cells that were analyzed, 34 (64%) of them exhibited strong VASP-positive lamellipodia (two representative images are shown in Fig. 5, B and F). Despite the presence of VASP in lamellipodia, migfilin was not detected in this structure. Instead, it was highly concentrated in cellmatrix adhesions (Fig. 5, A and E), where VASP clusters were also detected (Fig. 5, B and F). Transfection of HT-1080 cells with migfilin siRNA substantially reduced the level of migfilin (Fig. 5, C and G). Immunofluorescent staining of migfilin knockdown cells with the mouse anti-migfilin mAb and rabbit anti-VASP Ab showed that, as expected, depletion of migfilin reduced the clustering of VASP at cellmatrix adhesions (Fig. 5, D and H). We analyzed 36 migfilin knockdown cells and detected VASP-positive lamellipodia in 25 (69%) of them. However, they often appeared shorter or un-polarized (Fig. 5, D and H). These results suggest that migfilin is not absolutely required for VASP localization to lamellipodia, albeit it may influence the length and polarity of lamellipodium. Migfilin Is a Biphasic Regulator of Cell Migration-The role of migfilin in regulation of cell migration had not been previously investigated. To test whether migfilin is required for cell migration, we transfected HeLa cells with the migfilin siRNA and the control RNA, respectively. As expected, the level of migfilin in the migfilin siRNA transfectants were substantially reduced (Fig. 6A). The levels of other cellular proteins, including actin (Fig. 6B) and other focal adhesion proteins such as Mig-2, filamin, VASP, paxillin, focal adhesion kinase, ILK, PINCH, and ␣-parvin (not shown) were not altered, confirming the specificity of the migfilin siRNA. We next compared the migration of the migfilin-deficient cells with that of the control cells. The results showed that depletion of migfilin significantly reduced cell migration (Fig. 6, C-E). To further test this, we transfected HT-1080 fibrosarcoma cells and MDA-MB-231 breast carcinoma cells with the migfilin siRNA and the control RNA, respectively. Transfection with the migfilin siRNA effectively reduced the level of migfilin in HT-1080 cells (Fig. 6F) and MDA-MB-231 cells (Fig. 6K). Equal loading was confirmed by probing the same samples with an anti-actin Ab (Fig. 6, G and L). Next, we compared migration of the migfilin knockdown cells with that of the control cells. In both HT-1080 (Fig. 6, H-J) and MDA-MB-231 (Fig. 6, M-O) cells, depletion of migfilin impaired cell migration. Knockdown of migfilin with the migfilin Stealth siRNA also impaired cell migration (not shown). Taken together, these results suggest that migfilin is required for proper cell migration. Next, we sought to test the effect of up-regulation of migfilin on cell migration. To do this, we overexpressed migfilin in HeLa cells (Fig. 7A, compare lane 1 with 3). Surprisingly, overexpression of migfilin in HeLa cells reduced cell migration (Fig. 7, B, C, and E). To further test this, we overexpressed migfilin in MDCK epithelial cells (Fig. 8A, lane 1). Again, overexpression of migfilin reduced MDCK cell migration (Fig. 8, B-D). Similarly, overexpression of migfilin in MDA-MB-231 cells reduced cell migration (see below). Thus, the effect of migfilin on cell migration is biphasic. 
On the one hand, migfilin is required for supporting cell migration and therefore loss of it impairs cell migration. On the other hand, expression of an excessive amount of migfilin also suppresses cell migration. The VASP Binding Is Crucial for Migfilin-mediated Suppression of Cell Migration-To test whether the VASP binding is involved in migfilin-mediated suppression of cell migration, we overexpressed migfilin(s), which lacks the VASP-binding proline-rich domain in HeLa cells (Fig. 7A, lane 2). Overexpression of migfilin(s), unlike that of migfilin, did not significantly reduce cell migration (Fig. 7, B-E). To further test this, we overexpressed migfilin(s) in MDCK cells (Fig. 8A, lane 3). Consistent with the results in HeLa cells (Fig. 7), overexpression of the VASP-binding defective migfilin(s) in MDCK cells did not significantly reduce cell migration (Fig. 8, B-E). These results raised an interesting possibility that VASP binding is required for migfilin-mediated suppression of cell migration. To test this, we overexpressed the VASP-binding defective Leu 104 polyproline mutant, and the wild type migfilin as a control, in MDCK cells. The expression of migfilin ( Fig. 9, lane 2) and the VASP-binding defective mutant (Fig. 9, lane 3) was confirmed by Western blotting. Overexpression of the VASP-binding defective mutant, unlike that of the wild type migfilin, did not significantly reduce MDCK cell migration (Fig. 9B), suggesting that the interaction with VASP is required for migfilin-mediated suppression of cell migration. To further analyze this, we tested the effect of overexpression of migfilin on cell-matrix adhesion. The results showed that overexpression of migfilin substantially increased cell-matrix adhesion (Fig. 9C). By contrast, overexpression of the VASP-binding defective Leu 104 polyproline mutant or migfilin(s) failed to enhance cell-matrix adhesion (Fig. 9C). The forgoing mutational experiments provide strong evidence for a role of the VASP binding in migfilin-mediated suppression of cell migration. If VASP is indeed involved in this process, depletion of VASP should abolish or significantly weaken the migfilin-mediated suppression of cell migration. Because siRNA that targets canine VASP was unavailable, we used human MDA-MB-231 cells to test this. Consistent with the results obtained with MDCK cells, overexpression of migfilin, but not that of the VASP-binding defective migfilin mutant, significantly reduced MDA-MB-231 cell migration (Fig. 10, A and B). To test whether VASP is involved in migfilin-mediated suppression of cell migration, we dually transfected MDA-MB-231 cells with VASP siRNA1 and the migfilin expression vector or VASP siRNA1 and the control vector lacking the migfilin sequence. In control experiments, we dually transfected MDA-MB-231 cells with the control RNA and the migfilin expression vector, or the control RNA and the control DNA vector lacking migfilin sequence. Western blotting showed that the level of VASP was reduced in both VASP siRNA1 transfectants (Fig. 10C, compare lanes 1 and 3 with lanes 2 and 4), irrespective of the level of migfilin. On the other hand, migfilin was overexpressed in both migfilin transfectants (Fig. 10C, compare lanes 3 and 4 with lanes 1 and 2). Knockdown of VASP in MDA-MB-231 cells reduced cell migration (Fig. 10E). Importantly, overexpression of migfilin in the VASP knockdown cells (Fig. 10F), unlike that in cells expressing a normal level of VASP (Fig. 10G), did not significantly reduce cell migration. 
Using a similar approach, we confirmed that knockdown of VASP with a second VASP siRNA (VASP siRNA2) also abolished migfilin-mediated suppression of cell migration (Fig. 11). Collectively, these results suggest that VASP is crucially involved in migfilin-mediated regulation of cell migration. DISCUSSION Cell migration is a complex process that is coordinately regulated by cell-matrix adhesion and the actin cytoskeleton (1)(2)(3)(4)(18). The studies presented in this paper demonstrate that migfilin is a key component of the cellular machinery that controls cell migration. Using several different types of cells (HeLa, HT-1080, and MDA-MB-231 cells), we have shown that depletion of migfilin impairs cell migration. How does migfilin contribute to cell migration? Migfilin contains no intrinsic catalytic activities but possesses multiple protein-binding domains (19). Through these different protein-binding domains, migfilin interacts with multiple components of cell-extracellular matrix adhesions and the actin cytoskeleton, including Mig-2 (5), filamin (5,20), and, as reported in this paper, VASP (Figs. 1 and 2). We have previously shown that migfilin is involved in linking cell-matrix adhesions to the actin cytoskeleton (5,21). Thus, the migratory defect induced by the loss of migfilin is probably caused, at least in part, by the impaired connection between cell-matrix adhesions and the actin cytoskeleton. Additionally, migfilin could contribute to other steps that are pertinent to cell migration. For example, lamellipodia in migfilin knockdown cells appeared shorter or un-polarized (Fig. 5). Thus, migfilin may contribute to cell migration by influencing the formation or polarization of lamellipodia. It has been well documented that VASP regulates actin polymerization in lamellipodia (10,(22)(23)(24)(25). Given the interaction of migfilin with VASP, a simple model would be that migfilin influences lamellipodium formation via VASP. However, two lines of evidence suggest that the interaction of migfilin with VASP probably is not directly involved in the organization of lamellipodia. First, migfilin co-localizes with VASP in cell-matrix adhesions but not in lamellipodia; thus, it is unlikely that migfilin directly regulates VASP activity in lamellipodia. Second, despite its effect on the length and polarization of lamellipodia, loss of migfilin does not prevent VASP localization to lamellipodia.
Instead, loss of migfilin compromised the localization of VASP to cell-matrix adhesions, suggesting that migfilin, together with other focal adhesion proteins such as zyxin (26-28), contributes to the localization of VASP to cell-extracellular matrix adhesions. The fact that migfilin is concentrated at cell-extracellular matrix adhesions suggests that migfilin likely influences lamellipodium formation and polarization indirectly, possibly through its effect on the structure or signaling of the cell-matrix adhesion sites. In addition to showing that migfilin is required for proper cell migration, we have demonstrated that an excessive amount of migfilin suppresses cell migration. Thus, migfilin regulates cell migration in a biphasic fashion. How does overexpression of migfilin suppress cell migration? We have used two different approaches to address this question. The first is a mutational approach. This is based on our initial observation that overexpression of migfilin(s), a naturally occurring migfilin splicing variant lacking the proline-rich domain (and hence VASP-binding defective), unlike that of migfilin, fails to suppress cell migration (Figs. 7 and 8). This finding is interesting for two reasons. First, it suggests a new mechanism by which cells can regulate their motility. It is well known that different types of cells often exhibit vastly different motilities. Cell motility is influenced by many factors, including the strength of cell-matrix adhesion, lamellipodial protrusion, and, ultimately, the level and subcellular localization of molecules that control one or more of these processes (1)(2)(3). The presence of two naturally occurring migfilin splicing variants with vastly different motility-regulating activities provides a system by which cells could control their motility through RNA splicing. Second, this finding suggests that migfilin likely suppresses cell migration through proteins that interact with its proline-rich domain. We have demonstrated that VASP binds to the proline-rich domain of migfilin. Furthermore, we have mapped the VASP binding to a single LPPPPP site (residues 104-109) within the proline-rich domain. Substitution mutations within the L104PPPPP site abolished the VASP binding (Fig. 3). Importantly, overexpression of the VASP-binding defective migfilin mutant, like that of migfilin(s), failed to suppress cell migration (Figs. 9 and 10). These results provide strong evidence for a role of the VASP binding in migfilin-mediated suppression of cell migration. In the second approach, we suppressed VASP expression by RNA interference. The results showed that, unlike cells that express a normal level of VASP, VASP knockdown cells were unresponsive to migfilin overexpression (Figs. 10F and 11E). This result is highly consistent with the results obtained with the mutational approach. Collectively, they suggest that VASP is likely involved in the suppression of cell migration induced by the overexpression of migfilin. A large body of genetic, cellular, and biochemical evidence has demonstrated that VASP is a key regulator of actin cytoskeletal dynamics (reviewed in Refs. 9-11). For example, VASP, through its EVH1 domain, binds to the proline-rich motifs in the ActA protein of Listeria monocytogenes (29-32). The binding of VASP to the ActA protein exerts several effects on actin assembly, including stimulation of Arp2/3-mediated actin nucleation and reduction of the number of filamentous actin branches (33)(34)(35)(36)(37). The consequences of VASP-mediated protein interactions are complex and often depend on context and subcellular localization. For example, in the case of L. monocytogenes, the binding of VASP to the ActA protein on the bacterial surface promotes Listeria motility within cells (reviewed in Refs. 37 and 38). In Dictyostelium, DdVASP promotes cell adhesion, filopodia formation, and directional cell movement (39). In the nervous system, members of the Ena/VASP protein family are required for proper neuronal migration and axon guidance (40-48). Using mouse fibroblasts, Bear et al. (22) have shown that VASP negatively regulates the motility of these cells. VASP localizes to both cell-matrix adhesions and lamellipodia; however, lamellipodia are the primary sites in which VASP suppresses fibroblast motility (22). These studies demonstrate an important mechanism by which VASP suppresses cell migration. It remained to be determined, however, whether in other cell types VASP could regulate cell migration through other mechanisms. Theoretical and experimental analyses have shown that the effect of cell-matrix adhesion strength on cell migration is biphasic (reviewed in Refs. 1 and 2). Abundant VASP is present at cell-matrix adhesions, but the function of VASP at these sites has been a puzzling question. In this study, we have found that overexpression of migfilin, which localizes to cell-matrix contacts but not lamellipodia, enhances MDCK cell-matrix adhesion and concomitantly reduces cell migration. Importantly, neither migfilin(s) nor the VASP-binding defective mutant can do so. Thus, in these cells the interaction of VASP with migfilin at cell-matrix adhesions likely enhances cell-matrix adhesion and consequently suppresses cell migration. These results provide, for the first time, functional evidence suggesting a role of VASP (via its interaction with migfilin) at cell-matrix adhesions.
SMART PANDEMIC MANAGEMENT THROUGH A SMART, RESILIENT AND FLEXIBLE DECISION-MAKING SYSTEM Over the last few years, the world has seen many social, industrial, and technological revolutions. The latter has enabled a combination of expertise from different fields in order to manage a wide range of multidimensional issues such as integrated societies and industrial ecosystems achievement, urban planning, transport management, sustainable development and environmental protection and currently pandemics management. Super smart society's vision that is driving the 5.0 social revolutions is at the heart of the current situation that requires system resilience, sustainability, proactivity, interoperability and collaborative intelligence between society, economy, and industry. Establishing communication bridges between different entities, of different natures and with different objectives implies solutions that reinforce the development of efficient, dynamic, and communicating business models on a large scale, merging cyber and physical spaces. Through this paper we explored the potential of digital twins for the development of a new vision of world global dynamics under the aegis of a virus whose parameters are still elusive to date. INTRODUCTION Covid-19 also known as severe acute respiratory syndrome associated coronavirus SARS 2 first appeared in China on December 2019 in the province of Wuhan. Covid-19 is a zoonotic pathology belonging to coronavirus family that is potentially highly contagious and attacks the respiratory system at variable levels among affected individuals, causing acute respiratory complications that can lead to more deadly viruses such as SARS and Middle East respiratory syndrome MERS. As of today, the virus has reached all world continents, leading to its declaration by the World Health Organization as a pandemic (CDC 2020). A wide range of strategies, prevention, response, and protection measures have been undertaken to date. Through their means, resources and intellectual and technological capital, each country has tried to draw up a plan to deal with the pandemic and to reduce its impacts, particularly its emerging economic and social impacts (De Vito and Gómez 2020). Some countries with their strategies have succeeded in controlling the spread of the virus, while others, due to a lack of early preventive measures, have seen the number of victims of the virus rise drastically in recent months (Grasselli, Pesenti, and Cecconi 2020). In the absence of a vaccine against Covid-19, pandemic management remains paramount. Outbreaks management falls into epidemiological science field that is concerned with investigating the occurrence, causality, effects, and evolution of infectious diseases among populations (Dasaklis, Pappis, and Rachaniotis 2012). Over the history of humankind, there has been a constant convergence in the field of epidemiology due to the persistent changes in the variables governing the evolution of populations and the frequency and patterns of pathogens that are further influenced by environmental changes (Madhav et al. 2017). The management of pandemics and the current pandemic has raised many issues. The first issue concerns pandemic risks versatility that is caused by the uncontrollable influences and uncertainties resulting from pandemic evolution throughout populations. 
This versatility makes the problem of pandemic management more complex and subject to several hazards that notably affect the efficiency of contingency-plan management approaches in the event of a pandemic. Given the universal nature of pandemics, international cooperation is essential, yet it proves extremely difficult in the context of the current globalized world and the various political tensions that arise from it. Globalization, while bringing all regions of the world much closer, has turned out to be an accelerator of pandemic spread and appears to be a major impediment to the development of territorial intelligence. The second problem is the multidimensionality of the virus, which involves several technical, non-technical, and qualitative factors whose evolution over time and space is non-linear and subject to several contextual constraints, linked in particular to populations' development characteristics and their reluctance to change, especially changes in habits and behaviour acquired through the socio-cultural environment. The third issue, which represents a real challenge for science and medicine today, is the novelty of the virus. Many features of the virus are so far unknown, and with the pressure exerted by its spread and the risks of uncontrollable mutation, scientists face a real dilemma that forces them to use advanced technological means of simulation and modelling to accelerate the process of vaccine development as well as its commercialization, a point that presents a major challenge in the context of pandemic management. In the current context, these emerging challenges are prompting scientific and industrial communities, as well as administrations and governmental institutions, to reconsider their approaches to emergency management and response. The effort launched by these communities has led to applications of digitalization across all affected sectors and to the proposition of innovative strategies for tackling pandemic preparedness and response challenges. In practice, these initiatives have encountered many challenges and constraints related to communication, interoperability, and data governance. Connecting a variety of differently structured systems with heterogeneous concepts, knowledge, and data presents a real challenge for these new solutions, especially in domains that require careful consideration of security concerns and confidentiality constraints. The aim of this work is to unveil the potential of the smart cities and digital transformation visions, and their interactions, through multidimensional modelling and smart decision-making for the mitigation of Covid-19 risks and the development of a proactive, flexible, and resilient post-pandemic management plan that can deal with the various issues that could hamper countries' fight against the virus. The main contribution of the paper is the merging of the concept of digital twins, with their dynamic and multi-dimensional modelling capabilities, with that of smart cities, through the exploitation of feedback from a number of applications that have proven the effectiveness of this vision in the management of Covid-19 impacts, and the tailoring of this integration to the requirements of the new global dynamic for the implementation of a resilient system that can meet the needs of different stakeholders in a secure and efficient way.
In the first section of the paper we introduce the role of modelling and predictive analytics for pandemic management, while highlighting the different challenges that these two components face in their large-scale deployment through digital technologies. In the second section we introduce smart cities and related visions as a promising alternative for intelligent decision-making, and we illustrate their significance through real case studies deployed in the Covid-19 context together with their observed limitations. In the third section, we discuss a set of application cases that constitute the proof of concept of the smart pandemic management system we propose to build. The last section consolidates all the issues dealt with in the paper and paves the way for further opportunities and perspectives. Modelling and data driven analysis for pandemic management Epidemiological science provides many factors that help to assess the impact of a pandemic, its evolution, and its spread trends. One of these factors is R0, referred to as the basic reproductive number, which is an estimate of the average number of people who acquire the virus from a single infected person (Zhang et al. 2020), and which varies from one country to another, and within countries and regions, depending on multiple controllable and objective, but mostly random and multidimensional, factors. Many statistical and stochastic studies are currently trying to propose models for the estimation of Covid-19 epidemiological parameters, inter alia R0. However, the results obtained, and their analysis, show some limitations of these methods (Delamater et al. 2019). Therefore, as an alternative, scientific research communities, in order to improve the reliability of those estimations, have proposed a set of indicators that, according to their investigations, influence this important parameter for infectious and highly transmissible diseases. By tracking the fluctuations of these indicators, countries can monitor the evolution of the pandemic within their respective territories and frame the strategic field of barrier and prevention measures to be taken (Tuite et al. 2020). In this section, we focus on the following three indicators in our analysis of the measures put in place to stop the spread of the virus and to respond to it in the event of a spike. These three factors are, first, the susceptibility of the population to the virus; second, the contact rate, i.e., the estimation of probable exchanges between an infected person and a person suspected of having contracted the virus; and finally the probability of infecting a new person, which is strongly linked to the viral load of the carrier. Currently, at the literature, organizational, and technological levels, all possible measures are being proposed to control these indicators and their related parameters through advanced simulation tools. The aspect that interests us most is the attempt to model the evolution of these three indicators and their underlying factors, their impact on the evolution of the pandemic, and their correlation with other parameters at the social, cultural, economic, and regulatory levels that contribute positively or negatively to it.
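To make these indicators concrete, the sketch below ties them to the standard SIR formulation: the transmission rate is taken as the product of the contact rate and the per-contact transmission probability, and R0 follows as the transmission rate divided by the recovery rate. All parameter values are illustrative assumptions only, not Covid-19 estimates.

```python
import numpy as np

# Illustrative parameters (assumptions, not Covid-19 estimates).
contact_rate = 8.0          # average contacts per person per day
p_transmission = 0.03       # probability of transmission per contact
infectious_days = 10.0      # mean infectious period, days

beta = contact_rate * p_transmission      # effective transmission rate per day
gamma = 1.0 / infectious_days             # recovery rate per day
R0 = beta / gamma
print(f"beta = {beta:.3f}/day, gamma = {gamma:.3f}/day, R0 = {R0:.2f}")

# Simple SIR integration (Euler steps) for a population of N people.
N = 1_000_000
S, I, R = N - 10.0, 10.0, 0.0
dt = 0.5                                  # days per step
history = []
for step in range(int(365 / dt)):
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    history.append(I)

print(f"peak infectious count: {max(history):,.0f} "
      f"({max(history) / N:.1%} of the population)")
```

Varying the contact rate or the transmission probability in such a sketch shows directly how barrier measures that reduce either quantity pull R0 below 1 and flatten the infection curve, which is the intuition behind the indicator-driven monitoring discussed here.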
Multidimensional modelling and advanced simulation make it possible to mimic the dynamic behaviour and evolution of complex systems by ingesting data and information from real-life systems, thus giving life to physical and mathematical models that faithfully represent, through ergonomic and dynamic visualization interfaces, the real system and its evolution within its real environment. Thus, modelling and data analysis are common concerns for advanced simulation and pandemic management. 
Modeling and predictive analysis. The modelling of these three indicators in the Covid-19 context, and more specifically predictions of their evolution, could help governments and society's different institutions respond proactively to the changes and hazards related to the virus development life cycle ("The SIR Model for Spread of Disease - Introduction | Mathematical Association of America" n.d., 2020). Predictive analysis involves three categories of predictors: mathematical predictors of infectious diseases, probabilistic predictors, and finally historical and data-analysis-dependent predictors. Predictors allow early detection of changes in virus infection parameters by taking advantage of the knowledge, data and information acquired to date on the virus. These three models are currently used worldwide to prevent the spread of the virus and to respond proactively to pandemic spikes; they also enable the selection of appropriate and efficient barrier measures. To manage the evolution of Covid-19, scientific communities currently rely heavily on three types of prediction tools and models: conceptual models; descriptive models and interactive maps; and finally surveys and large-scale behavioural analysis of communities. Conceptual models are used for the identification of interrelationships between government barrier measures, such as containment and stringency measures, and the evolution of positive cases and contagion units. Descriptive and geospatial mapping models across different levels, including the socio-economic level, are used for the estimation of Covid-19 socio-economic impacts and the proposition of immediate response plans ("Tracking Vulnerable Population by Region" n.d., 2020). Finally, surveys track citizens' attitudes and responses, providing feedback and contributing to the management of worldwide knowledge about the virus ("Oxford University Launches World's First Covid-19 Government Response Tracker | University of Oxford" n.d., 2020). The results of these models are communicated to governments to strengthen effective and adaptive barrier measures according to territorial disparities, to cross-reference them with regions showing similar public response characteristics, and to maintain them over long periods of time. The main challenge facing the deployment and efficiency of these models in the context of the current pandemic has been the consideration of a broad range of dimensions, incorporating a set of unpredictable and uncertain constraints that relate to social and cultural aspects. Among these aspects are, first and foremost, ensuring that barrier measures are respected individually and that security and prevention measures are adhered to collectively. The second aspect concerns both individuals' and groups' mobility trends and their effective monitoring without affecting individual freedom. 
These two aspects, which have been the subject of several studies among different research communities, are of great importance for ascertaining uncertain parameters of virus spread and for the prediction of contagion foci that impede the effectiveness of preventive measures and government response strategies. Key technological initiatives based on digitalization tools have been deployed to counter those challenges, including smart wearables via the Internet of Things, large-scale data analysis solutions based on sampling within affected communities, and, last but not least, digital assistance through chatbots and e-healthcare applications. Such initiatives, in turn, face a few constraints for their widespread deployment, particularly in countries that are not sufficiently technologically mature. 
Challenges and potential opportunities for predictive analysis. The first challenge concerns the accuracy and reliability of the models developed for estimating pandemic indicators, because the quality of the outputs and estimates they provide is closely linked to the characteristics of the data that feed them, inter alia data quality, integrity, availability and confidentiality. Real-time data communication, analysis and sharing among heterogeneous sources can be enabled by IoT technologies, which allow significant information and data from different sources and locations, in both physical and virtual spaces, to be merged effectively (De Arriba-Pérez, Caeiro-Rodríguez, and Santos-Gago 2016). The main challenge impeding the deployment of advanced IoT technologies for smart pandemic management and strategic decision-making, particularly for countries whose legacy communication and information infrastructure is not sufficiently mature, lies in the development of the second layer of the Internet of Things, defined as the sensing and controlling domain. This layer is constituted of IoT devices, essentially sensors and actuators connected to physical devices, and IoT gateways. Its main function is to capture field-collected data related to physical systems' operations and to prepare and pre-process them through IoT gateways for transmission to the processing and analysis layers (a minimal gateway-side sketch is given below). The territorial development of this layer primarily involves assessing its maturity in a broader geographical and sectorial context. Evaluating this layer's maturity within countries mainly concerns three ecosystems: in first place, companies and industrial units, which can be considered significant pandemic foci; in second place, public institutions, administrations and public places; followed finally, in the longer term, by universities and schools. This assessment requires considerable research efforts aimed at identifying all necessary requirements for the secure, ethical, and effective use of artificial intelligence tools and IoT technologies. 
Challenges and potential opportunities for big data and large-scale data analysis. Several research communities are currently exploiting big data analysis to deploy interactive map solutions ("GeomatiCovid.com" n.d., 2020) in order to provide multidimensional observations and to identify correlations that reduce uncertainties related to the susceptibility of the virus (Torres and Sacoto 2020). 
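To make the sensing-and-controlling layer described above more concrete, the following Python sketch shows a minimal gateway-side pre-processing step: raw field readings are validated against plausibility ranges, timestamped, and batched before being forwarded to an upstream processing layer. The message fields, plausibility ranges and the forward_batch placeholder are illustrative assumptions, not part of any specific platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

@dataclass
class SensorReading:
    sensor_id: str      # hypothetical identifier of the field device
    kind: str           # e.g. "temperature", "occupancy", "spo2"
    value: float
    unit: str

def preprocess(readings, plausible_ranges):
    """Drop out-of-range readings and attach a UTC timestamp (gateway-side)."""
    batch = []
    for r in readings:
        low, high = plausible_ranges.get(r.kind, (float("-inf"), float("inf")))
        if low <= r.value <= high:
            batch.append({**r.__dict__, "ts": datetime.now(timezone.utc).isoformat()})
    return batch

def forward_batch(batch):
    # Placeholder for transmission to the processing/analysis layer
    # (e.g. an MQTT publish or HTTPS POST in a real deployment).
    print(json.dumps(batch, indent=2))

readings = [SensorReading("s-01", "temperature", 37.2, "degC"),
            SensorReading("s-02", "temperature", 412.0, "degC")]  # implausible, dropped
forward_batch(preprocess(readings, {"temperature": (30.0, 45.0)}))
```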
Data collected and analysed in real time from heterogeneous sources represent a reliable portal for artificial intelligence communities, which use them to develop reactive solutions providing early detection and prevention systems for the virus. Big data implementations in healthcare raise issues related to distributed storage and remote information sharing among various stakeholders through cloud platforms. In the context of the pandemic, Huawei has created a platform based on AI and cloud tools that offers several services, including e-learning and e-healthcare services combining the Internet of Things, big data and cloud computing ("Fighting COVID-19 with Technology_HUAWEI CLOUD" n.d., 2020). The advanced open-source Application Programming Interfaces (APIs) made available by cloud platforms can also be used for early detection of the virus, notification of suspected persons, and sharing of relevant data sets around the world. Currently, a large number of radiology image datasets are made available open source to the research community worldwide. The purpose of these data is to contribute to the development of deep learning algorithms that detect similarities with results from infected cases (LLORET 2020); a minimal sketch of such a classifier is given below. This type of solution makes it possible to reduce the risks caused by asymptomatic cases. Artificial intelligence combined with relevant input data offers a large range of possibilities, some of which are currently deployed in Asia and Europe for real-time monitoring of patients' vital signs in hospitals (Peng et al. 2020) but also for self-quarantined patients (Nundy and Patel 2020) with the use of remote health monitoring and assistance services (Webster 2020). The challenges that arise from this type of solution, in addition to the recurring data governance issues, are this time infrastructure issues. As far as infrastructure is concerned, it is mandatory for developing countries to provide solutions that improve network coverage and connectivity in all urban and rural areas (Austin et al. 2020). Citizen engagement is also a decisive factor: surveys are being carried out to identify disengagement gaps and their causes, as part of the studies established in the framework of data and knowledge management related to the pandemic. Knowledge extraction and capitalization are also important, since knowledge can be gleaned from the detection of the root causes of epidemic spikes and the establishment of new barrier measures; cause trees and behavioural mapping are two disciplines offering significant opportunities and solutions to address these different issues. 
Multidimensional modelling and predictive analysis of Covid-19 through advanced simulation. Digital surrogates, as reliable copies of living and non-living physical systems allowing real-time data analysis, multidimensional simulation and dynamic modelling of the behaviours, interactions and evolution of complex systems, are among the most promising technologies for achieving the objectives set by the smart city vision (Kaur, Mishra, and Maheshwari 2020). Independently, each of the two visions has, throughout the history of digital twins, given birth to multiple applications and to the development of several technologies around the world and across multiple social and industrial networks. Advanced simulation and the smart city vision have a key role to play in digital ecosystem development (Hajar and Abdelghani 2017). 
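The open radiology datasets mentioned above are typically used to train image classifiers that flag findings consistent with infection. The PyTorch sketch below is a minimal, generic convolutional network for single-channel 128x128 chest images with two output classes; the architecture, image size and class count are illustrative assumptions and do not represent a validated diagnostic model.

```python
import torch
import torch.nn as nn

class ChestXrayNet(nn.Module):
    """Tiny CNN for 1-channel 128x128 images, two classes (illustrative only)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = ChestXrayNet()
dummy_batch = torch.randn(4, 1, 128, 128)   # stands in for pre-processed images
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 2])
```

In practice such a model would be trained on the shared open datasets and evaluated against confirmed cases before any deployment; the sketch only shows the shape of the approach.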
Currently, several applications combining advanced simulation and smart cities have been developed throughout the world. The first initiative dates back several years and has its origin in Germany, in the context of a project led by a group of researchers to develop a German city twin that would serve several purposes and allow the potential of the solution to be exploited on a large scale (Martinez-Velazquez, Gamez, and Saddik 2019). Several smart city twins have since been developed around the world. In recent years the digital surrogate concept has undergone an important evolution, due to the development of real-time communication and computational capacities through the exploitation of cloud technologies and the Internet of Things in multiple domains. This evolution has extended its field of application to several domains, more particularly urban modelling and city management, smart healthcare, smart buildings and geospatial intelligence, which enable multidimensional analysis to be performed in real time, virtually, effectively and safely (Dembski et al. 2020). Different architectures have been proposed in the literature for the development of digital twins in the context of smart cities. These architectures take smart-city constraints and requirements into account to establish effective, secure, and flexible solutions. Figure 1 represents the functional view of the smart city twin architecture. 
The first layer of the architecture is the perception layer. This layer acts as an interrogation mechanism for the real systems that constitute the smart city. Its main missions are to sense, act and operate. The real system interacts with its environment through sensors and actuators. Sensors record fluctuations in system parameters, states and events that characterize the system's interactions with its environment. Actuators allow the system to act on the real environment and to perform the various functions for which it was designed, in a scheduled and programmed sequence. The concept conveyed by the smart city vision is to create connected networks through which components can exchange a wide range of data in a flexible, reliable, and proactive way. Through the Internet of Things and edge computing within today's smart city framework, these two drivers can come to life and interact intelligently with their environment through communication interfaces and embedded intelligence mechanisms. These drivers produce a significant amount of data of different types and natures: structured, unstructured, and semi-structured. Depending on their type, these data require special processing that varies according to their consumers and producers, but also according to the smart city real-time simulator modules responsible for short- and long-term processing of data and information. These data can be quality data, production data, operational data or master data linked to the systems' meta-structure and views. Outputs from this layer are transferred to the second layer of the architecture through communication protocols and schemes. Protocol selection depends on data criticality and on various concerns related to system behaviour, the services provided and the structure of the data sources (a minimal sketch of such a perception-layer message and its routing is given below). According to these parameters we can distinguish two types of intermediary components: field gateways, and middleware platforms with intermediate storage units. 
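As a concrete illustration of the perception layer just described, the Python sketch below models a field message and a simple criticality-based choice of transport on the gateway. The message schema, criticality levels and endpoint names are assumptions made purely for illustration and are not drawn from a published smart city architecture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Criticality(Enum):
    ROUTINE = "routine"     # batched and stored via middleware
    CRITICAL = "critical"   # pushed immediately to the gateway alert channel

@dataclass
class PerceptionMessage:
    device_id: str
    payload: dict                      # structured, semi-structured or raw values
    criticality: Criticality = Criticality.ROUTINE
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def route(msg: PerceptionMessage) -> str:
    """Pick a transport endpoint based on data criticality (illustrative policy)."""
    if msg.criticality is Criticality.CRITICAL:
        return "mqtt://field-gateway/alerts"      # hypothetical endpoint
    return "https://middleware.example/ingest"    # hypothetical endpoint

msg = PerceptionMessage("air-quality-17", {"pm25": 81.0, "no2": 32.5})
print(route(msg), msg.ts)
```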
The second layer of the architecture is composed of two blocks: the first block has as its main mission the explanation and interpretation of the acquired data, while the second block is responsible for the system's prognostication and learning features. At this level, the system receives the data collected by the different perception agents. The first block is dedicated to data contextualization, to pre-processing according to logics clearly defined by the system architects and stakeholders, and finally to data storage using intermediate storage mechanisms that connect the system to the second block. Three levels characterize the first block, which is in charge of data interpretation. The first level is inspired by the reference architecture proposed for IoT systems (Bauer et al. 2013). Three agents interact at this level. The first agent is the operations and policies manager, whose mission is to maintain the system in optimal condition; it is therefore tasked with managing the system's internal tasks according to regulations and optimized planning. The second agent is the model rules and logic provider; this agent is responsible for the end-to-end management of all services governed by the system and intended for the internal and external, digital and physical users of the smart city shadows. Through this module the architect establishes the operating logic for the various services and can thus ensure their assignment, based on responsibilities, to the system's various internal and external entities. The third agent is the transactions and resources manager, which is in charge of managing transaction flows by providing a road map to the appropriate resources and adapting their usage, while at the same time controlling access to them and ensuring the efficiency of incoming and outgoing data and information transactions from the two other modules. This agent holds metadata related to data producers and consumers, as well as semantics that describe the functional logic of the smart city shadows. The regulated outputs generated by the interactions between these three agents are communicated to the smart city agents' shadows, which constitute a dynamic simulation of the physical agents and their associated logic within the smart city. These agents can be equipment, systems, or a set of systems driving a process. A shadow's role is to faithfully imitate, in real time, the functioning of the real entity to which it is connected. This imitation is achieved by feeding models of real systems with data collected from the system and its environment through the three agents that constitute the first level of the block. This first simulation is an instance simulation, in the sense that the developed models are specific to the real system they represent and serve to dynamically visualize parameters and indicators proper to the connected instance. The results of these simulations are communicated to the second block of the system. The mission of the second block is to extract, through the results of the simulations and the multidimensional modelling induced by applying the virtual shadow concept to the modelling of city agents, structured and relevant information on a set of instances of the same type. This information is the input for the first-level modules of the second block, which are threefold. 
The first two functions, geospatial business intelligence and smart performance tracking, aim at developing a learning mechanism that, based on the multidimensional analysis of the outputs of a number of instances belonging to different contexts, allows the development of a solid knowledge base for intelligent decision-making at both strategic and operational levels. The third module, artificial intelligence and activity optimization, acts as a prognostication unit for the prediction of future scenarios and the definition of optimization parameters for the smart city agent shadow models. The simulation of these scenarios is carried out at the smart city twin level, which represents a mature shadow agent able to react proactively to uncertainties and unpredictable events and to provide a set of interactive modules responding to several critical issues facing decision-making processes. The last layer of the architecture is the back-end interface for communication with system stakeholders. It consists of dynamic user-centred interfaces, interactive knowledge repositories resulting from the learning workflows, and finally a collaborative platform allowing experience sharing between different parties with disparate expertise and priorities. Security, interoperability, resilience and persistence management apply across the complete smart city twin architecture. Security deals with confidentiality, integrity, and availability issues. Resilience deals with the system's ability to recover from unexpected events; in this framework it concerns architecture flexibility and the resiliency mechanisms developed to manage changes and evolutions of the layers' agents across their lifecycle. Finally, interoperability covers technical interoperability (communication and policy management and endpoint connectivity), semantic interoperability (content and context awareness between the system's internal and external agents), and syntactic interoperability (management of data exchange structures and models). 
Smart cities surrogates experience feedback in smart decision making for cities management. In the context of pandemic management, geospatial intelligence plays a key role, particularly for the management of areas with different social and economic parameters and indicators. A city management system provided with the multiple viewpoints enabled by digital surrogate models can allow each stakeholder, according to its needs, to deal with the Covid-19 impacts threatening its activities. Social distancing, which is necessary to reduce the contagion of the virus, requires the reformulation of criteria for the management of public transport and transportation in general. Urban mobility management has been a concern of city decision-makers for years, and many research communities have been trying to mitigate this problem with dedicated algorithms and heuristics. City digital twins developed around the world have now proven the potential of digital surrogates to reinforce the results achieved by these works. Digital twins are an ideal solution for tailoring this new vision thanks to the advantages they offer of real-time tracking, simulation of optimization scenarios, and interoperability between different data sources and decision-making dimensions (Madurai Elavarasan and Pugazhendhi 2020). 
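To connect the shadow concept from the architecture above with the urban mobility use case, the Python sketch below models a digital shadow of a transit line that mirrors reported occupancy and flags breaches of a distancing-derived capacity limit. The class, the capacity figure and the occupancy stream are assumptions used for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransitLineShadow:
    """Digital shadow of a transit line: mirrors reported occupancy in real time
    and flags breaches of an assumed distancing capacity limit (illustrative)."""
    line_id: str
    distancing_capacity: int          # assumed max riders under distancing rules
    history: List[int] = field(default_factory=list)

    def ingest(self, observed_riders: int) -> bool:
        """Update the shadow with a field observation; return True if compliant."""
        self.history.append(observed_riders)
        return observed_riders <= self.distancing_capacity

    def utilisation(self) -> float:
        """Average utilisation relative to the distancing capacity."""
        if not self.history:
            return 0.0
        return sum(self.history) / (len(self.history) * self.distancing_capacity)

shadow = TransitLineShadow("line-12", distancing_capacity=40)
for riders in (28, 35, 47):          # values streamed from the perception layer
    if not shadow.ingest(riders):
        print(f"{shadow.line_id}: distancing limit exceeded ({riders} riders)")
print(f"mean utilisation: {shadow.utilisation():.0%}")
```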
Digital surrogates, in combination with the Building Information Modelling (BIM) approach, provide spatial and temporal representations of data in 3D and 4D formats for structures such as buildings, strata plans, terrain and property lines, and for utilities such as electrical, water and sewer lines (Sofia, Anas, and Faïz 2020). These representations, combined with advanced analysis, can be used by policy makers as a support for proactive decision-making, but also as a basis for communication between public and private institutions, administrations and companies for the management of operations and activities vital to their business continuity under the current pandemic. The operational status of complex systems such as cities depends closely on their geospatial context of evolution. Understanding this context and integrating it into the analysis of system performance changes, through the interactive mapping offered by Geographic Information System technologies combining geospatial analysis and business intelligence, will be of great added value in the context of Covid-19 for both industries and cities. The contribution of digital twins to city management can help evaluate unemployment rates and communicate the inherent needs of industries affected by the pandemic, which has forced many companies to migrate their production systems, and can help connect employees with employers more easily. This can reduce the social and economic impacts of the pandemic and, in the long term, revolutionize human resources management within cities; it can also reinforce territorial intelligence within developed and developing countries. 
Smart cities surrogates to tackle pandemics management system challenges. As we have seen in the previous sections, countries are currently facing several challenges in managing the pandemic efficiently and reducing its impacts, particularly its socio-economic impacts. Through the previous parts we have been able to apprehend the different opportunities offered by advanced simulation and the smart city vision to counter these challenges. The first challenge concerns pandemic risks and uncertainties arising from the limited resiliency of systems for managing the evolution of the pandemic. The architecture proposed in this paper provides an independent learning mechanism that, through feedback and knowledge capitalization, offers an interactive virtual reference platform for the simulation of risk scenarios and the proposal of corrective actions and preventive plans to be followed, while taking into account all the geospatial, temporal, environmental, technical and non-technical parameters related to the real system and its environment. The second aspect concerns the multidimensionality of the pandemic. The development of a smart city surrogate enables real-time interrogation of multiple agents and ensures the integration of heterogeneous and relevant data from stakeholders with different expertise and from different fields, so as to explore the connections resulting from the combination of these areas of expertise. 
According to the priorities defined by the users and the responsibilities assigned to each service, the surrogate makes it possible to mimic in real time the functioning of real-world systems and to communicate the performance gap identified with respect to the references and tolerances defined by the system's key stakeholders. The third challenge relates to virus novelty, which requires increased proactivity from all systems involved in managing its spread, including healthcare systems, medical industries, and social and governmental ecosystems. The functionalities of the two simulation blocks allow the prediction of system outputs from disparate inputs, and thus help identify new relationships and correlations around the pandemic and its consequences. The optimization, prognostication, and artificial intelligence modules, combined with the collective intelligence platform resulting from their interaction, reinforce attempts to explore the epidemiological parameters of the virus. 
PROOF OF CONCEPT AND LIMITATIONS. Our analysis of the different solutions and digital technologies developed and deployed against Covid-19 and its impacts, as well as detailed research on the potential of simulation, modelling and the smart city vision, have enabled us to propose a set of solutions that serve different purposes but share the common objective of countering the spread of the virus and limiting its impact on the industrial, economic and social networks of countries. In this section we describe proofs of concept of the architecture proposed in the previous sections, each solution covering one or several aspects of the pandemic. Our first solution consists of the development of a modular digital platform that manages pandemics, from prevention to the return to normal life, by exploiting advanced information and communication technologies and intelligent decision support tools. It will consist of four modules that operate interactively and dynamically with each other to respond to pandemic impacts and challenges. This solution has scientific, technological, and socio-economic implications. In terms of scientific impact, it can help achieve autonomy and scientific recognition for the management of large-scale contingencies such as pandemics and sanitary crises. In terms of technological impact, it will allow home-made technological solutions to emerge that are efficient, intelligent, and open to several scientific, industrial, economic, institutional, and primarily social communities. In terms of socio-economic impact, the proposed platform will allow the detection of socio-economic problems and risks throughout cities and ensure intelligent and efficient management of these risks. In recent months, digital technologies and tools have distinguished themselves as an efficient weapon in the fight against the proliferation of the virus and its impact on public health systems. The second solution is addressed to isolated patients receiving home-based care with mild to moderate symptoms; its aim is to reduce the pressure on the health system by reinforcing the management of this community. 
The solution is inspired by the medical protocol for the treatment of chronic obstructive pulmonary diseases, and it is made up of two modules: a first real-time smart monitoring module and a second closed-loop oxygen therapy module. The main purposes of the solution are, firstly, to predict complications in home-quarantined patients and reduce the pressure on healthcare systems and hospitals; secondly, to use advanced technologies to manage the pandemic in cases of home isolation; and finally, to develop individual and collective autonomy and proactivity in facing the impact of the pandemic and the daily management of the new standard of living resulting from Covid-19. Figures 2 and 3 represent the solutions' overall architectures. 
Smart city surrogate for smart pandemic management. The first solution, as described in the previous section, is a modular platform for smart pandemic management covering all the ecosystems directly and indirectly affected by the health crisis. The five services of the platform are: interactive mapping for the management of the city areas most affected by Covid-19 impacts; an intelligent online testing service; an online mental health service; a post-Covid intelligent action plan for after the crisis; and finally the Covid-Innov service to support the implementation of innovative ideas for pandemic risk mitigation. Interactive mapping aims to help cities act more quickly to combat the coronavirus pandemic. It is intended for citizens and stakeholders and fulfils a set of significant functionalities. It will enable governmental institutions and healthcare specialists to follow daily infection foci parameters, including information and trends on infected individuals: their number, symptoms, age, gender, nationality, and location. This service, combined with city 3D models, will allow a global visualization covering all of a country's regions. With the help of defined indicators, the selected area (neighbourhood, city, or country) can be evaluated against other areas, thus allowing real-time visualization of overall risks and, eventually, of risk rates in a specific geographical area. The intelligent online testing service will assist healthcare professionals in performing their tasks and will help them minimize pressure on testing services by offering online solutions. This module is composed of two sub-modules: pre-testing and intelligent testing. Pre-testing is based on a basic survey covering the common symptoms published by global health organizations, and on an analysis of individual health indicators over a predefined period depending on the Covid-19 incubation period (a minimal scoring sketch is given below). This sub-module allows users to take the necessary measures more quickly by carefully monitoring their own health for signs of possible infection. Intelligent testing is a smarter version adopting the latest technologies, including intelligent sensors and IoT devices within the perception layer, and deep learning and case-based reasoning within the smart city twin block, to detect suspected asymptomatic cases. This solution will be available to a limited set of stakeholders on a subscription basis. The intelligent action plan service is based on artificial intelligence and multi-criteria decision-making: users simply enter the indicators of the current situation to derive the appropriate recommendations. This module includes guidelines for team building, a planning framework, risks to be anticipated, useful tools, and funding information for the deployment of corrective and preventive plans. 
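To make the pre-testing sub-module concrete, the Python sketch below scores a self-reported symptom survey together with a simple trend check on daily temperature readings over an assumed 14-day window. The symptom weights, thresholds and recommendations are illustrative assumptions, not a clinically validated triage rule.

```python
# Illustrative pre-testing score: symptom survey + temperature trend.
SYMPTOM_WEIGHTS = {            # assumed weights, not clinical guidance
    "fever": 3, "dry_cough": 2, "loss_of_smell": 3,
    "fatigue": 1, "breathing_difficulty": 4,
}

def pretest_score(symptoms: dict, daily_temps_c: list) -> str:
    """Return a coarse recommendation from reported symptoms and a 14-day
    temperature history (both self-reported through the app)."""
    score = sum(w for s, w in SYMPTOM_WEIGHTS.items() if symptoms.get(s))
    feverish_days = sum(1 for t in daily_temps_c[-14:] if t >= 38.0)
    score += min(feverish_days, 3)             # cap the temperature contribution
    if score >= 6:
        return "book a test and self-isolate"
    if score >= 3:
        return "monitor symptoms daily"
    return "no action needed for now"

print(pretest_score({"fever": True, "dry_cough": True},
                    [36.8, 37.1, 38.2, 38.4]))
```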
Covid-Innov is dedicated to project leaders, industrialists and people wishing to contribute to the financing of projects to face Covid-19. For project leaders, this module will offer an intelligent knowledge capitalization service. By entering the theme of their idea, project leaders will be redirected towards a system that presents them with a set of national and international standards and regulations related to the idea, a set of procedures to be followed should they wish to patent it, and finally a communication bridge with other project managers and investors through a discussion forum. For industrialists, this module has two functions: an interface with the standardization bodies and the laws established under Covid-19, and updates concerning health and safety at work under the pandemic. Each service has a set of rules that govern its functioning and exploits a given set of input data coming from its data processing mechanism. The first level of abstraction of the smart city twin, which includes the shadows, concerns field inquiry services such as interactive mapping and online assistance. The results extracted from these two services are communicated to the smart decision-making and mental health monitoring services, which are based on more cognitive processes requiring the use of prediction and learning tools and approaches mainly inspired by domain ontologies and semantic web models. 
In-house Oxy-Shield. The functional architecture of the solution is essentially based on two modules, which constitute the smart city agents' shadows and the smart city twin. The first module of the platform will consist of two parts: a hardware part, and a software part based on an application for real-time monitoring of patient health indicators. This module governs the interfaces dedicated to healthcare professionals and home-isolated patients and constitutes the overall structure of the smart city agents' shadows. The hardware part is based on smart IoT devices and wearables with various integrated sensors, together with a system for analysing and adjusting the biases of the smart measurements, which are used to measure patient health indicators. Indeed, for the prevention of respiratory complications, the SpO2 oxygen saturation rate must be measured, as well as other indicators such as heart rate or sleep apnoea (a minimal monitoring sketch is given below). These data, together with the results of questionnaires filled in by users, allowing the establishment of a specific electronic health record and the approximation of oxygen needs through the analysis of the patient's daily activities, will serve as input to the system for predicting the partial pressure of oxygen (PaO2) among affected patients. The main objective of this first module is to predict respiratory complications in the targeted patients and anticipate their treatment, and to improve the immune system response through preventive actions according to each user's results. The final role of this module is to create an intelligent value chain for the supply of the equipment needed for home care. 
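As an illustration of the monitoring part of this first module, the Python sketch below screens wearable readings against simple alert thresholds so that a likely deterioration can be signalled to the care team. The thresholds and the notify placeholder are illustrative assumptions, not the authors' PaO2 prediction model.

```python
from dataclasses import dataclass

@dataclass
class VitalsSample:
    spo2_percent: float      # oxygen saturation from a pulse-oximeter wearable
    heart_rate_bpm: float

# Assumed alert thresholds for home-isolated patients (illustrative only).
SPO2_ALERT = 92.0
HR_ALERT = 120.0

def notify(msg: str) -> None:
    # Placeholder: a real deployment would push this to the care team's dashboard.
    print("ALERT:", msg)

def screen(sample: VitalsSample) -> bool:
    """Return True if the sample is within the assumed safe ranges."""
    ok = True
    if sample.spo2_percent < SPO2_ALERT:
        notify(f"low SpO2: {sample.spo2_percent:.0f}%")
        ok = False
    if sample.heart_rate_bpm > HR_ALERT:
        notify(f"high heart rate: {sample.heart_rate_bpm:.0f} bpm")
        ok = False
    return ok

screen(VitalsSample(spo2_percent=90.0, heart_rate_bpm=104.0))
```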
Securely and continuously connected to the suppliers and stakeholders involved, this module will serve as an interface for communicating forecast needs, avoiding stock shortages, and responding to fluctuations due to the hazards caused by the pandemic. In addition to providing a proactive model for respiratory complication prediction, the system will provide a secure data exchange mechanism based on blockchain technology, creating transparent communication and collaboration between patients, health professionals, governmental institutions, and healthcare industry actors. The second module, which governs the smart city twin's operation, is also made up of two parts: a hardware part based on an oxygen therapy device, and an advanced control software part connected to the previous module. The oxygen therapy device will first be assembled according to the availability of nasal cannulas or face masks without a rebreather, as used in standard oxygen therapy. This first element will be connected to an oxygen source, which can be a concentrator, a liquid oxygen tank or an oxygen cylinder. In our case, we have chosen to use oxygen cylinders and to connect them to an intelligent regulation system. This control system will be designed with an automatic inhalation and expulsion device whose purpose is to regulate the oxygen flow, taking into account the operational data collected in real time through the first module and the predicted targets defined through the simulation scenarios carried out by the system for predicting malfunctions and PaO2 fluctuations at the user's premises. This system will thus offer its users several functionalities: complete support matched to their activity during the day and during sleep, thanks to the knowledge base developed through the simulations and the adjustment system; and an efficient means of communication and remote assistance, improved over time through prediction algorithms and models and maintained for respiratory complications, as a promising alternative to technologies based on the use of concentrators. The results collected will enable cities to develop a proprietary knowledge base for the remote monitoring of Covid-19 patients, which is of significant benefit for pandemic management and for the deployment of smart healthcare within the framework of smart cities. 
SOLUTIONS IMPACTS IN THE FIGHT AGAINST THE VIRUS AND MAIN IMPLEMENTATION CHALLENGES. Our proposed solutions are based on generic and adaptive platforms characterized by data-driven and service-based architectures relying on layer interoperability and security. The interoperability system of the architecture, consisting of a data governance unit and a specific security mechanism that will be integrated during the development phase of the hardware and software components, aims to improve the scalability of the platform. 
Its first role is to ensure the adaptability of the platform to a variety of contextual and functional constraints of its deployment environment; its second role is to establish standard communication and data exchange models based on an analysis of the flows between the layers of the architecture governing the platform, thus creating a set of metadata and conceptual reference models for each type of system involved in the realization of services. For this component we will be mainly inspired by domain ontologies and semantic web models. The implementation of these solutions requires a common consensus among stakeholders with different expertise and responsibilities. This consensus cannot be reached without improving system trustworthiness and interoperability. Improving the interoperability of the system requires considerable investment in the further refinement of cities' network architectures and of the regulatory systems related to data sharing, the use of artificial intelligence and the exploitation of personal data. Ethics related to the use of artificial intelligence is currently evolving; however, the standards and reference systems available to date remain limited, particularly with regard to specific territorial contexts. 
CONCLUSION AND FUTURE WORKS. The current health crisis is to date one of the most critical crises in the history of humanity. Digital technologies have so far proven to be one of the most effective weapons against the crisis, able to reduce its socio-economic, industrial, and sanitary impacts. The use of artificial intelligence and simulation in healthcare for modelling, prediction and decision-making is not new, and in recent years, driven by the development of IoT and cloud computing technologies, the concept of smart cities has gradually expanded worldwide. As we have seen, several cities around the world are trying to adopt this vision, and simulation has proven that it can accelerate this deployment. Merging digitization tools, and the feedback acquired from their use, with the new vision of intelligent cities and advanced simulation tools improves pandemic management by establishing a large-scale smart decision-making system. In this paper we have tried to outline a portfolio of applications that demonstrates the potential of this fusion through two applications, each addressing one aspect of pandemic management and targeting a specific group of stakeholders. The work on these solutions, as well as the analysis of previously tested solutions, has allowed us to detect a set of problems inherent to the deployment of digital solutions and tools in a global context such as the management of smart cities. To overcome these issues in the current pandemic context, considerable effort is needed regarding the improvement of territorial intelligence, the development of cities' current infrastructure, especially in developing countries, and finally the establishment of a regulatory framework for digital and artificial intelligence tools. Our forthcoming studies will focus on the development of these three axes and on the deployment of the proposed solutions across several contexts.
10,218.6
2020-11-23T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Monitoring the Impact of Air Quality on the COVID-19 Fatalities in Delhi, India: Using Machine Learning Techniques Objective: The focus of this study is to monitor the effect of lockdown on the various air pollutants due to the coronavirus disease (COVID-19) pandemic and identify the ones that affect COVID-19 fatalities so that measures to control the pollution could be enforced. Methods: Various machine learning techniques: Decision Trees, Linear Regression, and Random Forest have been applied to correlate air pollutants and COVID-19 fatalities in Delhi. Furthermore, a comparison between the concentration of various air pollutants and the air quality index during the lockdown period and the last two years, 2018 and 2019, has been presented. Results: From the experimental work, it has been observed that the pollutants ozone and toluene have increased during the lockdown period. It has also been deduced that the pollutants that may impact the mortalities due to COVID-19 are ozone, NH3, NO2, and PM10. Conclusions: The novel coronavirus has led to environmental restoration due to lockdown. However, there is a need to impose measures to control ozone pollution, as there has been a significant increase in its concentration and it also impacts the COVID-19 mortality rate. 
An epidemic that occurs over a very large area worldwide and affects a large population is called a pandemic. An influenza pandemic is characterized by widespread transmission occurring worldwide. 1 Some of the epidemics that have been recorded are the severe acute respiratory syndrome coronavirus (SARS-CoV) 2 in 2002 and the Middle East respiratory syndrome coronavirus (MERS-CoV) that emerged in Saudi Arabia in 2012. Another swine-origin influenza pandemic, known as A (H1N1), emerged in March 2009 and led to about 3200 deaths worldwide by September 2009, for which a vaccine was later developed. 3 An outbreak of pneumonia of unknown origin occurred in Wuhan, China. This outbreak was later attributed to a novel coronavirus, SARS-CoV-2, responsible for causing coronavirus disease 2019 (COVID-19). 4,5 The virus affects the lower respiratory tract and leads to serious illness in older people or in those with diabetes, heart, or respiratory problems. 6 Initially, all the cases in other countries were attributed to infection from China, but the outbreak later extended to a number of countries such as Japan, South Korea, Italy, and the United States. COVID-19 has affected countries in all continents and was thus declared a pandemic by the World Health Organization (WHO) on March 11, 2020. 7 As per the data by Johns Hopkins University, as of April 10, 2020, more than 1.49 million infections and 90 000 deaths were reported due to COVID-19 worldwide. 8 India is the second most populated country in the world, and thus any unrestrained pandemic in India can affect about one-sixth of the world's population. 9 As per the Ministry of Health and Family Welfare (MoHFW), Government of India, by April 10, 2020, there had been 6039 active cases and 206 deaths due to COVID-19 in India. The highest number of cases has emerged in Maharashtra with 1364 cases and 97 deaths, followed by Delhi with 898 cases and 13 deaths. 10 To mitigate the spread of COVID-19, MoHFW in January 2020 advised against travel to China, and thermal screening of all passengers returning from various countries was carried out at 21 airports across India. 
Due to the absence of any vaccine or medicine for COVID-19, to slow down the spread of the virus there is a need for early detection and self-isolation of infected patients, along with quarantine and hand hygiene, as the transmission occurs through droplets from coughing and sneezing. 11 The WHO 12 and MoHFW have issued the following guidelines to mitigate the spread of COVID-19:
• To wash hands frequently using soap and water or alcohol-based hand rub
• To cover the mouth and nose with a disposable tissue or handkerchief while sneezing or coughing and throw the used tissues away immediately
• To avoid public gatherings and practice social distancing
• To visit a doctor while wearing a mask if one experiences fever, cough, and difficulty breathing 13,14
COVID-19 affects the lower respiratory tract, so air pollution could further impact the deaths due to the virus. Thus, there is a need to enforce regulations to control air pollution both during and after a lockdown. The influence of the lockdown due to the COVID-19 pandemic on air quality in India has been studied by many researchers. Mahato et al. 15 studied the air quality of Delhi amidst the lockdown due to COVID-19. Seven pollutant concentrations for 34 stations in the city were studied, and it was found that PM 2.5, PM 10, CO, and NO 2 showed the maximum reduction in comparison to the pre-lockdown phase. Kambalagere 16 analyzed the air quality index (AQI) of Bengaluru before and after the lockdown and found that the air quality improved from a hazardous level during the lockdown. Contini and Costabile 17 studied the influence of atmospheric pollutants on COVID-19 and its mortality rate. It was found that PM 2.5 and PM 10 concentrations may increase the vulnerability to COVID-19. Therefore, air pollution along with other factors like population, age, density, and meteorological parameters should be considered in the future to determine the importance of these factors in the mortality rate due to COVID-19. Mitra et al. 18 compared the concentration of CO 2 at 12 monitoring stations in Kolkata during April 2019 and during the lockdown in April 2020. Temporal variation of CO 2 was observed, but no statistically significant variation existed between the various monitoring stations. Furthermore, some sites did not show a decrease in CO 2, as their previous concentration was already low due to widespread floral species. Sharma et al. 19 studied the restricted emissions during the lockdown due to the COVID-19 pandemic and analyzed the concentration of 6 criteria pollutants from mid-March to mid-April of 2017 to 2020 in over 22 cities of India. While all the pollutants and the AQI decreased significantly, an increase in ozone was observed. Furthermore, the WRF-AERMOD model was applied for predicting the PM 2.5 concentration of Delhi with actual meteorological parameters and the events of November 2019, and it was found that there was an increase of 33% in PM 2.5 concentration. Though an overall decrease in the concentration of the majority of pollutants has been observed during the lockdown period, the influence of these pollutants on COVID-19 and its mortality rate needs to be explored further. In this study, the impact of various pollutants in Delhi on fatalities due to COVID-19 has been studied, and a further comparison of air pollution levels with the past 2 years has been performed. 
METHODS. The comparison of air pollutants in Delhi during the lockdown period with the previous 2 years, and the methodology used to assess the impact of air pollutants on COVID-19 fatalities, are shown in the following section. 
Air Quality of Delhi. Air quality is assessed by the AQI tool, which maps the concentrations of a number of pollutants (CO, SO 2, PM 2.5, ozone, NO 2, etc) into a single value. To compute this index, a sub-index of each pollutant is first calculated, and these sub-indices are then combined in a weighted additive form. The AQI has been divided into 6 categories in India: Good (0-50), Satisfactory (51-100), Moderately Polluted (101-200), Poor (201-300), Very Poor (301-400), and Severe (401-500). 20,21 Air pollution is the cause of respiratory and other impacts on human health and increases mortality and morbidity rates. 22,23 The National Capital Territory (NCT) of Delhi is one of the most polluted cities of the world. Major sources of poor air quality in Delhi include industrial activity and the emissions from vehicles, which are increasing at a rate of 7% every year as per the Transport Department, NCT of Delhi. 24 The first positive case of COVID-19 was reported in Delhi on March 2, 2020. Following that, all primary schools were shut from March 6, 2020, and all colleges, schools, and cinema halls from March 13, 2020. Subsequently, all domestic and international flights were canceled, and a 21-day lockdown was announced by the Indian Prime Minister to mitigate the spread of COVID-19, which led to the closure of factories and the halting of vehicle traffic, as most people were restricted to their homes. Thus, the air quality of Delhi has improved drastically. The line graphs between the AQI and the number of deaths, and between the AQI and the number of COVID-19 cases in Delhi, are shown in Figure 1. From Figure 1, it has been observed that the air quality of Delhi improved significantly as the number of COVID-19 cases increased. It has also been noted that the improvement was gradual. From the plots in Figure 2, it has been observed that the overall air quality for the year 2020 showed improvement over the previous 2 years starting in January. The reason for this can be attributed to COVID-19, as its first positive case was reported in India in January 2020. Furthermore, it has also been noted that the concentration of the pollutants has decreased significantly, apart from ozone and toluene. The average concentration of all the pollutants and the AQI in 2020, compared with those of the previous 2 years for the months of January, February, and March, has been summarized in Table 1. 
Description of the Machine Learning Techniques. In this paper, feature selection has been carried out using various machine learning techniques, namely, Decision Trees, Linear Regression, and Random Forest. These techniques are explained in detail in the following texts. Decision Trees. Decision Trees represent each parameter by a node in the tree; the values of the parameter are represented by the respective branches of the node, and the input is divided based on the values of the various parameters. 25 The design of a decision tree classifier depends on the design of the tree structure, the feature subsets at each internal node, and the decision rule to be used at each node. Error rates, the number of nodes in the tree, and information gain are some of the criteria for the design of the tree structure. 
Branch-and-bound techniques and greedy algorithms are used for feature subset selection, whereas entropy and information-based approaches are generally used as the decision rule at each node. 26 Linear Regression. Linear Regression represents a dependent variable as a linear combination of various independent variables. 27 A scatter plot is constructed and a correlation is computed between the response variable and the predictors. The regression coefficients, that is, the intercept and the slope coefficients, are finally calculated to find the regression line, which determines the predicted value of the response variable. 28 Random Forest. Random Forest is based on the forecasts of a number of trees, where each tree is trained independently and the resulting values are then averaged. 29 To construct a Random Forest, samples equal to the number of trees are drawn from the dataset, and a tree is grown by choosing the best split amongst all the input variables at each node. Prediction is carried out by aggregating the predictions of all the sample trees: for regression an average prediction is used, and for classification the majority vote amongst predictions is considered. 30 
Methodology to Monitor the Impact of Air Quality on COVID-19 Fatalities. Recent studies have suggested that air pollution could lead to an increase in COVID-19 deaths, as the virus tends to weaken the respiratory system. Thus, there is a need to identify the pollutants in Delhi that affect the novel coronavirus so that measures to control pollution can be enforced. In this work, feature selection techniques based on machine learning are employed to find the various pollutants that influence COVID-19 deaths. The methodology to monitor the impact of air quality on deaths due to COVID-19 has been summarized in Figure 3.
• The dataset with AQI and various pollutants (PM 2.5, CO, NO 2, SO 2, ozone, PM 10, toluene, benzene, and NH 3) as predictors and COVID-19 deaths in Delhi as the response is collected from the Central Pollution Control Board 31 and the MoHFW website.
• To pre-process the dataset, the missing values of the various pollutants have been replaced by their mean. The dataset has then been split into a 70% training set and a 30% test set.
• Feature selection has been carried out using Decision Trees, Linear Regression, and Random Forest. These techniques are used to build a model on the training data and compute the feature importance on the test data.
• To train the data using the machine learning techniques, first the tuning parameters for each model are chosen and then resampling is performed using the cross-validation method. The tuning parameters used for Decision Trees, Linear Regression, and Random Forest are, respectively, the complexity parameter used to select the optimal tree size, the intercept of the regression line, and the number of input parameters used for splitting at each node.
• The importance is computed on the test data based on the model statistic. A feature is of importance if a reduction in the model statistic is noticed when that feature is added to the model. For Linear Regression, the t-statistic is used, and for Decision Tree and Random Forest, the mean square error is used as the model statistic.
• The pollutants that influence COVID-19 deaths are then selected by the machine learning techniques, and various measures to control pollution could be enforced to handle the COVID-19 crisis (a minimal feature-importance sketch is given below). 
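The feature-selection step described above can be illustrated with scikit-learn. The sketch below fits a decision tree, a linear regression, and a random forest on a synthetic pollutant table and reports feature importances (tree-based importances and absolute regression coefficients). The column names mirror the study's predictors, but the data are randomly generated placeholders, not the CPCB/MoHFW dataset, and the ranking shown is therefore not the study's result.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
cols = ["PM2.5", "PM10", "NO2", "SO2", "CO", "ozone", "NH3", "benzene", "toluene"]
X = pd.DataFrame(rng.uniform(0, 200, size=(120, len(cols))), columns=cols)
# Synthetic response built so that a few pollutants matter (placeholder only).
y = 0.04 * X["ozone"] + 0.03 * X["NO2"] + 0.02 * X["PM10"] + rng.normal(0, 1, 120)

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)

models = {
    "decision_tree": DecisionTreeRegressor(random_state=0),
    "linear_regression": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    if hasattr(model, "feature_importances_"):
        importance = pd.Series(model.feature_importances_, index=cols)
    else:
        importance = pd.Series(np.abs(model.coef_), index=cols)  # coefficient magnitude as a proxy
    print(name, importance.sort_values(ascending=False).head(4).round(3).to_dict())
```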
RESULTS. Amid the lockdown, the concentration of most of the pollutants has reduced. To monitor the influence of the pollutants on COVID-19 and its mortality rate, various machine learning techniques have been applied to extract the relevant features. The correlation between the pollutants and the deaths due to COVID-19 in Delhi has been depicted in Table 2, and the machine learning techniques have been used to identify the pollutants that influence the deaths due to the novel coronavirus. The importance of each pollutant, as computed by the various techniques, has been displayed in Table 3 and plotted in Figure 4. From Figure 4, it can be inferred that the important pollutants found by all 3 techniques are ozone, NH 3, NO 2, and PM 10. Thus, there is a need to take measures to control these pollutants during and after lockdown, as they influence COVID-19 deaths. 
DISCUSSION. In December 2019, an outbreak of pneumonia of unknown origin occurred in Wuhan, China. This outbreak was later identified to be caused by a novel coronavirus responsible for COVID-19. The outbreak has affected countries in all continents and thus has been declared a pandemic by the WHO. 32 The virus leads to serious illness in older people or those with diabetes, heart, or respiratory problems, and would lead to more COVID-19 deaths in areas with poor air quality. Thus, there is a need to enforce regulations to curb pollution during and after lockdown. In this work, it has been found that ozone and toluene have increased in the lockdown period, and that the pollutants that may impact the COVID-19 fatalities are ozone, NH 3, NO 2, and PM 10. Ozone is of significance, as it is one of the pollutants that has increased in the lockdown period and may considerably affect the mortality due to the novel coronavirus; also, it has been noted that high levels of ozone concentration lead to mental conditions such as depression. 33 At reduced NO 2 levels, the ozone concentration becomes high, as there is no NO 2 left to deplete it. 34 Controlling VOC emissions from vehicles and from the combustion of solid fuels is crucial to control ozone pollution. Changing the transport system towards cleaner fuels and battery-powered vehicles, and controlling emissions due to industry, construction, and waste burning, is crucial to improve air quality. 
CONCLUSIONS. In this work, experiments using various machine learning techniques have been performed to monitor the impact of various air pollutants during the lockdown amid COVID-19 in Delhi and to identify which pollutant types contribute to the deaths. Further, a comparison between the concentration of various air pollutants and the AQI during the lockdown period and the last 2 years has been presented. From this study, it has been inferred that the concentration of all the pollutants has decreased significantly apart from ozone and toluene, which increased by 94.42% and 95.98%, respectively. The highest percentage decreases of 63.68%, 56.56%, and 93.16% exist for the concentrations of NO 2, SO 2, and benzene, respectively. Machine learning techniques have identified ozone, NH 3, NO 2, and PM 10 as indicators that may impact deaths caused by COVID-19. The parameter that holds importance here is ozone, as an increase in its concentration has been noted, and it has further been observed that it impacts the COVID-19 mortality rate. In the future, preliminary precautions can be taken to mitigate the levels of VOCs, which further impact the ozone concentration. 
FQHE and $tt^{*}$ geometry

Cumrun Vafa has proposed a microscopic description of the Fractional Quantum Hall Effect (FQHE) in terms of a many-body Hamiltonian $H$ invariant under four supersymmetries. The non-Abelian statistics of the defects (quasi-holes and quasi-particles) is then determined by the monodromy representation of the associated $tt^*$ geometry. In this paper we study the monodromy representation of the Vafa 4-susy model. Modulo some plausible assumptions, we find that the monodromy representation factors through a Temperley-Lieb/Hecke algebra with $q=\pm\exp(\pi i/\nu)$. The emerging picture agrees with Vafa's other predictions as well. The bulk of the paper is dedicated to the development of new concepts, ideas, and techniques in $tt^*$ geometry which are of independent interest. We present several examples of these geometric structures in various contexts.

The Fractional Quantum Hall Effect (FQHE) describes some peculiar quantum phases of a system of a large number N of electrons moving in a two-dimensional surface S in the presence of a strong magnetic field B normal to the surface, at very low temperature (for background see [2]). These quantum phases are classified by a rational number ν ∈ Q_{>0}, called the filling fraction, which measures the fraction of states in the first Landau level which are actually occupied by the electrons,
$$\nu \;=\; \frac{N}{\Phi/2\pi}, \qquad (1.1)$$
where Φ is the total magnetic flux through S, so that Φ/2π is the number of one-particle states in the first Landau level. The quantum phase for a given ν is characterized by a specific topological order of the ground state(s). The topological order is captured by the (possibly non-Abelian) generalized statistics of the topological defects (quasi-holes and quasi-particles) which may be inserted at given points {w_k} in the surface S where the electrons move. The generalized statistics of quasi-holes is the main object of interest in the theory of such phases. In principle, the microscopic description of the system is provided by the Schroedinger equation governing the dynamics of the N electrons,
$$H\,\psi(y_1,\dots,y_N) \;=\; E\,\psi(y_1,\dots,y_N). \qquad (1.2)$$
The dependence of H on some continuous parameters is however interesting, even if their deformation does not close the gap and leaves the Hamiltonian in the same strict topological class. A basic example is the set of positions w_k ∈ R^2 at which one inserts the defects. If we keep track of the dependence on these parameters in solving the Schroedinger equation (1.2), we may follow how the ground state(s) change when we take one defect around another, thus determining their generalized statistics. Morally speaking, the solution to (1.2) defines a connection on the space of defect configurations, and parallel transport along closed loops in this space defines the generalized statistics. 1 Then, out of the infinite-dimensional space of possible deformations of the Hamiltonian H, all of which locally preserve the energy gap, 2 there is a finite-dimensional sub-space of deformations which may be used to probe the quantum order; the corresponding couplings are essential to understand the nature of the topological phase. All other couplings are pretty irrelevant, and we are free to deform them in any convenient way in order to make the analysis easier. Thus, pragmatically, a microscopic description consists of a family of (gapped) Hamiltonians H(w_k) for the N-electron system, where the w_k are the essential parameters which take values in some essential coupling space X. H(w_k) is unique up to an equivalence relation given by arbitrary deformations of all inessential parameters while preserving the gap.
In a given FQHE topological phase, from the dynamics of the microscopic degrees of freedom there emerges at low-energy an effective 2d QFT Q for the (non-local) quasi-hole "field" operators h(w); the topological phase is then captured by the braiding properties of their multi-point correlators as we transport the h(w)'s around each other in closed loops. One of the goals of the theory is to understand the effective QFT of quasi-holes for a given value of the filling fraction ν. Starting from M-theory considerations, Vafa [1] puts forward the remarkable proposal that the relative universality class of Hamiltonian families which describes FQHE with given filling fraction ν contains explicit families {H(w)} w∈X which are invariant under extended supersymmetry with four-supercharges (4-susy). As we review in section 3, this means that the action of the braid group B n on the topological defects h(w j ) coincides with the monodromy representation of the flat connection of the 4-susy supersymmetric Quantum Mechanics (SQM), and then the topological order of the FQHE system may be studied with the powerful tools of tt * geometry [3][4][5][6][7]. The purpose of the present paper is to study the tt * monodromy representation of the 4-susy SQM Hamiltonians which represent the FQHE relative universality classes, and determine the properties of their quantum topological phase. The observables one computes this way may potentially be tested in actual experiments in the laboratory. Before going to that, in section 2 we argue from the first principles of Quantum Mechanics that the Vafa Hamiltonian is the physically correct one to describe many electrons, moving in a plane which interact with each other, in presence of a parametrically large magnetic field. ref. [1] discusses FQHE from several viewpoints besides the microscopic one based on the 4-susy SQM model, all of them inspired by M-/string theory consideration. The results of the effective approach in section 2 of [1] then constitute predictions of the results one is expected to obtain from the microscopic description (section 3 of [1]). In this paper we get full agreement with Vafa expectations: the way they arise from the microscopic theory looks quite elegant and deep from the geometrical side. We find that the 4-supercharge Hamiltonian proposed by Vafa describes FQHE for the following series Our results are "exact" in the sense that no asymptotic limit is implied: we do not assume any particular regime of the discrete or continuous parameters of the quantum model besides the defining assumption of the FQHE that the magnetic field B is parametrically large. Our computations do not rest on some approximation scheme, but on subtle general properties of tt * geometry and some plausible assumption. While the geometric statements are beautiful, plausible, and supported by explicit examples, the arguments we present fall short of being proofs. The bulk of the paper is devoted to the study of advanced topics in tt * geometry required for the analysis of FQHE. Most of these developments have not appeared before in print, and some look rather surprising. In this direction there is still work to do. The idea that the IR physics of some concrete physical system, actually realized in the laboratory -as the FQHE materials -does have a microscopic description in terms of a Lagrangian with extended supersymmetry may seems rather odd at first. In itself supersymmetry is not a problem since, for a gapped susy system, the supercharges just vanish in the IR sector. 
But extended supersymmetry is a subtler story. There are obvious obstructions to the uplift of the IR sector of a gapped quantum system to a 4-susy Hamiltonian model. We conclude this introduction by showing that these obstructions are avoided in the FQHE case. This is quite remarkable in its own right. In section 2 we shall give detailed arguments to the effect that the real-world microscopic FQHE Hamiltonian H FQHE does have a canonical 4-susy uplift of the form proposed by Vafa. Obstructions to 4-susy uplift. The Lagrangian of a 4-susy SQM is the sum of two pieces, called the D-term and the F -term. Couplings entering in the D-term are inessential for the IR sector, but there are finitely many F -term couplings which do are essential in the IR: they take value in some manifold 4 X tt * (the tt * manifold). Therefore, in order to have a 4-susy uplift, our quantum system should satisfy a necessary condition: its essential coupling space X should match the tt * one X tt * . This is a formidable restriction since X tt * is a very special kind of manifold: a) it is a complex analytic space, b) it admits a complete Kähler metric with a global Kähler potential, and c) if all infinitesimal deformations are unobstructed, X tt * has the structure of a Frobenius manifold [8]. The fact that the essential parameters of FQHE satisfy all these peculiar conditions looks quite remarkable in itself, and gives confidence on the proposal put forward by Vafa. While this correspondence JHEP12(2019)172 may look quite unlikely at first, it is pretty natural from the M-theory perspective [1]. More direct physical arguments to believe Vafa's supersymmetric picture is correct will be discussed in section section 2. Organization of the paper. The paper is organized as follows: in section 2 we shall discuss the physics of the FQHE and present the reasons to believe in the 4-supercharges description. Here we fill in the details of various deep arguments sketched in section 3 of [1]. In section 3 we review the basics of tt * geometry mainly to fix the language and notation. In section 4 we introduce a first block of new developments in tt * geometry: here the focus is on the natural and deep interconnection between tt * geometry and subjects like the Knizhnik-Zamolodchikov equation [9], (Iwanori-)Hecke algebras [11], the Gaudin integrable model [12] and all that. Section 5 contains a second block of special tt * topics: here we consider the interplay between tt * geometry and statistics from the viewpoint of tt * functoriality, and connect these issues to the Heine-Stieltjes theory. In this section we also introduce the notion of tt * dualities, i.e. correspondences between different looking quantum systems with four supercharges which have identical tt * geometry (i.e. same brane amplitudes, metrics, new indices etc.). In section 6 the ideas developed in sections 4, 5 are applied to the Vafa model of FQHE to get the monodromy representation we look for. We present our conclusions in section 7. The Vafa model vs. the microscopic physics of FQHE The fractional quantum Hall effect arises from the quantum dynamics of a large number N of electrons moving in a two-dimension surface S subject to a strong external magnetic field B. In principle the quantum physics may be determined by solving the Schroedinger equation (1.2) for the many electron system. 
The actual Hamiltonian H contains a large number of degrees of freedom, it is involved and poorly known, so the direct approach from the microscopic side may seem totally hopeless. However, as far as the only observables we wish to compute from the Schroedinger equation (1.2) are the ones which control the topological order of the quantum phase, the problem becomes tractable under some mild assumptions. Generalities The basic assumption is that the strong magnetic field is really strong, so that there is a parametrically large energy-gap between the low-lying energy levels and the rest of the Hilbert space H. More precisely, the Hamiltonian is assumed to have the schematic form where y i is the position of the i-th electron in the plane R 2 ∼ = C (we shall set z i ≡ y i,1 +iy i,2 ), p i its conjugate momentum, σ i its spin d.o.f., and A the background gauge field ∇× A = B. The interacting Hamiltonian H int describes all other interactions; its crucial property is that JHEP12(2019)172 it is O(1) as B → ∞. The additive constant in the large parenthesis is chosen so that the ground state energy vanishes. We assume ν ≤ 1. Let H Φ ⊂ H be the subspace of the Hilbert space consisting of states whose energy is bounded in the limit B → ∞; the orthogonal complement H ⊥ Φ is separated from H Φ by a large O(B) energy-gap. One has Note that in H Φ the electrons are polarized and the spin d.o.f. get frozen in their Clifford vacua. Thus, if we are only interested in the physics at energies B/m we may forget these degrees of freedom. Acting on the vector space H Φ , the operator H B is identically zero; we are reduced to a quantum system with a finite-dimensional Hilbert space H Φ with Hamiltonian H eff = P Φ H int P Φ where P Φ is the projection on H Φ . The fact that H Φ is finite-dimensional is not a significative simplification (unless ν = 1), since in realistic situations the dimension of the "small" space H Φ is something like 10 10 14 and gets strictly infinite in the thermodynamic limit. To proceed forward one needs new physical insights. In ref. [1] two novel ideas were proposed: 5 1. the low-lying Hilbert space H Φ is isomorphic to the space of supersymmetric vacua of a certain 4-susy SQM model; 2. the SQM system has a unique preferred vacuum |vac which is identified with the vacuum of the physical FQHE under the isomorphism in 1. Our first goal is to flesh out the above two ideas in some detail. Electrons in a finite box with magnetic flux To get a clean problem, we work in a finite box, i.e. we replace the plane C in which the electrons move with a very large flat 2-torus E. The complex structure τ on the elliptic curve E is immaterial in the infinite volume limit: we fix it to any convenient value. Also the spin structure is irrelevant; it is convenient to pick up an even one, 6 O(S), associated to a divisor S = p 0 − q where p 0 , q ∈ E are distinct points which satisfy 2p 0 = 2q. We are free to translate p 0 ∈ E according convenience. In a holomorphic gauge, Az = 0, an Abelian gauge field A on E is determined by two data: i) a holomorphic line bundle L → E with Chern class c 1 (L) = Φ/2π, where Φ > 0 sheaf V on the complex space X. O is the structure sheaf of X and M the sheaf of germs of meromorphic functions. An asterisque denote the sub-sheaf of invertible elements of the given sheaf. 2) If D = i nipi is a divisor on a smooth curve X, we fix a Cartier representative of it, i.e. 
we take a sufficiently fine open cover {Ui} of X and fix ψ0,i ∈ Γ(Ui, M * ) such that ψ0,i/ψ0,j ∈ Γ(Ui ∩ Uj, O * ). We write O(D) for the associated line bundle (≡ invertible sheaf) with transition functions ψ0,i/ψ0,j. The defining section ψ0 of O(D) is the one given by ψ0|U i = ψ0,i. We write ∼ for linear equivalence of divisors. JHEP12(2019)172 is the magnetic flux through the surface E, and ii) a Hermitian metric h on the fibers of L. Locally In such a holomorphic gauge, the low-lying wave functions ψ of the one-particle Hamiltonian 7 are simply the holomorphic sections of the line bundle L twisted by O(S), and for the N electron system In this gauge the low-lying wave-function Ψ are independent of h; this does not mean that h is irrelevant for the low-energy physics, because the inner product in the space H Φ depends on h. To be very explicit, we choose an effective divisor D = i=1 n i p i such that L = O(D). Then L(S) = O(D + S). The divisor D + S, unique up to linear equivalence, has a defining meromorphic section ψ 0 with a zero of order n i ≥ 1 at each point p i ∈ E, a simple zero at p 0 , and no other zeros. In addition, ψ 0 has a single pole at q and no other poles. Because of the pole ψ 0 ∈ Γ(E, O(D + S)). The map ψ → ψ/ψ 0 ≡ φ sets an isomorphism we get the linear isomorphism On the other side of the correspondence, we consider a 4-susy SQM with a single chiral field z taking value in K ≡ E \ supp F , where E is the elliptic curve on which the electrons move, dz is a holomorphic differential on E, and F an effective divisor. 9 We choose the oneparticle superpotential 10 W (z) such that its derivative, W (z), is a meromorphic function on E whose zero-divisor D ≡ i=1 n i p i is the one describing the magnetic background in which the electrons move. The polar divisor of W (z) is F ∼ D. In making the dictionary between the two quantum models, we use our freedom in the choice of p 0 to set p 0 ∈ Supp F , i.e. p 0 ∈ K. By the Chinese remainder theorem, 11 the chiral ring R of this 4-susy model is Comparing with (2.10) we get as vector spaces. On the other hand, in a 4-susy theory we have a linear isomorphism between the chiral ring R and the space V of susy vacua [17,20]. Composing the two we get a natural isomorphism between the low-lying states of the two quantum systems At the level of explicit Schroedinger wave-functions the isomorphism reads (for the oneparticle theory) ψ → ψ susy ≡ ψ ψ 0 dW + Q(something), (2.14) where in the r.h.s. we wrote the supersymmetric wave-functions as differential forms on K, as it is customary [17,21]. Q is a nilpotent supercharge, Q 2 = 0, which acts in the Schroedinger representation as the differential operator [17] The space of susy vacua V (and R) is isomorphic to the Q-cohomology with L 2 -coefficients. Eq. (2.14) says that, up to a boring factor, the low-lying wave-functions for the original magnetic system and the ones for the 4-susy SQM models are identical in Q-cohomology. To see that (2.14) is an isomorphism note that the elliptic function ψ/ψ 0 is holomorphic for ψ ∈ Γ(E, O(D + S)) if and only of it is identically zero, that is, the r.h.s. of (2.14) is Q-exact iff ψ = 0. The identification of the actual Schroedinger wave-functions on the two sides of the correspondence, if not fully canonical, is pretty natural. 9 If supp F = ∅, the target space K is Stein [16]. This ensures that the elements of the chiral ring R may be represented (non-uniquely) by global holomorphic functions [17], see also Hilfssatz C in [18]. 
The results of the latter paper imply that these nice properties hold even when dim R = ∞ (i.e. for infinite degree divisors) a fact we shall need in section 5.7.5 (for an exposition of these results, see section 26 of [19]). 10 We stress that we require only the derivative W to be univalued in K, not the superpotential W (z) itself which is typically multivalued. 11 We stress that the ring of holomorphic functions on a one-dimensional Stein manifold is a Dedekind domain. Then (say) Theorem 4 of [73] applies. Motion in the plane When the electron moves on C instead of a torus, the corresponding 4-susy SQM is defined by a one-form dW which is a rational differential on P 1 with a pole of order ≥ 2 at ∞ With this prescription on the behaviour at ∞, the scalar potential |W | 2 is bounded away from zero at infinity for all complete Kähler metrics on P 1 \ {∞}. This makes the quantum problem well-defined in the following senses: A. if we consider the 2d (2,2) Landau-Ginzburg model with superpotential W (z), this condition guarantees the absence of run-away vacua; B. if we consider the 1d 4-susy SQM obtained by dimensional reduction from the above 2d model, it guarantees the presence of a finite energy-gap, and also normalizability of the vacuum wave-functions. We mentioned both 2d and 1d models since the tt * geometry is the same for the two theories [3], and it is convenient to pass from one language to the other, since some arguments are more transparent in 2d and some other in 1d. The minimal regular choice is dW having a double pole at ∞; we shall mostly focus on this case. 12 The same argument as in the torus geometry gives the linear isomorphism H Φ ∼ = V also in the plane. The magnetic flux is 2π deg D, D being the zero divisor of dW . One writes the spin structure in the form O(−q) for some reference point q ∈ Supp D ∪ {∞}. The low-lying magnetic wave-functions are ψ ∈ Γ(P 1 , O(D − q)), dim Γ(P 1 , O(D − q)) = Φ/2π. In conclusion: for N non-interacting electrons in presence of a magnetic field, the low-lying Hilbert space is This is a mere linear isomorphism: the Hermitian structures on the two sides of the correspondence depend on additional data: in the original magnetic system on the fiber metric h, while in the 4-susy SQM on the detailed form of dW (z) which determines the groundstate Hermitian metric through the tt * equations [3]. Our next task is to find the explicit form of dW (z) which best mimics the Hilbert structure of H Φ for the magnetic system. Comparing Hermitian structures on H Φ For simplicity, we consider a single electron moving in C ≡ P 1 \ {∞} in presence of a strong magnetic field B macroscopically uniform along the surface. The extension to the case of N electrons is straightforward. JHEP12(2019)172 In the magnetic side, the Hermitian structure is defined by the fiber metric h = e −B|z| 2 , so that in a unitary gauge the low-level wave functions read ψ(z) uni = ψ(z) holo e −B|z| 2 /2 B > 0. (2.18) In the 4-susy side we have the rational differential dW (z) with Φ/2π zeros and a polar divisor of the form F = F f + 2∞. Generically, such a differential has the form with a i ∈ C × and ζ i ∈ C all distinct. An exact identification of the microscopic Hilbert space structures is a requirement a bit too strong. We content ourselves with equality after averaging over small but macroscopic domains U C. In the present context U being macroscopic means U B/2π 1. 
This weaker condition is all we need if we are interested only in predicting long-wave observables of the kind which characterize the quantum topological order. Let U C be such a domain. For B and µ large, so a large macroscopically uniform B corresponds (non surprising) to a roughly homogeneous distribution in C of the points ζ i ; the domain U is macroscopic iff it is much larger than the typical separation of the ζ i 's. After taking the ζ i to be regularly distributed in the plane, matching the Hermitian structures on the two sides of the correspondence boils down to fixing the residues a i of dW so that the probability of finding the electron in the macroscopic domain U C in the original magnetic system is the same as in the supersymmetric model. It is clear that a homogenous field should correspond to the residues being all equal. By a rotation of the Grassman coordinates θ we may assume the a i to be all real. JHEP12(2019)172 From the Schroedinger equation of the supersymmetric system, one has [17,22] − A possible large-field asymptotics consistent with this equation is provided the function in the r.h.s. a) is univalued in the large |z| region, and b) it goes to zero rapidly at infinity, so that ψ susy has a chance to be normalizable. For a superpotential as in eq. (2.19) with a i ∈ R for all i the first condition holds This function is the electrostatic potential of a system of point charges of size a i at positions ζ i superimposed to a constant background electric fieldμ. When averaged over a macroscopic region U , it looks like the potential for a continuous charge distribution with density σ(z) such that (2.29) where in the last equality we used the Poisson equation of electrostatics. 14 Comparing eq. (2.29) with eq. (2.20), which also should be true for all macroscopic domain U , we get that either all a i = −1 or all a i = +1, the two possibilities being related by a change of orientation. We fix conventions so that the external magnetic field is modelled in the susy side by (2.19) with a i = −1 for all i. Introducing defects From the susy side there is a natural way to introduce topological defects in the systems. One flips sign to a small number h of the residues a i . Now there is a small mismatch between the number of vacua and the effective magnetic field as measured by the fall-off of the wave-function at infinity: we have two extra vacua per defect. The extra vacua are localized near the position of the corresponding defect in the plane and may be interpreted as "internal states" of the defect. We identify these defects with the quasi-holes of FQHE. The Vafa superpotential emerges We return to Schroedinger equation with Hamiltonian (2.1). In the large B limit, the lowenergy physics is described by a quantum system with Hilbert space H Φ and Hamiltonian H ≡ P Φ H int P Φ . Under the isomorphism discussed above, this system may be seen as a deformation of the 4-susy model with superpotential W the sum of N copies of the above one-particle superpotential, i.e. W = N i=1 W (z i ). The additional terms in the Hamiltonian describe the interactions between the electrons. We can split these interactions in two groups: the ones which preserve supersymmetry and the ones which do not. The first ones may be inserted in the superpotential W (or in the D-term, these ones being IR irrelevant). One is led to a superpotential of the form where dW (z i ) models the background magnetic field and x a are the positions of the topological defects. 
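For orientation, the superpotential referred to here can be written out from the pole-and-residue data developed in this section; the following is a hedged sketch (only the pole structure is taken from the text, the normalization and the constant term µ follow the one-particle discussion above, and the 2β interaction term anticipates the derivation in the next paragraph):
$$dW(z)\;=\;\Big(\mu+\sum_{k}\frac{a_k}{z-\zeta_k}\Big)\,dz,\qquad a_k=-1\ \ \text{(background field)},$$
$$d\mathcal W\;=\;\sum_{i=1}^{N}\Big(dW(z_i)\;+\;\sum_{a}\frac{dz_i}{z_i-x_a}\;+\;2\beta\sum_{j\neq i}\frac{dz_i}{z_i-z_j}\Big),$$
i.e. residue −1 at the points ζ_k modelling the magnetic field, +1 at the quasi-hole positions x_a, and 2β at electron coincidences, in line with the residue assignments stated in the next paragraph.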
As a function of the position z i of the i-electron at fixed z j =i , the meromorphic one-form U i dz i can have poles only when z i = z j for some j = i. Generically U i dz i has only simple poles (including at ∞): we assume this to be the case. The residues are entire functions bounded at ∞, hence constants. Since W must be symmetric under permutations of the electrons, the most general superpotential differential is for some complex constant β. The Vafa model [1] has a superpotential of this form. Ref. [1] proposes to model the magnetic field by where ζ k are points forming some regular "lattice". Working on the plane C we prefer to add a constant contribution to dW (z) Vafa target space C, the coupling β may be any complex number. However, β gets quantized to a rational number when we study the model more carefully in a finite box E and we insist that the residues of dW have the correct values in (2.31). In this case dW is a meromorphic one-form in the Kähler space E N ; its restriction to the i-th factor space at fixed z j (j = i) is a meromorphic one-form on the elliptic curve E with single poles of residue −1 at the ζ k , residue +1 at the x a , and residue 2β at the z j =i and no extra pole at p 0 . Since the total residue of a meromorphic one-form vanishes i.e., as N, Φ → ∞, which is the value given in ref. [1]. In ref. [1] the equality 2β = 1/ν was obtained by comparing the 4-susy brane amplitudes [13] in the (unphysical) asymmetric limit with the Laughlin phenomenological wavefunctions [24]. However that argument does not fix β unambigously 15 since the superpotential is not univalued and one should go to a cover (see section 5.7); then the effective coupling β eff appearing in the brane amplitudes is a "renormalized" version of the superpotential coupling β [3,6,7]. In the rest of this paper we shall work on the plane C and keep β generic. We shall identify the filling fraction ν with (2β eff ) −1 . 15 The brane amplitudes in the asymmetric limit have the general form Γ e W φ where φ is a holomorphic N -form which represents the (cohomology class of) a susy vacuum. Clearly we are free to redefine φ → hφ and W → W − log h for h a holomorphic function, leading to an ambiguity in reading the superpotential out of the integral Γ e W φ. Remark: knowing the set of allowed integration cycles Γ reduces (or eliminates) the ambiguity. JHEP12(2019)172 lifts the huge degeneration of the ground-states of H W producing a unique true vacuum |vac . The FQHE topological order is a property of this particular state. We may formalize the situation as follows. The zero-energy eigenvectors of the supersymmetric Hamiltonian H W define a vacuum bundle over the space X of couplings entering in the superpotential whose fiber V x is the space of susy vacua for the model with couplings x. V is equipped with a flat connection ∇ extending the holomorphic Berry connection D (see next section). The quantum topological order of the supersymmetric model H W is captured by the monodromy representation of ∇. Switching on the interaction H su.br. selects one vacuum |vac x ∈ V x . The states |vac x span the fibers of a smooth line sub-bundle (2.40) L is endowed with two canonical sub-bundle connections, ∇ vac and D vac , inherited from ∇ and D, respectively. 
In general the sub-bundle curvature is quite different from the curvature of the original vector bundle; the discrepancy is measured by the torsion 16 [23,31] T : Correspondingly, a priori the monodromy of ∇ vac is neither well-defined nor simply related to the one of ∇. A priori there is no simple relation between the quantum order of the FQHE Hamiltonian H and the quantum order of the susy model with Hamiltonian H W . In order to have a useful relation two "miracles" should occur: M1: the monodromy representation V of the flat connection ∇ should be reducible with an invariant sub-bundle L ⊂ V of rank 1. Then the 4-susy SQM has a unique preferred vacuum which spans the fiber L x of the line sub-bundle; M2: the physical FQHE vacuum |vac is mapped by the isomorphism in section 2.2.2 to the preferred vacuum of "miracle" M1 (up to corrections which vanish as B → ∞). In other words, L = L . Whether M1 happens or not is purely a question about the supersymmetric model H W . The question may phrased as asking whether H W has an unique preferred vacuum. Ref. [1] suggests that such a preferred vacuum exists and is the spectral-flow 17 of the identity operator. While this sounds as a natural guess, it is certainly not true that in a general tt * geometry the spectral-flow of the identity spans a monodromy invariant subspace of V . That M1 holds for the special class of 4-susy models (2.31) appears to be a genuine miracle. 16 Notations: in this paper Λ k stands for the space of smooth k-forms, while Ω k for the space of holomorphic ones. We use the same symbols for the corresponding sheaves. 17 See section 3 for a review of the spectral-flow isomorphism. JHEP12(2019)172 The validity of M2 then rests on the fact that the preferred vacuum -if it exists at all -is bound to be the most symmetric one. Then one may argue as follows [1]: as long as the susy breaking interaction H su.br. is symmetric under permutations of electrons/quasiholes, translations and rotations, the true vacuum |vac will also be the (unique) maximal symmetric one. The conclusion is that -under our mild assumptions -the quantum order of the FQHE is captured by the 4-susy SQM model proposed in [1]. Review of basic tt * geometry We review the basics of tt * geometry in a language convenient for our present purposes. Experts may skip to the next section. 4-supercharge LG models: vacua and branes Even if tt * geometry is much more general, we describe it in a specific context, namely Landau-Ginzburg (LG) models with four supercharges (4-susy). By a (family of) LG models we mean the following data: a Stein manifold 18 K and a family of non-degenerate holomorphic functions parametrized holomorphically by a connected complex manifold X of "coupling constants" 19 x. Non-degenerate means that, for all x ∈ X , the set of zeros of the differential 20 dW (z; x) is discrete in K; for technical reasons it is also convenient to assume that the square-norm of the differential dW (z; x) 2 is bounded away from zero outside a (large) compact set (cfr. section 2.2.3). In the LG model the coordinates 21 z of K are promoted to chiral superfields, and we have a family of Lagrangians of the form The details of the D-terms are immaterial for us; we only need that there exists some Kähler potential K yielding a complete Kähler metric: this is guaranteed since K is Stein. 
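As a concrete illustration of this definition one may keep in mind the simplest massive case, the cubic model which reappears as Example 1 later in the text:
$$K=\mathbb C,\qquad \mathcal W(z;x)=\tfrac12\,x_1\,(z^3-3z)+x_2,\qquad d\mathcal W=\tfrac32\,x_1\,(z^2-1)\,dz .$$
For x_1 ≠ 0 the differential has the two isolated, non-degenerate zeros z = ±1, the critical values are x_2 ∓ x_1, the Witten index is d = 2, and ‖dW‖² grows at infinity, so all the requirements listed above are satisfied.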
22 Out of the data (K, W) we can construct two related 4-susy LG theories: a twodimensional (2d) (2, 2) QFT and a one-dimensional 4-supercharges supersymmetric Quantum Mechanical (SQM) system, the latter being the dimensional reduction of the first one by compactification on a circle S 1 . The physics of the two situations is quite different (e.g. 18 For properties of the Stein spaces see [23,[25][26][27]. We recall that: 1) a non-compact Riemann surface [19] is automatically Stein [16]; 2) all affine varieties are Stein. 19 It is often convenient to see the x's as a fixed background of additional chiral superfields. 20 d is the exterior derivative in K. It acts trivially on the constant couplings x. 21 In a Stein manifold, in the vicinity of each point there is a complex coordinate system made of global holomorphic functions [25][26][27]. I.e. we may choose the chiral fields z's so that they are well-defined quantum operators. 22 K : K → R may be chosen to be a global exhaustion [25][26][27]. JHEP12(2019)172 mirror symmetry [28] holds only in 2d), but the tt * geometries of the two theories are identical [3,29]. Thus, in studying the tt * geometry we may use the quantum-mechanical and the field-theoretical language interchangeably. Some aspects of the geometry may be physically obvious in one language but not in the other. Hence, while most of the literature uses the 2d perspective, in this paper we feel free to change viewpoint according convenience. Of course, the universality class of FQHE is described by the SQM LG model. The Hilbert space H of the SQM model is the space of differential forms on K with L 2 -coefficients [17,21]. The Lagrangian L x is invariant under a supercharge Q x which acts on forms as is isomorphic to the cohomology of Q x in H. Under the present assumptions, the vacuum space V x consists of primitive forms of degree N ≡ dim C K [17]. In particular, the vacua are invariant under the Lefshetz R-symmetry SU(2) R [23]. d ≡ dim V x is the Witten index [30], invariant under continuous deformations of x ∈ X such that dW 2 remains bounded away from zero outside a large compact set C K. The cohomology of Q x in the space of operators acting on H is called the chiral ring R x . A simple computation [17] yields 23 where J x ⊂ O K is the sheaf of ideals whose stalks are generated by the germs of the partials ∂ z i W(z; x). In the present framework, R x is a finite-dimensional, commutative, associative, unital C-algebra which in addition is Frobenius, i.e. endowed with a trace map − x : R x → C such that φ 1 φ 2 x is a non-degenerate bilinear form on R x . From the definitions we have an obvious linear isomorphism (the "spectral flow") [20] κ : which in the 2d context can be understood as the state-operator correspondence for the Topological Field Theory (TFT) obtained by twisting the physical model [3,28]. Eq. (3.6) extends to an isomorphism of R x -modules. The Frobenius bilinear form (the topological two point function in the 2d language) is [17] 24 JHEP12(2019)172 As a matter of notation, we shall write φ| for the vacuum state whose wave-function is κ(φ) which we write as a bra. We stress that in our conventions φ| is C-linear in φ, not anti-linear. Eq. (3.6) implies that tt * geometry is functorial with respect to (possibly branched) holomorphic covers 25 f : K → K [3] a property that will be crucial in section 5 below. 
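For reference, in the standard conventions of refs. [3, 17] the supercharge and the spaces just introduced can be summarized as follows (the normalizations are schematic and should be regarded as an assumption):
$$Q_x\;=\;\bar\partial\;+\;d\mathcal W(\,\cdot\,;x)\wedge\,,\qquad Q_x^2=0,\qquad V_x\;\cong\;H^{\bullet}_{Q_x}\big(L^2\text{-forms on }K\big),$$
$$\mathcal R_x\;\cong\;\Gamma\big(K,\,\mathcal O_K/\mathcal J_x\big),\qquad \mathcal J_x=\big(\partial_{z^1}\mathcal W(\cdot;x),\dots,\partial_{z^N}\mathcal W(\cdot;x)\big)\subset\mathcal O_K .$$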
Let ζ ∈ P 1 be a twistor parameter, and consider the smooth function Morse cobordism 26 implies the isomorphism [13] where H * (K, K x;ζ ; C) denotes the relative cohomology 27 with complex coefficients, and for some sufficiently large 28 constant Λ. The dual relative homology H * (K, K x;ζ ; C) is called the space of branes, because in 2d the corresponding objects have the physical interpretation of half-BPS branes [13]; the twistor parameter ζ specifies which linear combinations of the original 4 supercharges leave the brane invariant. The space of branes has an obvious integral structure given by homology with integral coefficients An integral basis of H * (K, K x;ζ ; Z) may be explicitly realized by special Lagrangian submanifolds of K and, more specifically, by Lefshetz timbles describing the gradient flow of F (z,z; ζ) for generic ζ [13]. By abuse of notation, we write |α; ζ x , (α = 1, . . . , d) for such an integral basis. Let {φ i } (i = 1, . . . , d) be a basis of R x ; we write { φ i |} for the corresponding basis {κ(φ i )} of V x . We may form the non-degenerate d × d matrix called the brane amplitudes. Ψ(x; ζ) iα is not uni-valued as a function of ζ due to the Stokes phenomenon [5] (and, in the 2d language, the related issue of BPS wall-crossing). tt * geometry On the coupling space X we have the vacuum vector bundle V If K is Stein, K is automatically also Stein [25]. 26 See e.g. Theorem 3.9 in [32]. 27 The space H * (K, K x;ζ ; C) is non-zero only in degree N . 28 Λ should be larger than the image of all critical values of W. JHEP12(2019)172 namely the sub-bundle of the trivial Hilbert bundle H × X whose fiber V x is the vacuum space (3.4) for the model with couplings x ∈ X . The differential operator Q x depends holomorphically on x (cfr. eq. (3.3)); then the isomorphism implies that the bundle V → X is holomorphic. The vacuum Berry connection, i.e. the sub-bundle connection on V induced by the trivial connection on H × X , is then both metric and holomorphic. There is a unique such connection, the Chern one [23], whose (1,0) and (0,1) parts are respectively where g is the tt * (Hermitian) metric matrix [3] Clearly, the (2,0) and (0,2) parts of the vacuum Berry curvature vanish We have a canonical (holomorphic) sub-bundle R of End(V ): whose fiber R x is the chiral ring of the theory with coupling x. Spectral-flow (or 2d topological twist) then yields the bundle isomorphism R ∼ = V . Note that tt * geometry defines two (distinct) natural Hermitian metrics on V : the one induced by the monomorphism V → H × X and the one induced by A superpotential W produces a (1, 0)-form C on X with coefficients in R, i.e. where [φ] x stands for the class of the holomorphic function φ in R x (cfr. (3.5)). Since we are free to add to W(z; x) a x-dependent constant, we may assume without loss that the coefficients of C belong to the trace-less part of End(V ). C is manifestly nilpotent, and both holomorphic and covariantly-closed [3] C ∧ C = DC = DC = 0. (3.20) We write C for the (0,1)-form which is the Hermitian conjugate of C with respect to the metric (3.16). C satisfies the conjugate of relations (3.20). It remains to specify the (1,1) part of the curvature of the Berry connection; one gets [3] DD + DD + C ∧ C + C ∧ C = 0. JHEP12(2019)172 Eq. (3.17), (3.20) and (3.21) are the tt * equations [3]. They are integrable [5,33] and, in fact most (possibly all) integrable systems reduce to special instances of tt * geometry. For ζ ∈ P 1 one considers the (non-metric!) 
connection on the vacuum bundle V → X The tt * equations can be neatly summarized in the statement that this connection is flat identically in the twistor parameter ζ Hence the linear system (called the tt * Lax equations) is integrable for all ζ. A fundamental solution to (3.24) is a d × d matrix Ψ(ζ) whose columns are linearly independent solutions, i.e. a basis of linear independent flat sections of V . A deeper interpretation of the tt * Lax equations as describing ζ-holomorphic sections of hyperholomorphic bundles in hyperKähler geometry may be found in ref. [7]. Given a fundamental solution Ψ(ζ) we may recover the tt * metric g. This is best understood by introducing the real structure (compatible with the rational structure induced by the branes) [3,5,33] Ψ(1/ζ) = g −1 Ψ(ζ). (3.25) Remark. Jumping slightly ahead, we observe that when the chiral ring is semi-simple (section 3.5) we may choose as an integral basis of branes the Lefschetz thimbles which originate from the (non-degenerate) critical points of W [13]. In this case (for a certain canonical basis of R defined in section 3.5) one has [3,5,33] Ψ(ζ)Ψ(−ζ) t = 1 (3.26) where one should think of the r.h.s. as the topological metric η in the canonical basis. Hence The interpretation of eqs. (3.26)(3.27) is that the brane spaces H * (K, K x,ζ ) and H * (K, K x,−ζ ) are each other dual (with respect to the natural intersection pairing 29 ) and both the topological and tt * metrics can be written in terms of the Lefschetz intersection pairing. This observation will be useful to clarify the tt * -theoretical origin of most constructions in the theory of braid group representations [11]. The tt * monodromy representation Let Ψ(ζ) (γ) be the analytic continuation of the fundamental solution Ψ(ζ) along a closed curve γ ∈ X in coupling space. Both Ψ(ζ) and Ψ(ζ) (γ) solve the tt * Lax equations at x ∈ X , hence there must be an invertible matrix (γ) ζ such that This produces a representation which is independent of the particular choice of the fundamental solution Ψ(ζ) modulo conjugacy in GL(d, C). Claim. We may conjugate the representation ζ in GL(d, C) so that it lays in the arithmetic subgroup SL(d, Z). To show the claim, we have to exhibit a preferred fundamental solution which has a canonical Z-structure. This is provided by the branes. It is easy to check that the brane amplitudes (3.12) are a particular fundamental solution to the tt * Lax equation [13]. This may be understood on general grounds: since the branes with given ζ ∈ P 1 have well-defined integral homology classes, for each ζ ∈ P 1 they define a local system on X canonically equipped with a flat connection, the Gauss-Manin one. Dually, the branes define a P 1 -family of flat connections on V which is naturally identified with the P 1 -family of tt * Lax connections ∇ (ζ) , ∇ (ζ) . The parallel transport along the closed loop γ should map a brane into a linear combination of branes with integral coefficients [13]. Then the matrix ζ (γ) in (3.28) and its inverse ζ (γ) −1 ≡ ζ (γ −1 ) should have integral entries, which entails det ζ (γ) = ±1. The negative sign is not allowed. 30 Since the entries of ζ (γ) are integers, they are locally independent of ζ. The brane amplitudes are multivalued on the twistor sphere; going from one determination to another the representation ζ (−) gets conjugated in SL(d, Z). Then, modulo conjugation, the tt * monodromy representation : is independent of ζ. 
By the same token, the conjugacy class of is also invariant under continuous deformation of the parameters x, i.e. changing the base point * ∈ X we use to define π 1 (X ) will not change the conjugacy class of (of course, this already follows from the properties of the fundamental group). The tt * equations (3.17), (3.20), (3.21) then describe the possible deformations of the coefficients of the flat connection ∇ (ζ) , ∇ (ζ) which leave the monodromy representation invariant, i.e. they are the equations of an isomonodromic problem. In the special case 30 If we normalize C to be traceless (as we are free to do), it follows from eq. (3.22) that the function det Ψ(ζ)/ det g ≡ det Ψ(ζ)/| det η| is constant in X . In special coordinates (which always exist [8]) η is a constant, so det Ψ(ζ) is also a constant with these canonical choices. JHEP12(2019)172 that the chiral rings R x (x ∈ X ) are semi-simple (≡ the 2d (2,2) model is gapped) the tt * isomonodromic problem is equivalent to the Miwa-Jimbo-Sato one [34][35][36][37][38][39][40][41], see ref. [4] for the detailed dictionary between the two subjects. At the opposite extremum we have the situation in which R x is a local ring for all x ∈ X . In this case the 2d (2,2) model is superconformal, and X is its conformal manifold; the tt * geometry is equivalent to the Variations of Hodge Structure (VHS) in the sense of Griffiths [31] and Deligne [42], see ref. [29] for a precise dictionary between the two geometric theories. In the particular case of Calabi-Yau 3-folds the VHS is called "special geometry" [43] in the string literature. For a generic superpotential R x is automatically semi-simple. 31 The locus in X where R x is not semi-simple is an analytic subspace, hence it has 32 real codimension at least 2; therefore, for all element [γ] ∈ π 1 (X ), we may find a representative closed path γ which avoids the non-semi-simple locus, that is, we may effectively replace X with the open dense subspace where R x is semi-simple. R x local For completeness, we briefly mention the situation for R x local, even if the main focus of this paper is the semi-simple case. Historically, tt * geometry was created [3,29] on the model of VHS, thinking of it as a "mass deformation" of VHS which holds even offcriticality. Hodge theory provides a good intuition about the properties of tt * geometry, and many Hodge-theoretical arguments may be extended to the wider tt * context. Typical massive (2,2) systems have a UV fixed point which is a regular SCFT, whose tt * geometry is described by VHS. In this case VHS geometry supplies the boundary condition needed to specify the particular solution of the massive tt * PDEs which corresponds to the given physical system: the correct solution is the one which asymptotes to the VHS one as the radius R of the circle on which the 2d theory is quantized is sent to zero [3,5,44]. R x semi-simple We recall some useful facts about semi-simple chiral rings. A commutative semi-simple C-algebra of dimension d is the product of d copies of C. Hence there is a complete system of orthogonal idempotents e i (i = 1, . . . , d) which span the algebra R x and have a very simple multiplication table 33 Explicitly, e i represents the class of holomorphic functions on K with value 1 at the i-th zero of dW and 0 at the other critical points (such functions exist since K is Stein). The Frobenius bilinear pairing has the form Rx is semi-simple iff, for all z ∈ K, the stalks (Jx)z of the sheaf Jx ⊂ OK are either the trivial ideal, i.e. 
(OK)z, or a maximal ideal mz ⊂ (OK)z. Then the coherent sheaf OK/Jx is a skyscraper with support on the (isolated) zeros of dW, the stalk at a zero being C. Therefore Rx ≡ Γ(K, OK/Jx) ∼ = υ∈sup Jx (C)υ. 32 Assuming that X is not contained in that locus, as it is the case for the models of interest in this paper. 33 In refs. [3,5] the basis {ei} of Rx was called the "point basis". JHEP12(2019)172 We write The basis { E i |} of V x yields the canonical (holomorphic) trivialization of V ; the natural trivialization is the one associated to the non-normalized basis { e i |}. The canonical trivialization is convenient since it makes the tt * equations model-independent and the connection with the isomonodromy PDEs transparent. But it has a drawback: the sign of the square-root in (3.33) has no canonical determination. Going along a non-trivial loop in coupling space we may come back with the opposite sign. The unification comes at the price of a sign conundrum: getting the signs right in the present matter is a wellknown headache. To simplify our live, we often study sign-insensitive quantities, such as squares, and be content if they have the correct properties, without bothering to fix the troublesome signs. Since E i E j = δ ij , in the canonical trivialization we do not need to distinguish upper and lower indices. The reality constraint [3] implies that the canonical tt * metric 34) Her(d) + being the set of positive-definite d × d Hermitian matrices. In the canonical trivialization the Berry connection for certain functions w i : X → C. The {w i }'s are the critical values of W(z; x). The map w : X → C d given by x → (w 1 (x), . . . , w d (x)) is a local immersion. In facts, the w i form a local coordinate system on the Frobenius manifold of all couplings of the TFT [8,33] which contains the physical coupling space X as a submanifold. 34 We writeX ⊂ X for the dense open domain 35 in which R x is semi-simple and Equivalently,X is the domain in which the function W is strictly Morse. Let E := w i ∂ w i be the Euler vector in X ; the anti-symmetric matrix In general it is a submanifold of positive codimension. Consider e.g. the 2d σ-model with target P n with n > 1. Higher powers of the Kähler form are elements of the chiral ring and their 2-form descendents can be added to the TFT action. Adding them to the physical action would spoil UV completeness. The corresponding phenomenon in the tt * geometry is that the solutions to the PDEs become singular for R small enough, i.e. at some large (but finite) energy scale. 35 The qualification in footnote 32 applies here too. JHEP12(2019)172 is called the new index [44]. In 2d (2,2) models Q ij plays two roles. First [5,44] it is the index capturing the half-BPS solitons in R which asymptote the i-th (resp. j-th) classical vacuum as x → −∞ (resp. +∞) where the theory is quantized in a strip of width L with boundary conditions the classical vacua i, j on the two boundary components. From (3.39) one learns 36 that the matrix Q is Hermitian. Second [3] it is a generalization of Zamolodchikov c-function [45] since Q ij is stationary only at fixed points of the RG flow, where the eigenvalues of Q ij become the U(1) R charges of the Ramond vacua of the fixed point (2,2) SCFT 37 which determine the conformal dimension of the chiral primaries and, in particular the Virasoro central charge c. The new index is a central object in tt * geometry also in 1d, where the above physical interpretations do not hold. 
Indeed, the full tt * geometry may be described in terms of the matrix Q only as we now review. We warn the reader that in the rest of this subsection the convention on the sum over repeated index does not apply. Lemma (see [4,33]). Let R x be semi-simple. The Berry connection in the canonical holomorphic gauge is antisymmetric with off-diagonal components The tt * equations may be written as a pair of differential equations for Q kl [33]. The first one expresses the fact that the (2,0) part of the curvature vanishes where the symbol u kl stands for the Arnold form [46] in configuration space The other equation for Q is obtained by contracting Computing the monodromy representation We study the tt * monodromy representation , eq. (3.30), for R x generically semi-simple. Since tt * is an isomonodromic problem we are free to continuously deform the model in coupling constant space X ; the only effect is to get matrices (γ) which possibly differ by an irrelevant overall conjugation. In particular, the eigenvalues of the monodromy matrices (γ), γ ∈ π 1 (X ) (3.44) 36 The statement is less elementary that it sounds. 37 If the 2d (2,2) model is asymptotically free the statement requires some specification, see [5]. JHEP12(2019)172 (which are algebraic numbers of degree at most d) and the dimension of their Jordan blocks are invariant under any finite continuous deformation. However, typically, to really simplify the computation we need to take the limit to a point at infinite distance in parameter space, i.e. a point in the closureX of the "good" space. In this case the limiting monodromy may be related by a singular conjugation to the original one. The eigenvalues of the monodromy matrices (γ) are continuous in the limit but the Jordan blocks may decompose into smaller ones. 38 This happens, for instance, when we take the UV limit of an asymptotically free model (see [5]). Therefore, monodromy eigenvalues are typically easy to compute, while the Jordan structure is subtler. However in many situations we know a priori that the monodromy matrix is semi-simple and so we do not loose any information. In the case relevant for the FQHE, when π 1 (X ) is a complicate non-Abelian group, the Jordan blocks are severely restricted by the group relations, so it is plausible that they can be recovered from the knowledge of the eigenvalues. There are three obvious limits in which the computation is expected to simplify; in the tt * literature they are called: i) the IR limit, ii) the asymmetric limit [6,13], and iii) the UV limit. In a related math context ii) is called the homological approach and iii) the CFT approach [47]. The IR and asymmetric approaches are widely known and used [5,6,13,48]. They essentially reduce to the combinatorics of 2d wall-crossing [5,13] (equivalently, of 1d BPS instantons [49]). The UV approach seems less known, and we are not aware of a good reference for it, so we shall develop it in some detail in section 3.6.2 below. Of course, the three approaches yield equivalent monodromy representations (at least when we have a good UV point as in the CFT context) and this statement summarizes many results in the math literature. From this point of view, the wall-crossing formulae are consistency conditions required for the monodromy representation, as computed in the IR/homological approach, to be a well-defined invariant of the UV fixed-point theory. We briefly review the asymmetric approach for the sake of comparison. 
Asymmetric approach (homological) One starts by rescaling the critical values where R is some positive real number. 39 The tt * flat connection becomes and the Berry curvature Having a determinate spectrum is a closed condition in the matrix space, while the having a Jordan block of size > 1 is open. 39 In the context of the 2d (2,2) LG model quantized in a cylinder, R is identified with the radius of the cylinder [3]. Alternatively, R is the 2d inverse temperature if we look to the path integral on the cylinder as the theory quantized on the line at finite temperature R −1 . JHEP12(2019)172 Then one takes the unphysical limit R → 0 with R/ζ fixed and large. The Berry curvature vanishes in the limit, so the metric connection A is pure gauge. The tt * linear problem (3.24) then (formally!) reduces to A solution to this equation, asymptotically for R/ζ large, is where the cycles Γ α are the supports of an integral basis of branes, say Lefshetz thimbles, and E i (z) holomorphic functions representing the rescaled unipotents E i in the chiral ring. Computing the r.h.s. by the saddle point method, one checks that it is indeed a fundamental solution to (3.48). The homology classes of the branes (with given ζ) are locally constant in coupling constant space X , but jump at loci where (in the 2d language) there are BPS solitons which preserve the same two supercharges as the branes. The jump in homology at such a locus is given the Picard-Lefshetz (PL) transformation [5,13,50,51]. Taking into account all the jumps in homology one encounters along the path (controlled by the 2d BPS spectrum), one gets the monodromy matrix which is automatically integral of determinant 1. The full monodromy representation is given by the combinatorics of the PL transformations. Dually, instead of the action of the monodromy group on the homology of branes we may consider its action in the cohomology of the (possibly multivalued) holomorphic n-forms The method is conceptually clear and often convenient. On the other side, the fact that we consider a limit which do not correspond to any unitary quantum system tends to make the physics somewhat obscure. For our present purposes the UV approach seems more natural. The UV approach ("CFT") This is the physical UV limit of the 2d model. Again one makes the redefinition (3.45) and sends the length scale R → 0. But now ζ is kept fixed at its original value, which may be a unitary one |ζ| = 1. From eq. (3.46) we see that in this limit the flat connection reduces to the Berry one The Berry connection then becomes flat in the limit, as eq. (3.47) indeed shows. Since the monodromy of the flat connection ∇ (ζ) + ∇ (ζ) is independent of R, the flat UV Berry connection should have the same monodromy modulo the subtlety with the size of the Jordan blocks mentioned after eq. (3.44). While the monodromy matrices (γ) as computed in the asymmetric (or IR) approach are manifestly integral, the monodromy matrices (γ) computed in the UV approach are manifestly unitary (since the Berry connection is metric). JHEP12(2019)172 This observation is a far-reaching generalization to the full non-Abelian tt * monodromy representation : π 1 (X ) → SL(d, C) of the formula for the relation between the 2d quantum monodromy as computed in the UV and in the IR, i.e. 
for the monodromy representation of the Abelian subgroup Z ⊂ π 1 (X ) associated to the overall phase of the superpotential W [5] UV monodromy where Q is the U(1) R charge acting on the Ramond vacua of the UV fixed point SCFT and S is the integral Stokes matrix of the tt * Riemann-Hilbert problem [5] |S ij + S ji | = 2 δ ij + # 2d BPS solitons connecting vacua i and j . (3.53) Instead the Jordan block structure is, in general, different between the two sides of the correspondence (3.52) as the examples in ref. [5] illustrate. This statements implies but it is much stronger than the strong monodromy [5]. 40 Indeed arithmetic subgroups of SL(d, Z) such that the spectrum of all elements consists of roots of unity have a very restricted structure. To give an explicit description of the UV Berry connection we need additional details on tt * geometry which we are going to discuss. Advanced tt * geometry I To compute the monodromy representation of the Vafa model in the UV approach we need a more in-depth understanding of tt * geometry. A first block of advanced tt * topics is discussed in this section. Most material is either new or presented in a novel perspective. The crucial issue is the notion of a very complete tt * geometry. What makes the UV approach so nice is its relation to the Kohno connections [52][53][54][55] in the theory of the braid group representation [11,56]. Flatness with respect to a Kohno connection may be seen as a generalization of the Knizhnik-Zamolodchikov equations [9]. In this section we go through the details of this beautiful relation. Since some statement may sound a bit unexpected to the reader, we present several explicit examples. tt * monodromy vs. the universal pure braid representation Following the strategy outlined in section 3.6.2, we rescale w i → Rw i and send R → 0 (note that if w i ∈X also Rw i ∈X for all R > 0, so the limiting point indeed lays in 40 The strong monodromy theorem is the same statement but restricted to a special element of the monodromy group, i.e. the quantum monodromy. The corollary claims that the property extends to the full group. JHEP12(2019)172 the closureX of the semi-simple domain). As we approach a fixed point of the RG flow the element [W] x ∈ R x becomes a multiple of the identity operator [4,5] and eq. (3.43) implies ∂Q → 0. Since Q is Hermitian, ∂Q → 0 as well, so that lim R→0 Q is a constant matrix. Naively, to get the UV Berry connection we just replace this constant matrix in the the basic tt * formula (3.40). However, this is not the correct way to define the R → 0 limit. The point is that the canonical trivialization becomes too singular in the UV limit: the chiral ring R is believed to be regular (even as a Frobenius algebra) in the UV limit but, since the limit ring is no longer semi-simple, its generators are related to the canonical ones by a singular change of basis. A trivialization which is better behaved as R → 0 is the natural one. We write A for the natural gauge connection. Starting from eq. (3.40), and performing the diagonal gauge transformation, we get where h l = e l 1/2 . Taking the limit R → 0, the second term in the r.h.s. of (4.1) becomes (locally) a meromorphic one-form f l,i (w j ) dw i invariant under w j → w j + c and w j → λw j with at most single poles when w l = w j for some l = j. In addition, its contraction with the Euler vector E has no poles. 
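For orientation, the connection being derived here has, in the very complete case discussed below, the standard Kohno (KZ-type) local form; a hedged sketch of the expression quoted in the next sentence is
$$D\;=\;\partial\;+\;\sum_{1\le i<j\le d} B_{ij}\;d\log\,(w_i-w_j)\,,$$
with the B_{ij} valued in End(V). The invariance under w_j → w_j + c and w_j → λ w_j and the simple poles at coincident critical values stated above are manifest in this form.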
Thus as R → 0 the Berry connection D ≡ D + ∂ should locally take the form where the entries of the matrix B ij are holomorphic functions of the w k − w l , homogeneous of degree zero. They should produce the correct quantum monodromy (3.52), so that exp 2πi where Q is the same matrix as in eq. (3.52) but now written in a different basis which makes it symmetric and trace-less. The simplest solution to these conditions is given by constant B ij matrices. However, the B ij cannot be just constant in general; indeed the matrices B ij are restricted by a more fundamental condition, namely that the connection (4.2) is flat, as predicted by the tt * equations. Note that this constraint on B ij arises from setting to zero the (2,0) part of the curvature, which vanishes for all R.

Complete and very complete tt * geometries

Let X be the essential coupling space of a 4-susy LG model with Witten index d. We write X̃ ⊂ X for the open domain (assumed to be non-empty and connected) in which the superpotential is a Morse function. 41

Configuration spaces

The configuration space C d of d ordered distinct points in the plane is The cohomology ring H * (C d , Z) is the ring in the d² generators The fundamental group P d = π 1 (C d ) is called the pure braid group on d strings. The configuration space of d unordered distinct points is the quotient space [11]. Its fundamental group, the braid group B d , is an extension of the symmetric group S d by the pure braid group. B d has a presentation with d − 1 generators g i (i = 1, . . . , d − 1) and relations (4.10)

(Very) complete tt * geometries

The critical value map is a holomorphic immersion by definition of "essential" couplings. We say that the tt * geometry is complete if, in addition, w is a submersion, hence a covering map. The notion of completeness is akin to the one for 4d N = 2 QFTs [57,58]; in particular, the 2d correspondent [59] of a 4d complete theory has a complete tt * geometry in the present sense. Equivalently, we may say that a tt * geometry is complete iff it is defined over the full Frobenius manifold X Frob of the associated TFT [8]. 42 Completeness is a strong requirement.

42 In general the perturbations of the model by elements of the chiral ring φ ∈ R are "obstructed", in the sense that the coupling is UV relevant and the perturbed theory develops Landau poles. In this case the TFT is still well-defined, but the tt * metric becomes singular for R less than a certain critical value Rc (from the formulation of tt * in terms of integral equations it is clear that a smooth solution always exists for R large enough [33], but nothing prevents a singularity from appearing at finite R). In practice, the tt * geometry being complete means that all chiral "primary" operators are IR relevant or marginal (non-dangerous).

The category of coverings of Y d is equivalent 43 to the category of B d -sets. We say that a complete tt * geometry is very complete iff the action of B d on the B d -set S which corresponds to the cover X̃ → Y d factors through the map β in (4.9) or, equivalently, if the canonical projection p : In this case we may view C d as the coupling space on which a group of "S-dualities" acts, given by the deck group of the cover C d → X̃ . We may pull back the vacuum bundle V → X̃ to a bundle over C d , which we denote by the same symbol V , and consider the tt * geometry on the configuration space C d .
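The flatness constraint on constant B ij matrices mentioned above is spelled out by the Kohno lemma quoted in the next subsection: the infinitesimal pure braid relations [B ij , B ik + B jk ] = 0 for distinct i, j, k and [B ij , B kl ] = 0 for disjoint pairs (the explicit display did not survive extraction, so these standard relations are assumed here). A minimal numerical check for one hypothetical candidate family, B ij = λ(E ii + E jj − E ij − E ji ), which also satisfies the symmetry condition (4.13):

```python
import numpy as np
from itertools import combinations, permutations

d, lam = 4, 0.37      # lam is an arbitrary test value

def E(i, j):
    M = np.zeros((d, d))
    M[i, j] = 1.0
    return M

def B(i, j):
    # hypothetical candidate family of constant Kohno matrices (purely
    # illustrative, not the B_ij of any specific tt* geometry)
    return lam * (E(i, i) + E(j, j) - E(i, j) - E(j, i))

def comm(X, Y):
    return X @ Y - Y @ X

# [B_ij, B_ik + B_jk] = 0 for i, j, k distinct
for i, j, k in permutations(range(d), 3):
    assert np.allclose(comm(B(i, j), B(i, k) + B(j, k)), 0)

# [B_ij, B_kl] = 0 for {i, j} and {k, l} disjoint
for (i, j), (k, l) in combinations(combinations(range(d), 2), 2):
    if not {i, j} & {k, l}:
        assert np.allclose(comm(B(i, j), B(k, l)), 0)

print("infinitesimal pure braid relations satisfied")
```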
In a very complete tt * geometry, pulled back to C d , the local expression (4.2) becomes global, since in this case the w i are global coordinates and the partials ∂ w i W define a global trivialization of the bundle R → C d . In the very complete case the entries of the matrices B ij are holomorphic functions on C d , homogenous of degree zero and invariant under overall translation, which satisfy (4.3). Assuming the r.h.s. of that equation to be well-defined (we take this as part of the definition of very complete), the matrices B ij should be constant. They are further constrained by the flatness condition D 2 = 0. This leads to the theory of Kohno connections [52,53] that we briefly review. Remark. An especially important class of very special tt * geometries are the symmetric ones; in this case the d × d matrices B ij satisfy (B π(i)π(j) ) π(k)π(l) = (B ij ) kl for all π ∈ S d . (4.13) This holds automatically when the map w in eq. (4.12) is an isomorphism. Khono connections In the very complete case the flat UV Berry connection D has the form (4.2) with B ij constant d × d matrices. We recall the Lemma (see e.g. [52][53][54]). Then D is flat if and only if the following relations hold JHEP12(2019)172 A rank-d Kohno connection defines a representation of the pure braid group P d in d strings via parallel transport with the connection D on the configuration space C d The family of representations σ parametrized by the matrices B ij satisfying the infinitesimal pure braid relations is called the universal monodromy [53]. If, in addition, eq. (4.13) holds the connection D descends to a flat connection on a suitable bundle V → Y d and yields a universal monodromy representation of the full braid group B d [52][53][54]. We conclude: For a very complete tt * geometry the UV Berry monodromy representation is the universal monodromy representation of the pure braid group P d in d string specialized to the Khono matrices B ij computed from the grading of the UV chiral ring R UV in superconformal U(1) R charges. If, in addition, the tt * geometry is symmetric the UV Berry monodromy representation extends to a representation of the full braid group B d . The condition of being very complete is really very restrictive for a tt * geometry. So, to convince the reader that we are not concerned with properties of the empty set, we present a few examples. They will be used later to illustrate various aspects of the theory. First examples of very complete tt * geometries We omit example 0, the free (Gaussian) theory. Note that its superpotential W (z) satisfies the ODE (d z W (z)) 2 = a W (z) + b. Example 1. All massive models with Witten index 2 are trivially very complete. The simplest instance is the cubic model W (z) = x 1 (z 3 − 3z)/2 + x 2 whose critical values are Example 2 (Mirror of 2d P 1 σ-model). This LG model has a superpotential W (z) which is a solution to the ODE dW dz The general form is 20) which shows that the model is very complete. JHEP12(2019)172 Example 3 (The Weierstrass LG model). Eq. (4.18) is replaced by dW dz 2 = P 3 (W ), P 3 (z) cubic polynomial. (4.21) and the superpotential becomes The coupling constant space is X =X where 44 The critical values are where e a (x 2 ) (a = 1, 2, 3) are the three roots of the Weierstrass cubic polynomial as a function of the period x 2 which are globally defined for x 2 ∈ H/Γ(2). Note that for The model is very complete and symmetric. Example 4 (A d = 4 model). 
We consider the elliptic superpotential with parameter space covered bỹ The four critical points correspond to the 2-torsion subgroup ( same notation as in the previous example. Since the e a (x 2 ) are distinct, we have the isomorphism One checks that S 4 is a "duality", so the actual coupling space isX /S 4 ∼ = Y 4 . The superpotential satisfies an ODE of the form dW dz of level 2. We recall that SL(2, Z)/Γ(2) ∼ = S3, the symmetric group in three letters. JHEP12(2019)172 Example 5 (Hyperelliptic models). The above three examples may be easily generalized to the case of LG models [5] whose superpotential W (z) satisfies the ODE Comparing with eq. (4.1), we see that the diagonal components of the UV Berry connection are while the off-diagonal components are given by the entries of the UV matrix Q. Since this matrix is constant, we may compute Q kl in the limit w k − w l → 0 whose effective theory is (4.18). The 2 × 2 matrix (Q ab ) a,b=k,l (k = l) is symmetric with zeros on the diagonal and eigenvalues ±ĉ/2 whereĉ is the Virasoro central charge of the effective theory, in this caseĉ = 1; hence The matrix B ij is J ij (λ) is called the Jordan-Pochhammer d×d matrix [53]. It is well-known that the Jordan-Pochhammer matrix satisfies the infinitesimal braid relations for all λ. Since it also satisfies the symmetry conditions, B ij = J ij (λ) defines a (reducible) representation of the braid group B d . This representation is conjugate to the usual Burau representation [10,11] over the ring Z[t, t −1 ] where t = −e 2πiλ . For t = 1, i.e. λ = 1 2 mod 1, the Burau representation factors through the symmetric group S d , so the braid representations from the UV Berry connection of the hyperelliptic models is somehow "trivial": where 1 k is the k × k unit matrix and σ 1 the usual Pauli matrix. Remark. Notice that very complete tt * geometries corresponds to 2d (2,2) models with no non-trivial wall-crossing phenomena. Generalities tt * geometry (in the domain where R x is semi-simple) may be stated in a fancier language [4,61]. Let F (x,x) (x ∈ X ) be a physical quantity which is susy protected, i.e. invariant under continuous deformations of the D-terms: all tt * quantities have this property. Since the w i are local coordinates in X , we may rewrite 45 F (x) in the form F (w 1 , · · · , w d ) where d is the Witten index which we assume to be finite. The functions F (w 1 , · · · , w d ) enjoy intriguing properties. First of all F (e iϕ w 1 + a, e iϕ w 2 + a, · · · , e iϕ w d + a) = F (w 1 , w 2 · · · , w d ) ∀ a ∈ C, ϕ ∈ R/2πiZ, (4.37) since w i → e iϕ w i + a corresponds to the trivial deformation of the F -terms which leaves invariant all physical quantities. The group C R/2πiZ in (4.37) is the 2d Euclidean Poincaré symmetry: in this regard the protected functions F (w 1 , · · · , w d ) behave as Euclidean d-point functions This idea may be made more sound if we choose the function F (w i ) in a clever way; we may find tt * quantities F which obey all the Osterwalder-Scharader axioms for the correlators of an Euclidean QFT except locality and statistics, i.e. univaluedness of the d-point functions. In other words, for an appropriate choice of the tt * quantity F , the only unusual feature of the would-be "operators" O k (w) in (4.39) is that, in general, they are not mutually local but rather have non-trivial braiding properties. 
The origin of this peculiar fact is easy to understand: tt * geometry states that suitable combinations of the susy protected quantities satisfy exactly the same PDEs as the correlation functions for (the scaling limit of) the offcritical Ising model (the Sato-Miwa-Jimbo isomonodromic PDEs [34][35][36][37][38][39][40][41]). The tt * functions differ from the actual Ising correlators only because the solutions of the PDEs relevant for a given susy model are specified by a different set of boundary conditions [4,5]. For an isomonodromic system of PDEs, the boundary conditions are encoded in the monodromy representation, i.e. in the braiding properties of the O k (w). Given the braid group action on the O k (w), the tt * geometry is fully determined. There is an obvious necessary condition for the existence of a fancy correspondence like (4.39). The tt * quantities F A (w i ) must have the following property: they should be regular when the w i are all distinct, but get singularities of the form JHEP12(2019)172 as two critical values coalesce together, w i − w j → 0. More precisely, when two critical values collide in the coupling constant space X we should see an emergent OPE algebra for the "operators" O j (w j ). To check that the condition holds, it is convenient to adopt the 2d perspective and work in the set-up where the (2,2) LG is quantized on the line R at a finite temperature T = R −1 . The infinite volume Hilbert space decomposes into subspaces H i,j of definite susy central charge Z: the sector H i,j is defined by imposing the boundary condition that the field configuration approaches the i-th classical vacuum (resp. the j-th one) as x → −∞ (resp. x → +∞). In H i,j the susy central charge is Z ij = 2(w i − w j ) [3,5,44], and the BPS states in H i,j (i = j), if any, have masses 2|w i − w j |. A typical protected quantity F A (w i ) may be computed by a periodic Euclidean path integral over the cylinder, and hence has the schematic form for some operator F A . Only BPS configurations contribute to susy protected quantities, so that where M-BPS stands for the set of BPS multi-particle H-eigenstates. The matrix element m|e −RH |m is suppressed by a factor e −2R|w i −w j | where the product is over the BPS particles in the state |m . The sum (4.43) is absolutely convergent if all the masses |Z ij | are non-zero, but it may get singular as w i − w j → 0, producing a power-law IR divergence in F A (w i ) of the general form (4.40). This is the only mechanism which may spoil regularity of F A (w i ) as a (multivalued) function of the w i . Since the coupling space OPEs (4.41) encode the monodromy representation, they fully determine the tt * geometry. Understanding the leading singularity as w i −w j → 0 amounts to know how many BPS species become massless in the limit w i − w j → 0 together with some tricky signs 46 (or, more generally, phases) in the comparison between O i (w i ) O j (w j ) and O j (w j ) O i (w i ). In particular, the limit is regular if and only if the net number of BPS solitons connecting the i-th and j-th vacua is zero. The idea of the reconstruction approach to tt * geometry is that in principle we can reconstruct a non-local QFT Q on the w-plane from the tt * quantities interpreted as certain combinations of (multi-valued) correlation functions. Conversely, if we have a putative nonlocal QFT Q we may compute the tt * quantities by standard field-theoretical technics. 
The Q-reconstruction strategy is potentially effective since we know that Q is a "free" theory in the sense that its amplitudes are computed by Gaussian path integrals [4]. w-plane OPEs The w-plane theory Q is modelled on the QFT describing the Ising model off-criticality. The basic degree of freedom is an Euclidean 2d Majorana 47 free spinor of mass R [4]. Locally on the w-plane the Lagrangian of Q may be written simply What makes Q non-trivial is the fact that Ψ is not univalued, but rather has complicated branching properties due to the insertion of topological defect operators O k (w k ) at the points corresponding to critical values of the superpotential W of the original LG model. Let us study the singularity in the OPE when u → w k . Let z * ∈ K be a critical point of W which is mapped to the k-critical value w k by the map W : K → C. Since R w is semi-simple, the superpotential is weakly 48 Morse, so that in a neighborhood U z * we may find local holomorphic coordinates z a such that Working in perturbation theory around the k-th classical vacuum z * , the situation is indistinguishable from free field theory to all orders. Hence locally we see the same singularities as in free field theory. The free-field behavior defines two possible defect insertions at w k : µ k and σ k . Their OPEs are [4] Ψ up to O(|u − w| 1/2 ) contributions. Hurwitz data and defect operators Although the tt * defect operators µ k (w), σ k (w) have the same OPE singularities with the fermion field Ψ(u) as the Ising order/disorder operators, they are not in general mere 47 Imposing the Majorana condition is equivalent to imposing that the vacuum wave-functions are real. While we may chose a real basis for the wave-function, this is different from the holomorphic basis one uses in tt * geometry. This change of basis makes the comparison of formulae a little indirect. The relation is From the viewpoint of tt * taking the fermion to be Dirac rather than Majorana may be more natural. 48 I.e. its critical points are non-degenerate but the critical values are not necessarily all distinct. JHEP12(2019)172 Ising order/disorder operators since globally they have different topological order/disorder properties. In other words, their insertion makes the multi-valued Fermi field Ψ(u) of the Q theory to have different monodromy properties. Let us see how this arises. The fermion Ψ(u) is univalued on a suitable connected coverΣ of the w-plane punctured at the positions {w k } of the defects (4.49) By the Riemann existence theorem [62], we may extendf over the punctures to a branched cover of Riemann surfaces, f : Σ → P 1 , branched at {w 1 , w 2 , · · · , w d , ∞}. In "good" models the order of the monodromy at ∞ is finite. Let us consider first the special case that the cover has a finite degree m. Then f is specified by its Hurwitz data at the (d+1) branching points [62]. The Hurwitz data consist of an element π k ∈ S m for each finite branch point w k , while π ∞ = (π 1 π 2 · · · π d ) −1 . The monodromy group 49 of the cover, Mon, (not to be confused with the tt * monodromy group Mon!) is the subgroup of S m generated by the π k 's Mon = π 1 , π 2 , · · · , π d ⊂ S m . (4.50) Since Σ is connected, Mon ⊂ S m acts transitively on {1, 2, . . . , m}. In other words, {π 1 , . . . , π d , π ∞ } is a constellation in S m [62]. We recall that the passport of a constellation [62] is the list of the conjugacy classes of its permutations π k . From the OPEs (4. 
48) we see that, for k = ∞ the conjugacy class corresponds to the partition conjugacy class of π k #critical points over w k 2 + 2 + · · · + 2 + 2 +1 + 1 + · · · + 1 = m. In particular, when R is semi-simple π k is an involution for k = ∞. In a complete 50 tt * geometry generically we have just one critical point over each critical value, and π k acts as a reflection in the standard representation of S m ; then for a semi-simple, complete tt * geometry with m finite, Mon is a finite rational reflection group, hence the Weyl group of a Lie algebra. In this case the order h of π ∞ is equal to the order of the adjoint action of the quantum monodromy, that is, to the smallest positive integer h such that hq s ∈ N for all s, where {q s } is the set of U(1) R charges of the chiral primaries at the UV fixed point of the 2d (2,2) Landau-Ginzburg QFT. Note that the cover f is Galois only for m = 2. When m is infinite the geometry is a bit more involved. One still has Mon = π 1 , π 2 , · · · , π d where (in the semi-simple case) the π k are involutions. But now Mon is an infinite group. In general, Fact 2. For a semi-simple tt * geometry, the topological defect operator O k (w k ) inserted at the k-th critical value (cfr. (4.39)) is specified by the choice between σ-type and µ-type and the involution π k ∈ Mon. 49 Also called the cartographic group [62]. 50 For a non-complete tt * geometry the following assertion is false. Complete tt * geometries and Coxeter groups For a complete tt * geometry, generically 51 π k (k = ∞) consists of just one 2-cycle (i k , j k ) interchanging the i k -th sheet of Σ with the j k -th one. The k-th classical vacuum is to be identified geometrically with the intersection of these two sheets. 52 The absolute number of 2d BPS solitons connecting the k-th and h-vacua is given by the number of sheets they share (4.52) Indeed, the map W : K → C factors through Σ, and the BPS solitons are just the lifts of the straight segment in P 1 with end points w k and w h to sheets of the cover Σ which contain both classical vacua. In particular, for a complete theory the number of 2d BPS solitons between any two vacua is at most 2. This can be easily seen directly. One has The fermionic part of the wave-function introduce some extra tricky minus signs. Let us illustrate our claims in some examples. Example 6. The simplest instance are the Ising n-point functions themselves. In this case σ, µ are Z 2 order/disorder operators and we do not need to distinguish them with the subfix k; in facts, in the Ising case m = 2 and all π k 's are the permutation (12). Σ is the hyperelliptic curve Remark. Ising n-point functions are not just complete tt * geometries, they are very complete. Indeed, examples 2, 3 and 4 correspond, respectively, to Ising n = 2, 3, 4 points. 51 An instance of the non-generic situation is described in example 10. 52 The precise sense of the identification will be clarified momentarily in sections 4.4.5, 4.4.6. JHEP12(2019)172 Example 7. A (2,2) minimal model of type g ∈ ADE is complete but not very complete for d ≡ rank g ≥ 3. The monodromy group Mon ∼ = Weyl(g), while Mon differs by a Z 2 flat bundle due to the aforementioned signs. For k = ∞, the involution π k is a reflection with respect to some root of g. The monodromy at infinity π ∞ belongs to the (unique) conjugacy class of the Coxeter element; its order h is the Coxeter number. 53 The relation π 1 · · · π r π ∞ = 1 is the usual expression of the Coxeter element in terms of simple reflections. 
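A concrete illustration of example 7, assuming g = A 4 so that Weyl(g) = S 5 and the π k can be taken to be the adjacent transpositions: the product π 1 · · · π r is a Coxeter element of order equal to the Coxeter number h = 5, and the order s kj of π k π j encodes the Dynkin-graph adjacency, anticipating the soliton-count rule s kj − 2 quoted just below.

```python
from itertools import combinations
from sympy.combinatorics import Permutation, PermutationGroup

r = 4                                                        # rank of g = A_4
pi = [Permutation(k, k + 1, size=r + 1) for k in range(r)]   # simple reflections in Weyl(A_4) = S_5

assert PermutationGroup(pi).order() == 120                   # Mon = Weyl(A_4) = S_5

# order of pi_k pi_j: 3 for adjacent Dynkin nodes, 2 otherwise, so the soliton
# count s_kj - 2 reproduces |C_kj| in the Dynkin chamber
for k, j in combinations(range(r), 2):
    s_kj = (pi[k] * pi[j]).order()
    assert s_kj - 2 == (1 if abs(k - j) == 1 else 0)

cox = Permutation(size=r + 1)
for p in pi:
    cox = cox * p
assert cox.order() == 5                                      # Coxeter number h of A_4
print("Coxeter element:", cox)
```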
Vacua k and j are connected by s kj − 2 BPS solitons, s kj being the order of π k π j . When d ≥ 3 we have non-trivial wall-crossing: the several inequivalent BPS chambers are in one-to-one correspondence with the integral quadratic forms [64] q( Z-equivalent to the Tits form of g, 1 2 C ij x i x j , where C ij is the Cartan matrix of g. In particular, there is a special BPS chamber with soliton multiplicities |µ ij | = |C ij | for i = j. In this special chamber Γ is just the Dynkin graph of g. Example 8. In the previous example we may replace the finite-dimensional Lie algebra g by an affine (simply-laced) Lie algebra g ∈ A D E. The tt * monodromy group Mon is again a Z 2 twist of Weyl( g), and the cover monodromy group Mon is a quotient of Weyl( g). For g = A 1 we get back the case n = 2 of example 6 and Mon = Weyl(A 1 ) ∼ = Weyl( A 1 )/Z. The discrepancy (modulo signs) between Mon and Mon expresses the fact that the affine LG models are asymptotically-free instead of having a regular UV fixed point [5]. The conjugacy class of π ∞ is the image of a Coxeter element c ∈ Weyl( g); in facts (π ∞ ) s.s. ∼ = e 2πiQ is the element of Weyl(g) obtained by reducing the action of c on the root lattice modulo the imaginary root. 54 In the A r case there are r inequivalent conjugacy classes of Coxeter elements [66], hence r inequivalent LG models whose π ∞ ∈ GL(r + 1, Z) satisfies (π p ∞ − 1)(π r+1−p ∞ − 1) = 0, p = 1, 2, . . . , r. (4.57) Their superpotential reads [57] W(z) = y p + y p−r−1 , p = 1, . . . , r. (4.58) The case A 1 coincides with example 1. Again the BPS chambers are related to quadratics forms Z-equivalent to the Tits form of the affine Kac-Moody algebra, but the relation is no longer one-to-one. JHEP12(2019)172 of nullity κ and type g r ∈ ADE. These Lie algebras are central extensions of the Lie algebra of maps (S 1 ) κ → g r ; for κ = 1 we get back the affine Kac-Moody algebra g and for κ = 2 the toroidal Lie algebras. The EALA A (1,1,...,1) 1 corresponds to the Ising (κ + 1)point functions: in this special case the tt * geometry is very complete, not just complete, and Γ reduces to the A 1 Dynkin graph. The role of the EALA's in the classification of complete 4d N = 2 QFTs is outlined in ref. [69]. E.g. D (1,1) 4 corresponds to N = 2 SU(2) SYM coupled to N f = 4 fundamentals; the corresponding 2d (2,2) complete model has superpotential where P 4 (z) is a polynomial of degree 4 coprime with the denominator. Example 10. Let us consider the minimal A 2m−1 models at a maximally non-generic point in coupling space. We take W(z) to be proportional to the square of the m-th Chebyshev polynomial, 55 W(z) = ∆w T m (z) 2 . The superpotential is a Belyj function with Grothendieck dessin d'enfants [70][71][72] • Erasing the two black nodes at the ends -which do not correspond to susy vacua -we get back the Dynkin graph of A 2m−1 . The SQM monodromy group Mon coincides with the cartographic group of the dessin (4.61): it is generated by two involutions π • , π • ∈ S 2m ≡ Weyl(A 2m−1 ) associated to the black/white nodes where s j ∈ Weyl(A 2m−1 ) is the j-th simple reflection. It is well-known that π ∞ = π • π • ∈ Weyl(A 2m−1 ) is a Coxeter element. The case of one chiral field The relation between the non-local QFT Q on the w-plane and the 4-supercharge SQM is especially simple when the superpotential depends on a single chiral field z, W (z; w 1 , · · · , w d ). 
The actual Schroedinger wave-function of the i-th vacuum (the one which corresponds to the idempotent e i ∈ R w under the isomorphism V w ∼ = R), written as a one-form through the ζ-dependent identification is (for |ζ| = 1) 55 Up to fields redefinitions, it is the same as the model with the Chebyshev superpotential T2m(z). (4.65) It is easy to check that the free massive Dirac equation satisfied by Ψ ± (z) is equivalent to the zero-energy Schroedinger equation for ψ i (z; ζ). The Hurwitz data should be chosen so that ψ i (z; ζ) is univalued in z for the given W (z). The exact brane amplitudes then have the form where the relative one-cycle Γ is the support of the brane. Notice that i|Γ, ζ depends only on the image of Γ in the curve Σ, i.e. in the smallest branched cover of the W -plane on which the wave-functions are well defined. If the Stein manifold K is one-dimensional, we may lift the condition that the chiral ring R x is semi-simple. In facts, by the Chinese remainder theorem, 56 we have where the product is over the distinct zeros of ∂ z W and υ k are their orders. In correspondence to the critical value w k we have to insert in the w-plane a topological operator which introduces a υ k -th root cut instead of a square-root cut as in the semi-simple case (υ k = 2). The two-fold choice of spin operators µ, σ gets replaced by a (υ k + 1)-fold choice of topological insertions τ s k (s k = 1, . . . , υ k + 1), to be supplemented by an element π k ∈ S m of order (υ k + 1). One easily checks that, with these prescriptions, eq. (4.64) reproduces the correct vacuum wave-functions whenever they are known from other arguments [17]. N -fields: a formula for the brane amplitudes Suppose now that we have a complete LG model with N chiral fields, i.e. dim K = N . The inverse image of a point w in the W -plane has the homotopy type of a bouquet of (N − 1)-spheres [5,13,74]; we fix a set of (N − 1)-cycles S α (w) (α = 1, . . . , d ≡ the Witten index) which form a basis of the homology of the fiber. The SQM wave-function of a susy vacuum Ψ is a N -form on K so ψ α (w) = is a d-tuple of one-forms on the W -plane. If we transport the homology cycles S α (w) along a closed loop in the W -plane (punctured at the critical values) we come back with a different (integral) basis of (N − 1)-cycles S α (w) = N αβ S β (w). The integral matrix N αβ is described by the Picard-Lefshetz theory [5,50,51]. Thus (4.68) is best interpreted as a single but multi-valued wave-function ψ(w) on the W -plane branched at the critical points whose monodromy representation is determined by the Picard-Lefshetz formula in terms of the intersection matrix S α · S β , i.e., in physical terms [5,13], by the BPS spectrum of the corresponding 2d model. Let Σ be the minimal branched cover of the W -plane such 56 See footnote 11. JHEP12(2019)172 that ψ(w) is uni-valued (Σ is then automatically Stein [25]). Clearly the map W : K → C factorizes through Σ. Let Γ be the image in Σ of the support of the brane B. Then where, in terms of Q-amplitudes (|ζ| = 1), In other words, an N -field LG model with a Morse superpotential may be replaced by a one-field LG model with K = Σ and superpotential w : Σ → C given by the factorization of W through Σ. τ -functions vs. brane amplitudes From the isomonodromic viewpoint, the most important susy protected function is the τ -function [4,[34][35][36][37][38][39][40][41] i.e. 
the d-point function of the would-be order operators τ (w j ) is just the partition function of a free fermion with the non-trivial monodromy properties implied by the insertions of the µ's at the points w j . Stated in a different language, it is the partition function of a free massive fermion on Σ with suitable boundary conditions at w k and infinity. The τ function may be recover from the vacuum wave ψ(u; w j ) by quadratures [4]; geometrically τ is given by the formula where g is the tt * metric in the canonical bais and K is the Kähler potential for the metric on the coupling space X [4] Example 11. Consider the case of just two vacua, and let with −1 ≤ r ≤ 1. r characterizes the regular solution completely [75][76][77][78]. Changing the sign of r just interchange order and disorder; we fix our conventions so that r is non-negative. For a regular arithmetic solution r is the rational number such that [5] 2 sin πr 2 = # BPS solitons in 2d ⇒ 0 ≤ r ≤ 1. UV limit: the Q conformal blocks In the physical 2d (2,2) LG model, the UV limit consists in sending to zero the radius R of the circle S 1 on which we quantize the theory. But R is also the mass of the Majorana fermion in the Q theory, see eq. (4.45). Hence the physical UV limit of the 2d LG model coincides with the UV limit of the Q theory on the w-plane. As R → 0 the Q theory gets critical, the left and right modes of the fermion Ψ(u) decouple, and the multi-valued would-be correlation functions (4.39) become sums of products of bona fide left/right conformal blocks. The statement holds (roughly) for all tt * quantities: in the UV they become some complicate combination of conformal blocks. Then the differential equations they satisfy -the tt * equations -should be related in a simple way to the PDEs for the conformal blocks: the analogue of the BPZ equations for the conformal blocks of the (p, q) minimal models [80] and Knizhnik-Zamolodchikov equations for the 2d current algebra [9]. Both sets of equations define flat connections and monodromy representations. 57 As already mentioned, they are specializations of the universal Kohno monodromy [53]. From this viewpoint the fact that in the UV limit the Berry connection (≡ the tt * Lax one in the limit) has the Kohno form -typical of the monodromy action on conformal blockscomes as no surprise. That things should work this way is somehow obvious in the case of example 6 where the tt * geometry describes correlations of the Ising model off-criticality: 57 See e.g. chap. XIX of [81]; for the minimal model case, see [82]. JHEP12(2019)172 sending the mass to zero R → 0, we just get the critical Ising model (≡ the minimal (4,3) CFT), and the PDEs of the massive case should reduce to the conformal ones. In connecting the tt * monodromy with the braid representation of Q blocks, we need to use the precise disctionary between the two. From eq. (4.70) we see that brane amplitudes, being normalized, are to be seen as ratios of n-point functions in Q theory Ψ ± (z)µ(w 1 ) · · · σ(w j ) · · · µ(w n ) µ(w 1 ) · · · µ(w n ) (4.81) rather than correlators. Hence the actual braid representation on the Q theory operators is the tt * one twisted by the one defined by the τ -function. In this way one solves an apparent problem with example 5: there the tt * UV Berry monodromy factorizes through S n , whereas the braiding action of Ising blocks do not. Taking into account the twist by τ solves the problem. 
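Reading the garbled display above as 2 sin(πr/2) = #(2d BPS solitons), the allowed values of r follow from elementary arithmetic; in particular r = 1, the two-soliton case, is the Ising point used in the example below. A trivial check:

```python
from math import asin, pi

# r is fixed by the 2d BPS soliton count through 2 sin(pi r / 2) = n
for n in (0, 1, 2):
    r = 2 * asin(n / 2) / pi
    print(n, round(r, 6))    # 0 -> 0, 1 -> 1/3, 2 -> 1 (the Ising case)
```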
Moreover, the ratio (4.81) does not correspond to the amplitude written in a holomorphic trivialization of the vacuum bundle V . We continue example 11. Example 12. The asymptotics of the amplitudes (4.79), (4.80) as L ∼ 0 is [79] µ (4.82) So in the limit R → 0 the second correlation vanishes, whereas the first one becomes the CFT 2-point function (4.86) Setting r = 1 we recover well known properties of the Ising model. For the left-movers of the critical Ising model we have 58 From the OPEs (4.48) we see that the two fields have the same dimension. JHEP12(2019)172 The integral is regular as w 1 → w 2 and is a constant. Hence The w-plane CFT method works better if the underlying tt * geometry is very complete (as in the Ising cases). This leads to the idea of computing the tt * monodromy representation by guessing the "CFT Q" on the w-plane. However to do so one has to establish a precise dictionary between correlators in the Q theory and tt * quantities. Relation with sl(2) Hecke algebra representations We need to look more in detail to the matrices B ij in the UV Berry connection (4.2) for a very complete tt * geometry. We already computed them for example 5. We consider the B ij 's from the point of view of the Q theory on the w-plane. We put ourselves in the generic case, where the critical values w k are all distinct, although the argument goes through even without this assumption. 59 Since B ij is the residue of the pole of A as w i − w j → 0, we focus on this limit from the viewpoint of the 2d (2,2) LG model. Without loss, we may deforme the D-terms so that the only light degrees of freedom are the BPS solitons interpolating between vacua i and j of mass 2|w i − w j |. We may integrate out all other degrees of freedom, and we end up with an effective IR description with just these two susy vacua. 60 From the viewpoint of SQM, the 2d BPS solitons look BPS instantons. The effect of these BPS instantons is to split the two vacua not in energy as it happens in non-susy QM -vacuum energy is susy protected! -but in the charge q of the U(1) R symmetry which emerges in the w i − w j → 0 limit. In this limit there is also an emergent Z 2 symmetry interchanging the two "classical" 61 vacua, so the U(1) R eigenstates should be the symmetric and anti-symmetric linear combinations of the two "classical" vacua. Their charges q should be opposite by PCT, and may be computed from eq. (4.78) 2 sin(πq) = ± # BPS instantons . (4.90) Two simplify the notation, we renumber the w k so that w i , w j become w 1 and w 2 . Then, with a convenient choice of the relative phases of the two states, the upper-left 2 × 2 block of the matrix Q takes the form (4.91) 59 Cfr. example 10. 60 A theory with just 2 vacua is not UV complete if the number of BPS species connecting them is more than 2 [57], but here UV completeness is not an issue since we use the two-vacua theory just as an effective low-energy description valid up to some non-zero energy scale. 61 By "classical" vacua we mean the quantum vacua which under the isomorphism R ∼ = V correspond to the idempotents of the chiral ring. JHEP12(2019)172 This formula holds for the canonical trivialization; in a "natural" trivialization we have a shift by a constant multiple of 1. To be fully general, we allow a shift of the r.h.s. of (4.91) by µ Example 13. For the mirror of the P 1 σ-model (example 1) one has At each of the two critical points w 1,2 we have a two-fold choice: we may insert either a σ-like defect or a µ-like one. 62 From eq. 
(4.64) we see different choices correspond to different vacua of the original LG model. The matrix σ 1 in (4.91) has the effect of flipping σ ↔ µ in the two-vacua system. It is therefore convenient to introduce a two-component notation and write the UV Q-amplitudes (conformal blocks) for the effective two vacua theory in the form where V (a) ∼ = C 2 , a = 1, 2, are two copies of the representation space of sl(2, C). Notice that the amplitudes span only a subspace of V (1) ⊗ V (2) of dimension 2, two linear combinations vanishing since they are bounded holomorphic functions on the cover Σ which vanish at infinity. Acting on these blocks, the matrix (4.91) reads where σ (a) is the Pauli matrix acting on the (a)-copy V (a) of C 2 and λ is a constant. We are led to conclude that the UV Berry connection D of a very complete tt * geometry with d susy vacua must have the general form acting on sections of a bundle V → X whose fibers are modelled on the vector space s (a) being the su(2) generators which act on the V (a) ∼ = C 2 factor space, i.e. s (a) = 1 ⊗ · · · ⊗ 1 ⊗ a-th 1 2 σ ⊗1 ⊗ · · · ⊗ 1, = 1, 2, 3. (4.99) 62 It is convenient to make complex the Ising fermion Ψ [4]; then the two-fold choices corresponds to the two components of the spin operator for the free fermion system. JHEP12(2019)172 The natural connection on the "Q conformal blocks" may differ from D by a line bundle twisting; for instance, in the Ising case we have the normalization factor µ(w 1 ) · · · µ(w g ) −1 . This corresponds to replacing D → D + 1 · d log f for some multivalued holomorphic function f , the "normalization factor". We shall omit this term which may be easily recovered using the reality constraint. The actual brane amplitudes live in a rank d sub-bundle V of the rank 2 d bundle V. The tt * Lax equations requires this sub-bundle to be preserved by parallel transport with the connection D. To see this, consider the total angular momentum The constants λ i,j , µ i,j in eq. (4.97) are restricted by two conditions: 1) D is flat acting on the sub-bundle V ; 2) the monodromy representation of D is "arithmetic". If, in addition, the very complete tt * geometry is symmetric: 3) the constants λ i,j , µ i,j should be independent of i, j. The Knizhnik-Zamolodchikov equation A well-known solution to condition 1) is [52,53]: that is, This connection is automatically flat for all λ when acting on sections of the big bundle V since its coefficients are given by the universal sl(2) R-matrix [83]; 3) is also satisfied. We shall see momentarily that condition 2) reduces to λ ∈ Q. On the other hand, it is easy to check that the only symmetric (i.e. independent of i, j) solution to the flatness condition for a connection of the form (4.97) is given by eq. (4.102). We see this observation as the basic evidence than a symmetric very complete tt * geometry has a UV Berry connection of the form (4.103). JHEP12(2019)172 Since D is flat on the larger bundle V, the (physical) UV limit of the tt * linear problem, 63 DΨ = 0 with Ψ ∈ Γ(X , V ), may extended to the big bundle This equation is the celebrated sl(2) Knizhnik-Zamolodchikov for the d-point functions in the 2d WZW model with group SU(2) [9]. In that context λ is quantized in discrete values for the 2d SU(2) current algebra at level κ. Since the connection (4.103) is invariant under the symmetric group S d , the representation σ of P d given by its monodromy extends to a representation of the full braid group B d in d strings. 
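The explicit Knizhnik-Zamolodchikov form (4.102)-(4.103) did not fully survive extraction; assuming the standard sl(2) coefficients Ω ij = Σ a s a^(i) s a^(j) on (C²)^⊗d, the sketch below checks numerically (for d = 4) that they satisfy the infinitesimal pure braid relations, so the connection is flat for every λ, and that they commute with the total angular momentum, which is what makes the eigen-bundles V l,m discussed next invariant under parallel transport.

```python
import numpy as np
from itertools import combinations
from functools import reduce

sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
d = 4

def spin(a, i):
    """s_a acting on the i-th factor of (C^2)^{x d}."""
    mats = [np.eye(2)] * d
    mats[i] = sigma[a] / 2
    return reduce(np.kron, mats)

def Omega(i, j):
    return sum(spin(a, i) @ spin(a, j) for a in range(3))

def comm(X, Y):
    return X @ Y - Y @ X

# infinitesimal pure braid relations: the connection is flat for every lambda
for i, j, k in combinations(range(d), 3):
    assert np.allclose(comm(Omega(i, j), Omega(i, k) + Omega(j, k)), 0)
    assert np.allclose(comm(Omega(i, k), Omega(i, j) + Omega(j, k)), 0)
    assert np.allclose(comm(Omega(j, k), Omega(i, j) + Omega(i, k)), 0)
for (i, j), (k, l) in combinations(combinations(range(d), 2), 2):
    if not {i, j} & {k, l}:
        assert np.allclose(comm(Omega(i, j), Omega(k, l)), 0)

# sl(2)_diag invariance: the coefficients commute with the total angular momentum,
# so the eigen-bundles V_{l,m} are preserved by parallel transport
L = [sum(spin(a, i) for i in range(d)) for a in range(3)]
for i, j in combinations(range(d), 2):
    for a in range(3):
        assert np.allclose(comm(Omega(i, j), L[a]), 0)
print("KZ coefficients: flatness and sl(2) invariance verified")
```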
A representation of this form is called a Hecke algebra representation of B d [53,85] since it factorizes through a (Iwahori-)Hecke algebra [11]. By the argument leading to eq. (4.101), parallel transport by the Knizhnik-Zamolodchikov connection D preserves the tt * vacuum sub-bundle V ⊂ V. In facts more is true: indeed, D preserves all eigen-bundles V l,m ⊂ V of given total angular momentum Comparing with (4.101) The fiber of V d/2,1−d/2 is spanned by a unique vacuum preserved by the monodromy representation. It has the properties expected for the preferred vacuum |vac of section 2.3. As we shall see momentarily, the monodromy representation of the braid group B d defined by restricting the Knizhnik-Zamolodchikov connection to the tt * sub-bundle V is isomorphic to the Burau one [11,86]. Remark. The identification of D| V with the UV Berry connection of a very complete tt * geometry entails that its monodromy representation is unitary. It is known that the Burau representation is unitary [11,87]. Hecke algebra representations The presentation of the Artin braid group B d is given in eq. (4.10). Let q ∈ C × . The Hecke algebra of the symmetric group S d , H d (q), is the C-algebra [11] with generators 1, g 1 , g 2 , · · · , g d−1 (4.110) 63 The equations do not contain the twistor parameter ζ any longer. Indeed, ζ is essentially the phase of the susy central charge, but in the superconformal algebra which emerge in the UV the central charge should be the zero operator by the Haag-Lopuszański-Sohnius theorem [84]. JHEP12(2019)172 and relations H d (1) is simply the group algebra C[S d ] of the symmetric group S d . If q is not a root of unity of order 2 ≤ n ≤ d, H d (q) is semisimple [11] and its simple modules are q-deformations of the irreducible representations of S d . If q is a non-trivial root of unity new interesting phenomena appear [88]. Comparing eqs. (4.10), (4.111) we see that the correspondence σ i → g i yields an algebra homomorphism : The Temperley-Lieb algebra A d (q) [14,15] is the C-algebra over the generators 1, e 1 , · · · , e d−1 satisfying the two relations Theorem (see [52]). The monodromy representation of the flat connection (4.103) is a Hecke algebra representation of the braid group B d which factorizes through the Temperley-Lieb algebra A d (q) with q = exp(πiλ) (4.116) given by the correspondence Comparing with the third of eq. (4.111) we see that the σ i are semi-simple and eigenvalues of the monodromy generator The transport of the i-th defect operator around the (i + 1) one corresponds to the square of σ i and has spectrum q 1/2 , q −3/2 . By arithmeticity of tt * , in the present context requires these eigenvalues to be roots of unity, that is, q ∈ µ ∞ . (4.120) Hence the Hecke algebra representations which appear in tt * are the tricky ones at q a root of 1. We have recovered from tt * the known action of the braid group B d on the Ising d-point functions at criticality. In particular they are flat sections of the connection in eq. (4.103) with λ = 3/2. Chern-Simons, Jones polynomials, minimal models, etc. The braid group actions which factor through representations of the Temperley-Lieb at q a root of unity are ubiquitous in mathematical physics. They describe the monodromy of the conformal blocks of two-dimensional SU(2) current algebra at level k [9,54]. 
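The explicit Temperley-Lieb relations were not reproduced above; in one common normalization they read e i ² = (q + q −1 ) e i , e i e i±1 e i = e i and [e i , e j ] = 0 for |i − j| ≥ 2 (the loop parameter may be normalized differently in the text). A symbolic check in the standard representation on (C²)^⊗3, built from the U_q(sl2) singlet projector, which is the kind of representation relevant for the braiding of the Q conformal blocks:

```python
import sympy as sp

q = sp.symbols('q')

def kron(A, B):
    """Kronecker product of two sympy matrices."""
    m, n = A.shape
    p, r = B.shape
    K = sp.zeros(m * p, n * r)
    for i in range(m):
        for j in range(n):
            K[i * p:(i + 1) * p, j * r:(j + 1) * r] = A[i, j] * B
    return K

# Temperley-Lieb generator on two neighbouring C^2 factors (proportional to the
# U_q(sl2) singlet projector); loop parameter delta = q + 1/q in this normalization
E = sp.Matrix([[0,   0,  0, 0],
               [0, 1/q, -1, 0],
               [0,  -1,  q, 0],
               [0,   0,  0, 0]])

I2 = sp.eye(2)
e1 = kron(E, I2)          # e_1 acts on strands 1, 2 of (C^2)^{x 3}
e2 = kron(I2, E)          # e_2 acts on strands 2, 3

delta = q + 1/q
zero = sp.zeros(8, 8)
assert sp.simplify(e1*e1 - delta*e1) == zero
assert sp.simplify(e2*e2 - delta*e2) == zero
assert sp.simplify(e1*e2*e1 - e1) == zero
assert sp.simplify(e2*e1*e2 - e2) == zero
print("Temperley-Lieb relations verified")
```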
Due to the relation of 2d current algebra with 3d Chern-Simons [89], they also describe the braiding properties of Wilson loops in SU(2) Chern-Simons theory, and hence are the representations relevant for the Jones polynomials [85] and the theory of the quantum groups [81]. In addition, they also describe the braiding properties of the (p, q) Virasoro minimal models [82]. In the Virasoro (p, q) minimal models the operator φ r,s has dimension The braiding the operator φ 1,2 (which for (p, q) = (4, 3) reduces to the spin field σ of example 14) correspond to the Temperley-Lieb algebra with q ≡ e πi λ = e 2πi q/p . (4.125) The fusion rule of φ 1,2 is similar to eq. (4.122) One has corresponding to eq. (4.119) with q = e 2πi q/p . The Knizhnik-Zamolodchikov equations is also related to an integrable statistical model, the Gaudin model of type gl(2) [12,90]. This is discussed in the next subsection. Comparison with the asymmetric limit tt * geometry predicts that the monodromy representations defined by the asymmetric limit of the brane amplitude and the UV Berry connection are "essentially" the same (and isomorphic when the UV limit is regular). For a very complete tt * geometry, where π 1 (X ) = B d , the asymmetric limit monodromy yields a so-called homology representation of the braid group, a.k.a. the Lawrence-Krammer-Bigelow (LKB) representation [47,91,92], see also [11,55,96,97]. In facts, it is known that the LKB representation is essentially equivalent to the monodromy of the Knizhnik-Zamolodchikov connection. In this section we limit ourselves to sketch the relation between the two points of view on the monodromy. There exist explicit integral representations of the solutions to the Knizhnik-Zamolodchikov equations of the schematic form [93,94] where Φ, known as the master function, has the same functional form as the superpotential W but with "renormalized" couplings in general. Γ is a basis of relative cycles (which may be chosen to be Lefschetz thimbles). In the limit λ → ∞ the integral may be evaluated by the saddle point. The saddle point equations coincide with the algebraic Bethe ansatz equations of an integrable model [95], the Gaudin model of type gl(2). Φ(z; ∞) is the corresponding Yang-Yang functional. The vector A evaluated at the saddle point is a common eigenstate of the Gaudin Hamiltonians [12,90] The integral expression (4.129) may be identified with the asymmetric limit brane amplitude, taking into account the redefinitions of the couplings in relating the different limits. Puzzles and caveats Some aspects of the previous discussion look a bit puzzling because they do not fit in the intuition one gets from the study of "typical" 4-susy models, i.e. Landau-Ginzburg models whose superpotentials are entire functions on C n . Except for a couple of instances, the very complete tt * geometries do not belong to the "typical" class of LG models. The very special tt * geometries have no non-trivial wall-crossing, and this aspect implies quite JHEP12(2019)172 peculiar properties when d > 2. Luckily, we have a nice theoretical laboratory to study these issues, namely the Ising d-point functions. We know that the Ising functions exist and define a totally regular tt * geometry, so naively unexpected facts which do occur for Ising functions should not be regarded as "strange" but rather as archetypical of very complete tt * geometries not arising from entire superpotentials. Let us discuss the puzzling aspect in the context of the d-point Ising function. 
The corresponding tt * brane amplitudes are the solution to the Riemann-Hilbert problem in twistor space with Stokes matrix S = 1 + A, where A is the strictly upper triangular d × d matrix Note the identity S −1 = U SU −1 with U = diag{(−1) i }, which implies |S ij | = |S −1 ij |, a manifestation of the fact that these T T * geometries have no wall-crossing. As in example 9, the symmetric matrix C = S t + S, (i.e. minus the 2d quantum monodromy [5]). From eq. (4.132) it is clear that Cox acts as the identity on Γ im , while it acts as ±1 on Γ root /Γ im . Since det Cox = (−1) d , we conclude that Cox = (−1) d . Thus for d odd Cox is semi-simple with minimal polynomial z 2 − 1, and the radical of the skew-symmetric form 64 S t − S has rank 1; for d even all eigenvalues of the Coxeter elements are +1 and Cox has a single non-trivial Jordan block of size 2. From the arguments at the end of section 4.3 in [5] means that for d odd the model behaves as a UV superconformal theory, and for d even as an asymptotically-free QFT. This matches the physics of the first few d's in terms of the 4d N = 2 QFT which corresponds to the Ising correlators in the sense of refs. [57,59]: in 4d d = 1 leads to a free hyper, d = 2 to pure N = 2 SYM with G = SU(2), and d = 3 to SU(2) N = 2 * SYM. For d > 3 things become less obvious and we get the aforementioned puzzles. The 2d quantum monodromy is H ≡ −Cox; its eigenvalues are identified with e 2πiq R [5], where q R are the U(1) R charges of the Ramond vacua of the SCFT emerging at the UV fixed point. For d even we get q R = 1 2 mod 1 for all Ramond vacua, while for d odd there is in addition a single Ramond vacuum with q R = 0 mod 1. The Ramond U(1) R charges should JHEP12(2019)172 be distribute symmetrically around zero by PCT, so the natural conclusion is that we have [d/2] Ramond charges q R = − 1 2 , [d/2] Ramond charge q R = + 1 2 and for d odd one q R = 0. For d ≥ 4 this looks odd since one expects the largest q R to have multiplicity 1 because it should correspond to a spectral flow of the identity operator. However this argument can be circumvented in several ways: e.g. one may think that some couplings get weak in the UV and the fixed point consists of several decoupled sectors; in this case in the UV limit we may get a distinct spectral flow of the identity for each decoupled sector. We adopt the following attitude: we know for certain that the Ising d-point functions exist and are pretty regular; this proves beyond all doubt that the tt * geometry with Stokes matrix (4.131) does exist and is well behaved, even if does not fit in the intuition from experience with LG models whose superpotential is an entire function in C N . There is another issue which may look tricky. Let us consider the one-dimensional subspace of U ⊂ X given by w i = λ w 0 i for some fixed (generic) w 0 i . The pulled-back connection dλ λ i<j B ij (4.135) should agree with the connection along the "RG flow". For the d-point Ising functions the matrix i<j B ij does not coincide with the UV limit of the new index Q as one would expect. However, when comparing two connections we should content ourselves to check that they are gauge-equivalent, not identical. For flat connections this mean they need to have the same monodromy up to conjugacy. In addition, we need to remember that to get a nice UV limit we twisted the vacuum bundle by factors of the form which holds identically. 
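The explicit entries of the strictly upper triangular matrix A were lost in extraction; the choice A ij = 2(−1)^{j−i} for i < j (two BPS solitons between any pair of vacua, with alternating signs) reproduces the quoted identity S −1 = U S U −1 and is used below as a plausible reconstruction. With it one can check numerically the statements about the quantum monodromy: for d odd it is semi-simple with eigenvalues ±1, for d even all eigenvalues equal −1 with a single Jordan block of size 2 (sign conventions relating Cox and the quantum monodromy aside).

```python
import numpy as np

def stokes(d):
    """S = 1 + A with A_ij = 2(-1)^(j-i) for i < j (reconstructed soliton data)."""
    S = np.eye(d, dtype=int)
    for i in range(d):
        for j in range(i + 1, d):
            S[i, j] = 2 * (-1) ** (j - i)
    return S

for d in (2, 3, 4, 5):
    S = stokes(d)
    U = np.diag([(-1) ** (i + 1) for i in range(d)])                 # U = diag{(-1)^i}
    assert np.allclose(np.linalg.inv(S), U @ S @ np.linalg.inv(U))   # S^{-1} = U S U^{-1}

    H = np.linalg.inv(S.T) @ S                         # 2d quantum monodromy (conventions aside)
    eig = np.sort_complex(np.linalg.eigvals(H))
    assert np.linalg.matrix_rank(H + np.eye(d)) == 1   # eigenvalue -1 has geometric multiplicity d-1
    if d % 2 == 1:
        assert np.allclose(H @ H, np.eye(d))           # semi-simple, eigenvalues {+1, -1, ..., -1}
    else:
        assert np.allclose(eig, -np.ones(d))           # all eigenvalues -1, one Jordan block of size 2
    print(d, np.round(eig, 3))
```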
Relation with the Vafa proposal for FQHE Let us compare the above discussion of the effective Q theory in the w-plane with the Vafa proposal for FQHE. As stated in the introduction around eq. (1.4), the microscopic dynamics of the N electrons should produce an effective QFT for the quasi-hole "fields". Now we have a natural candidate for the effective macroscopic description expected on physical grounds, namely the Q theory. The idea is that (1.4) and (4.39) should be identified. The identification works provided the tt * geometry is very complete, so that the spaces in which we insert the operators h(w) and O(w) are identified. Thus, to close the circle of ideas, it remains to show that the Vafa models have very complete tt * geometries (and work out their specific details). Before going to that, we need to develop some other tool in tt * geometry specific to the LG model of the Vafa class. Symmetry and statistics For simplicity, in this subsection we assume the target space of our LG model to be C N . Suppose the superpotential is invariant under permutations of the chiral fields S N is a symmetry of the SQM, so the vacuum space carries a unitary representation of the symmetric group. Hence the vacuum bundle V and the chiral ring R ⊂ End V have parallel orthogonal decompositions The component associated to the trivial representation, R s , is a ring, while for all η, R η is a R s -module. The linear isomorphism R where a is the sign representation. This follows from the explicit form of the isomorphimsm 4) and the fact that dz 1 ∧ · · · ∧ dz N belongs to the sign representation. We define the Fermi (resp. Bose) model to be the one obtained by restricting V to its symmetric (resp. antisymmetric) component V s (resp. V a ). To call "fermionic" the model having symmetric wave-functions may look odd. To justify our definition let us count the number of ground states in an important special case. Special case: N non-interacting copies Suppose we have a one-particle superpotential W (z) with d vacua and let f j (z) (j = 1, . . . , d) be a set of holomorphic functions giving a basis for the one-particle chiral ring. We consider N non-interacting copies of the model The chiral ring of the N -particle model is R = ⊗ N R 1 with R 1 the one-particle chiral ring. Then If dim R 1 = d, the number of anti-symmetric resp. symmetric elements of R is JHEP12(2019)172 which correspond, respectively, to Fermi and Bose statistics. Using the basic spectral-flow isomorphism R ∼ = V , we get the linear isomorphisms 8) and the tt * metric, connection and brane amplitudes are the ones induced from the corresponding one-particle quantities. The group S N acts on the sub-spaces of ⊗ N V 1 as Remark. Eqs. (5.7) remain true if we add to the superpotential (5.5) arbitrary supersymmetric interactions (which do not change the behaviour at infinity in field space) since the dimension of the chiral ring is the Witten index d. The Fermi model chiral ring We return to the general case, eq. (5.1). The elements of the R s -module R a have the form The chiral ring of the Fermi model is then where I Van ⊂ R s is the annihilator ideal of the Vandermonde determinant ∆(z i ). We have the linear isomorphism (5.12) 5.2 tt * functoriality and the Fermi model tt * functoriality [3,5] yields a more convenient interpretation of the Fermi model. 
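Before reviewing tt * functoriality, a trivial enumeration check of the counting in eq. (5.7) above, which presumably gives the binomials C(d, N) and C(d + N − 1, N), i.e. the dimensions of the antisymmetric and symmetric N-th tensor powers of the d-dimensional one-particle vacuum space:

```python
from itertools import combinations, combinations_with_replacement
from math import comb

d, N = 5, 3
fermi = len(list(combinations(range(d), N)))                   # strictly increasing multi-indices
bose = len(list(combinations_with_replacement(range(d), N)))   # non-decreasing multi-indices

assert fermi == comb(d, N)              # C(d, N) = 10
assert bose == comb(d + N - 1, N)       # C(d+N-1, N) = 35
print(fermi, bose)
```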
Review of tt * functoriality Suppose the superpotential map W x : K → C factorizes through a Stein space S for all values of the couplings x ∈ X where f : K → S is a (possibly branched) cover independent of x. We wish to compare the tt * geometry over X of the LG model (S, V x ) with the original one (K, W x ). Let 65 ψ = φ ds 1 ∧ · · · ∧ ds N + (∂ + dV x ∧)η ∈ Λ 1 (S) (5.14) 65 The si's are coordinates on S. JHEP12(2019)172 be a vacuum wave-function of (S, V x ). The pull-back f * ψ is (∂ + dW x )-closed in K and not (∂ + dW x )-exact, hence cohomologous to a vacuum wave-form 66 of (K, W x ). Comparing Q-classes, we see that the pulled-back vacuum is the spectral flow of the chiral operator [3] f The linear map is an isometry of topological metrics compatible with the R S -module structures where (·) ⊥ stands for the orthogonal complement in the tt * metric. tt * functoriality is the statement that f : R S → f (R S ) is an isometry also for the tt * metric. To show this fact, one has to checks two elements: 1) the two tt * metrics solve the same tt * PDEs, and 2) they satisfy the same boundary conditions. Since the classes in R K of the operators ∂ xa W belong to the sub-space f (R S ) ⊂ R K , the first assertion follows from eqs. (5.18) and (5.17). The boundary conditions which select the correct solution to these PDEs are encoded in the 2d BPS soliton multiplicities [5]. The BPS solitons are the connected preimages of straight lines in the W -plane ending at critical points [5,13]. Since the map W x factorizes through V x (cfr. eq. (5.13)) so do the counterimages of straight lines, and the counting of solitons agrees in the two theories. Definition. A tt * -duality between two 4-susy theories is a Frobenius algebra isomorphism between their chiral rings which is an isometry for the tt * metric (hence for the BPS brane amplitudes). tt * functoriality produces several interesting tt * -dual pairs. See section 5.3 for examples. The standard lore is that tt * -duality implies equivalence of the full quantum theories for an appropriate choice of the respective D-terms. Thus tt * functoriality is a powerful technique to produce new quantum dualities. 66 In general, the actual wave functions are not given by the pull back since the D-terms do not agree. In the case of one-field the vacuum Schroedinger equation does not contain the Kähler metric, and the exact wave-function on K is the pull back of the one on S [17]. Application to Fermi statistics If the superpotential W(z 1 , · · · , z N ) is invariant under S N , it can be rewritten as a function of the elementary symmetric functions W(z 1 , · · · , z N ) W(e 1 , e 2 , · · · , e N ) (5.21) where e k := The superpotential W : C N → C factorizes through the branched cover map of degree N ! E : C N → C N , E : (z 1 , · · · , z N ) → (e 1 , · · · , e N ). (5.23) One pulls back the susy vacua of the LG model with superpotential W(e i ) getting non-trivial Q-cohomology classes, hence vacua of the W(z k ) theory up to Q-trivial terms. Then the pull back yields a correspondence E : |h e → |h∆ z , h ∈ R e ≡ C[e 1 , . . . , e N ]/(∂ e 1 W, · · · , ∂ e N W), (5.25) which is an isometry for the underlying TFT metric (cfr. eq. (5.12)). In other words, E sets an equivalence between the TFT of the W(e k ) model and the TFT of the Fermi sector of the W(z j ) model. By the arguments in section 5.2.1, E is also isometry of tt * metrics and hence of brane amplitudes. 
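A minimal sympy sketch of the cover map E used above, for a toy symmetric superpotential with N = 3 (the specific W is illustrative, not the Vafa model): the Newton identities rewrite W(z i ) as a polynomial in the e k , and the Jacobian of E equals the Vandermonde determinant up to sign, which is precisely the ∆(z) factor appearing in the correspondence |h⟩ e → |h∆⟩ z .

```python
import sympy as sp

z1, z2, z3 = zs = sp.symbols('z1 z2 z3')
t = sp.symbols('t')

# elementary symmetric polynomials: the chiral fields of the covering model
e = [z1 + z2 + z3,
     z1*z2 + z1*z3 + z2*z3,
     z1*z2*z3]

# Newton's identities: a toy symmetric superpotential W = sum_i z_i^3 - t sum_i z_i
# becomes a polynomial in the e_k
W_z = sum(z**3 for z in zs) - t * sum(zs)
W_e = e[0]**3 - 3*e[0]*e[1] + 3*e[2] - t*e[0]
assert sp.expand(W_z - W_e) == 0

# the Jacobian of the degree-N! cover E : (z_i) -> (e_k) is the Vandermonde
# determinant up to sign: the Delta(z) factor of |h>_e -> |h Delta>_z
J = sp.Matrix([[sp.diff(ek, z) for z in zs] for ek in e])
vandermonde = (z1 - z2) * (z1 - z3) * (z2 - z3)
ratio = sp.cancel(J.det() / vandermonde)
assert ratio in (1, -1)
print("det J / Vandermonde =", ratio)
```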
Examples of tt * -dualities tt * functoriality relates the fermionic version of a N -field LG model to some other 2d (2,2) supersymmetric system. In this subsection we present several examples of such tt * -dualities. Only the first one will be referred to in the rest of the paper; the following examples may be safely skipped. The chiral superfields u a ≡ P (x a ; e k ) are N linearly independent linear combinations of the N chiral superfields e k ; by a linear field redefinition we can take the u a to be the fundamental fields. e 1 is a certain linear combination of the u a ; by rescaling the u a 's we may write e 1 = u 1 + · · · + u N + const. Then 67 µ u a + m a log u a (5.31) i.e. the ν = 1 Fermi models is equivalent to N non-interacting copies of the one-field "Penner" model W (u) = µ u + m log u. Next, tt * functoriality with respect to the planeto-cylinder cover map x a → e xa ≡ µ u a (5.32) sets a tt * -duality between the ν = 1 Fermi system (5.28) and N free twisted chiral supermultiplets with twisted masses equal to the residues m a of the one-field rational differential dW = (µ + a m a (z + x a ) −1 )dz. The final superpotential is e xa + m a x a , (5.33) whose tt * equations were explicitly solved in [6,7,59]. We shall return to this basic example below. Example 16 (Polynomial ν = 1 models). We may consider N non-interacting copies of other LG systems with Witten index N (so that ν = 1) getting similar conclusion. For instance, we may take the one-field differential dW to have a single pole of order N + 2 at infinity JHEP12(2019)172 is an arbitrary monic polynomial of degree N + 1 which we take to be odd for definiteness. The Newton identities yield W(e 1 , . . . , e N ) = N/2 k=1 e k − (N + 1) e N +1−k + A k (e 1 , · · · , e N −k ) (5.36) for some polynomial A k (e 1 , · · · , e N −k ) which depends on the t j 's. The field redefinition which has constant Jacobian, reduces the Fermi model to N copies of the free Gaussian theory W(e 1 , · · · , e N/2 , u 1 , · · · , u N/2 ) = which has a single vacuum. That the tt * metric of the original Fermi model coincides with the one for the Gaussian theory is easily checked: the tt * metric of the ν = 1 Fermi model is the determinant of the one-field tt * metric (cfr. (5.8)); from the reality constraint [3] det g is just the absolute value of the determinant of the topological metric | det η|. η may be set to 1 by a change of holomorphic trivialization [8]. The covering E automatically implements such a trivialization. The wave-function of the unique vacuum of the Gaussian model (5.38), when written in terms of the original chiral fields z i , has the form This wave-function is cohomologous to the one obtained solving the Schroedinger equation in the original Fermi model, but not equal since the covering map E implicitly involves a deformation of the D-terms. The tt * metric, i.e. the Hermitian structure of the vacuum bundle is correctly reproduced since it is independent of the D-terms. We stress that the Vandermonde factor in the wave-function (5.39) is produced by the cover map E, not by an interaction in the superpotential. This is physically correct, since this is the wave-function at ν = 1 of a N non-interacting fermions. In particular, tt * functoriality automatically yields the correct ν = 1 Laughlin wave-function [24]. Example 17. 
More generally, we may take the rational differential dW to have a pole of order + 2 at infinity and N − > 0 simple poles in C with residues m a ; the corresponding Fermi model is reduced by tt * functoriality to a non-interacting system of ordinary free massive chiral multiplets and N − twisted ones e xa + m a x a . Comparing this expression with eq. (5.31), we see that already for ν = 1 assuming dW to have a double pole at ∞ leads to a nice simplification (besides of making the model better defined). Example 18 (tt * particle-hole duality). We consider the case we have N (super-)particles z i and d = N + 1 (vacuum) one-particles states, so the Fermi statistics (5.7) yields N + 1 vacua, which may be seen as single-hole states. For simplicity, we consider the model 42) where P N +2 (z) is any polynomial of degree N + 2. Going through the same steps as in example 16, we get (say for N even) Fermi statistics vs. Hecke algebras representations We consider the Fermi model of N decoupled LG systems where the rational differential dW has d ≥ N zeros. In addition we assume that the one-field theory W (z) yields a very complete symmetric tt * geometry, so the results of section 4.5 apply. The vacuum bundle of the N -particle model is JHEP12(2019)172 and its UV Berry connection is just the one induced in the N -index antisymmetric representation by the one-particle UV Berry connection. It is convenient to introduce the "Grand-canonical" bundle The total number of states is 2 d since each of the d one-particle (vacuum) states may be either empty or occupied. Remark. In (5.48) we added a direct summand V 0 which does not correspond to any LG model (the number of chiral fields N is zero). This can be done without harm since by the particle-hole duality (example 18) the extra summand is V 0 ∼ = V N , i.e. the trivial line bundle. In section 4.5 we associated a spin degree of freedom s (j) ( = 1, 2, 3) to the j-th one-particle vacuum: spin down ↓ (up ↑) meaning that the j-th state is empty (resp. occupied). Then where V (j) ∼ = C 2 is the space on which the s (j) act. A vacuum with occupied states ) corresponds (linearly) to the element of the N -particle chiral ring where {E j (z)} d j=1 is the canonical basis of the one-particle chiral ring R 1 . Note that the operators s W ∼ = V, (5.52) and the linear PDEs satisfied by the "grand-canonical" brane amplitudes Ψ is just the sl(2) Knizhnik-Zamolodchikov equation up to twist by "normalization" factors. In W one defines the operator number of particles aŝ The underlying one-particle model, having a very complete symmetric tt * geometry, defines a Kohno connection acting on V ⊗d that we argued has the sl(2) KZ form up to an overall twist, i.e. for some constants λ and ξ. We shall see momentarily that the constant ξ is related to λ. JHEP12(2019)172 The eigen-subbundles V N ≡ ker(N − N ) ⊂ W are preserved by parallel transport with D, and hence define a monodromy representation π 1 (X ) (also denoted 68 by V N ) which is the one associated to the N -field Fermi theory (5.46). For most N 's this representation is highly reducible. Indeed, the eigen-bundles of the operator L 2 ≡ L L are also preserved by parallel transport. So one has the monodromy invariant decomposition 55) and Since sl(2) diag centralizes the monodromy representation 57) in particular the monodromy representations V N , V d−N are isomorphic. This is a manifestation of the particle-hole duality in Fermi statistics explicitly realized in terms of the tt * -dualities described in section 5.3. 
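The counting behind the decomposition just described can be checked directly. The short script below is an illustration under the assumption that the d one-particle vacua behave as d spin-1/2 degrees of freedom, as in section 4.5: it computes the multiplicities of the sl(2)_diag spin sectors inside W = (C^2)^{⊗d}, reproducing the Catalan-triangle ranks quoted in the next paragraph, the identity rank V_N = C(d, N), and the particle-hole isomorphism V_N ≅ V_{d−N}.

```python
from math import comb

d = 8          # number of one-particle vacua; take d even so total spins are integral

def spin_multiplicity(l, d):
    """Number of total-spin-l irreps of sl(2)_diag inside (C^2)^{tensor d}."""
    k = d // 2 - l
    return comb(d, k) - (comb(d, k - 1) if k >= 1 else 0)

# the grand-canonical bundle W = (C^2)^{tensor d} has rank 2^d
assert sum((2 * l + 1) * spin_multiplicity(l, d) for l in range(d // 2 + 1)) == 2 ** d

for N in range(d + 1):                    # fixed fermion number N
    m = N - d // 2                        # L_3 eigenvalue of the sector V_N
    # ranks of the eigen-bundles V_{l,m}: the "Catalan triangle" of the text
    ranks = [spin_multiplicity(l, d) for l in range(abs(m), d // 2 + 1)]
    assert sum(ranks) == comb(d, N)       # rank V_N = C(d, N)
    assert comb(d, N) == comb(d, d - N)   # particle-hole duality V_N ~ V_{d-N}
    # the top sector l = d/2 always has rank 1: the "preferred" vacuum
    assert spin_multiplicity(d // 2, d) == 1
```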
The non-zero eigenbundles V l,m have ranks given by the Catalan triangle The eigen-bundle V d/2,N −d/2 has rank 1, i.e. it contains a unique monodromy invariant vacuum |d/2, N − d/2 . |d/2, N − d/2 is a "preferred" vacuum for the Fermi model with N -fields. It is tempting to identify it with the one discussed in section 2.3. The fact that it is invariant under the monodromy representation is already a strong suggestion that this is the case. For a fixed number of particles N the determinant of the brane amplitudes, det Ψ, is a constant section of the line-bundle corresponding to the preferred vacuum for the N -particle Fermi model. The overall twist ξ in eq. (5.54) is fixed by the requirement that the preferred vacuum has trivial monodromy. This fixes ξ in terms of λ In other words, the normalized amplitudes Ψ norm are related to the KZ ones Ψ by the formula with Ψ priv a parallel section of the line-bundle V d/2,d/2 . In particular the normalized monodromy is trivial for the ν = 1 case. 68 When no confusion is possible, we identify the monodromy representation with its representation space. Relation with the Heine-Stieltjes theory We consider a LG model with N chiral fields with superpotential differential where dW (z i ) is a rational differential with d zeros and a pole of order 1 ≤ ≤ d + 2 at ∞. Generically, dW has p ≡ d + 2 − simple poles at finite points {y 1 , · · · , y p } ⊂ C (all distinct), i.e. for some degree d polynomial B(z) coprime with A(z). The LG model proposed by Vafa to describe FQHE has the form (5.62) with the residues of dW equal ±1 and 2β = 1/ν. We think of this model as defined on the quotient Kähler manifold K is affine (hence Stein). Indeed, the basic chiral fields are the elementary symmetric functions, e k = e k (z j ); we identify the field configuration (e 1 , · · · , e N ) configuration with the degree N monic polynomial where S is the hypersurface (divisor) where discr P ) and Res(A, P ) are the discriminant and the resultant of the polynomials seen as functions of the coefficients e 1 , · · · , e N of P (z) for fixed A(z). A vacuum configuration of the model (5.62) defined on the quotient manifold K (i.e. up to S N action) is described by a degree N monic polynomial P (z) ≡ P (z; e k ) as in eq. (5.65) which satisfies the Heine-Stieltjes differential equation where f (z) is a polynomial of degree d−1. The degree N polynomials P (z) which solve this equation for some f (z) are called Stieltjes polynomials; to each of them there corresponds a degree d−1 polynomial f (z), called its associated van Vleck polynomial. The Heine-Stieltjes theory is reviewed in the context of tt * in ref. [6]. We refer to the vast literature for further details. JHEP12(2019)172 If dW is a generic rational differential, with just simple poles in P 1 , eq. (5.68) is a generalized d-Lamé equation. The d-Lamé equation [114] is the special case β = 1 and dW = d log Q(z), where Q(z) is a polynomial (which we may choose square-free and monic with no loss) of degree d. Taking the same differential dW , but choosing β = −1, the superpotential W in eq. (5.62) becomes the Yang-Yang [48,123] functional (and its exponential the master function [124,125]) of the sl(2) Gaudin integrable model on V ⊗d , the Heine-Stieltjes equation is equivalent to the corresponding algebraic Bethe ansatz equations, and the roots of P (z) are the Bethe roots [48,126]. 
The case most relevant for us is when precisely one of the poles in P 1 is double: then the Heine-Stieltjes equation is a confluent generalized d-Lamé equation. 69 The ODE is equivalent to the Bethe ansatz equation for the Gaudin model with an irregular singularity [48,[127][128][129]. Going to the confluent limit is very convenient, as we have already observed. In the Gaiotto-Witten language [48], passing to the confluent limit corresponds to breaking the gauge symmetry by going in the Higgs branch of the 4d N = 4 gauge theory ("complex" symmetry breaking [48]). A basic result of Heine-Stieltjes theory states that the number of solutions (P (z), f (z)) of eq. (5.68) is (at most, and generically) By definition, this is also the Witten index of the model (K, W) hence, by eq. (3.9), the dimension of the appropriate relative homology group. By construction [6], solving the Heine-Stieltjes equation (5.68) is equivalent to solving the equation dW and considering the solutions modulo S N . Explicitly, which is a generalization of the Algebraic Bethe Anzatz equation for the Gaudin model (the z i 's are analogue to the Bethe roots). The Gaudin model arises from the semi-classical limit of the solutions to the Knizhnik-Zamolodchikov, and it is natural to expect that the relation remains valid in the present slightly more general context. Remark. In the (related) context [130][131][132] of large-N matrix models the Heine-Stieltjes equation (5.68) is called the Schroedinger equation. The obvious guess is that -in order to get the correct FQHE phenomenology -one has to consider only the subspace The fermionic truncation of vacua which survive in the limit β → 0. In this limit all other vacua |ω ∈ V ⊥ Fer escape to the infinite end of K, that is, they fall into the excised divisor S, cfr. eq. (5.66). The fermionic truncation from V to V Fer is geometrically consistent if and only if it is preserved under parallel transport by the tt * flat connection, that is, if V Fer is a sub-representation 70 of the monodromy representation. Since the flat tt * connection is the Gauss-Manin connection of the local system on X provided by the BPS branes (for fixed ζ ∈ P 1 ), this is equivalent to the condition that the model has d N preferred branes which remain regular as β → 0 and span the dual space to V Fer . Luckily, the fermionic truncation has already been studied by Gaiotto and Witten in a strictly related context, see section 6.5 of [48]. They show that preferred branes with the required monodromy properties do exist. We review their argument in our notation. We assume that the rational one-form dW has a double pole at infinity of strength µ and d simple poles in general positions. Then B(z) = µ A(z) + lower degree, (5.73) and eq. (5.68) becomes 2β A(z) P (z) + µ A(z) + · · · P (z) = µf (z) P (z). The monodromy representation is independent of µ as long as it is non-zero. One takes µ finite but very large (the reasonable regime for FQHE). Up to O(1/µ) corrections the zeros of P (z) coincide with zeros of A(z) and hence with the zeros of B(z). The fermionic truncation amounts to requiring that their multiplicities are at most one, i.e. that the polynomials P (z) and P (z) are coprime. In this regime, the product of N one-particle 70 Since the integral monodromy representation is not reductive in general, V Fer needs not to be a direct summand of the monodromy representation. 
However the contraction of the monodromy given by the UV Berry monodromy is reductive, so V Fer must be a direct summand of the UV Berry monodromy. JHEP12(2019)172 Lefshetz thimbles starting at distinct zeros of B(z) is approximatively a brane for the full interacting model; while the actual brane differs from the product of one-particle ones by some O(1/µ) correction, they agree in homology and this is sufficient for monodromy considerations. Essentially by construction, the Fermi truncation is equivalent for the purpose of tt * monodromy to deleting the Vandermonde interaction from the superpotential (i.e. to setting β = 0) while inserting a chiral operator of the form ∆(z i ) 2β in the brane amplitudes. Hereβ is a kind of "renormalized" version of β. To understand this operation we have preliminary to dwell into some other aspects of tt * geometry which we discuss next. tt * geometry for non-univalued superpotentials The basic version of tt * geometry works under the assumption that the superpotential W is an univalued holomorphic function K → C. Supersymmetry only requires the one-form dW to be closed and holomorphic, but not necessarily exact. When the periods of the differential dW do not vanish, the topological sector of the SQM is non-standard. This aspect is more transparent in the (equivalent) language of the 2d TFT obtained by twisting the 2d (2,2) QFT with the same Kähler target K and (multivalued) superpotential W. The 2d TFT is well-defined also when dW is not exact, but now it is not always true that an infinitesimal variation δx of the parameters entering in W, is equivalent to the insertion in the topological correlators of a 2-form topological observable ∂ x W (2) , since ∂ x W may be multivalued, and hence not part of the TFT chiral ring R. Thus, while the TFT exists, it does not define a structure of Frobenius manifold on the essential coupling space X , and the dependence of the topological correlations on the parameters x ∈ X is not controlled by the Frobenius algebra R. Since tt * geometry is obtained by fusing together the topological and anti-topological sectors, this means that the PDE's which govern the dependence of the tt * amplitudes on x cannot be written remaining inside R: one needs to enlarge R to a bigger Frobenius algebra. Abelian covers We review the procedure in detail since the Vafa model of FQHE involves all possible subtleties in this story. In this section we work in full generality: K is any Stein (hence complete Kähler) field space endowed with a family of holomorphic superpotential oneforms dW x , parametrized by x ∈ X , which are closed but not exact. An obvious way to get univalued superpotentials W x and reduce ourselves to ordinary tt * geometry, is to enlarge the model by replacing the field space K by its universal cover K u −→ K endowed with the pulled back superpotential one-form u * dW x which is automatically exact on K. However, typically, this universal extension of the theory introduces insuperable and unnecessary intricacies. A more economic fix is to replace K by its universal (Galois) Abelian cover A, i.e. the cover A → K with deck group the Abelianization π 1 (K) Ab of π 1 (K) and π 1 (A) = [π 1 (K), π 1 (K)]. (5.76) JHEP12(2019)172 If π 1 (K) Ab contains torsion, we may further reduce the cover to A/(π 1 (K) Ab ) tor . 
To keep the formulae simple, we assume π 1 (K) Ab to be torsion-free, and concretely define A as the quotient of the space of paths starting at a base point * ∈ K by a suitable equivalence relation endowed with the projection The superpotentials W x are well-defined on A. A is the smallest cover such that all superpotentials are defined. By construction, the cover (5.79) is Galois with Galois group the Abelianization of the fundamental group The deck group π 1 (K) Ab acts freely and transitively on the pre-images of any point, i.e. A → K is a principal π 1 (K) Ab -bundle. A is automatically Stein [25]. Since the first Betti number b 1 (K) > 0, the cover A → K has infinite degree, which means that each vacuum of the original SQM defined on K has infinitely many pre-images in A which are distinct vacua for the Abelian cover SQM, which then has Witten index ∞ · d. Luckily, this additional infinity in the number of vacua causes not much additional trouble. π 1 (K) Ab acts as a symmetry of the covering quantum system, and hence its vacuum space V A decomposes in the orthogonal direct sum of unitary irreducible representations of π 1 (K) Ab . The group is Abelian, and all its irreducible representations are one-dimensional. The fiber of (5.79) carries the regular representation of π 1 (K) Ab , and each irreducible representation appears with the same multiplicity d. Then we have an orthogonal decomposition of the vacuum bundle V A → X into π 1 (K) Ab eigen-bundles associated to the irreducible (multiplicative) characters of π 1 (K) Ab, This orthogonal decomposition is preserved by parallel transport with the Berry connection D (since D is metric), but not in general by the flat connection ∇ (ζ) . There are two ways to remedy this. The first is to consider the UV Berry monodromy representation. This contraction of the tt * monodromy representation is unitary and metric, hence preserves the orthogonal decomposition (5.81). The second in discussed in section 5.7.2. Identifying π 1 (A) Ab (modulo torsion) with Z b , we write the characters as and call the states in the eigen-bundle V θ ≡ V χ θ the θ-vacua [3,6,7]. JHEP12(2019)172 Example 20. Consider the SQM with K ≡ C × and where m ∈ X ≡ C × . This is the basic model entering in the description of the ν = 1 phase of FQHE (cfr. example 15). The Abelian cover of K is C, : X → e X ≡ Z, whose Galois group Z acts as k : X → X +2πik; the corresponding characters are θ : k → e iθk , θ ∈ [0, 2π). Since the model is free, its Witten index is 1, and the representations θ : π 1 (X ) ≡ Z → C × are one-dimensional. One finds θ (s) = e is(θ−π) (s ∈ Z). The vacuum |θ ∈ V θ is (up to normalization) the one corresponding to the chiral operator z θ/2π ∈ R A . In particular, the brane amplitudes in character θ (which, for the present model, were computed explicitly in [6,7]) contain the insertion of z θ/2π , so that the effective mass parameter entering in the asymmetric limit amplitudes is m eff = m − iζθ/2πR [6]. The so-called θ-limit consists in taking the coupling in the superpotential to zero, m → 0, while keeping m eff fixed. General Abelian covers Let H ⊂ π 1 (K) Ab a subgroup, and let A H = A/H. We have an Abelian cover and we may consider the 4-susy SQM with target space A H which is well-defined. 
One has where β is the surjective group homomorphism The susy vacua of the LG theory formulated on A H may be identified with the H-invariant vacua of the universal Abelian covering theory, that is, The A H model has its own (generalized) BPS branes, which lift to branes of the cover A theory, and its vacuum-to-brane amplitudes are preserved by parallel transport with respect to the tt * Lax connection. Thus, even if each V χ may not be preserved by the brane monodromy representation Mon, we have one monodromy sub-representation Mon H ⊂ Mon for each subgroup H ⊂ π 1 (K) Ab . This is an important condition on the monodromy representation Mon. In particular, we may choose H to be of finite index in π 1 (A) Ab , so that Gal(A H /K) is a finite Abelian torsion group. In this case the theory on A H has finite Witten index JHEP12(2019)172 To a sequence of finite-index subgroups there corresponds an inverse sequence of tt * monodromy sub-representations where Mon H 0 is the monodromy representation for the original model defined on K. Finite covers vs. normalizable vacua It follows from the above that not all characters χ ∈ (π 1 (K) Ab ) ∨ are created equal. Suppose χ is torsion, that is, and let J χ ⊂ (π 1 (K) Ab ) ∨ be the finite cyclic group generated by χ, and N χ = ker χ ⊂ π 1 (K) Ab the corresponding finite-index normal subgroup In this case we may reduce from an infinite to a finite cover Such a finite cover (5.94) is much better behaved that , e.g. if K is affine χ is a regular morphism of affine varieties. 71 From the physical viewpoint, torsion characters χ ∈ (π 1 (K) Ab ) ∨ have the special property that they allow a consistent truncation of the chiral ring R A to a finite-dimensional ring R χ so that the θ-vacua | θ, a , θ ∈ J χ become normalizable, while they are never normalizable for χ non-torsion. Normalizability of the ground state(s) is a basic principle in quantum mechanics. tt The periods of dW x define an additive character of where T n is the element of the deck group corresponding to n ∈ Z b . We assume all components of ω i ≡ ω(x) i to be non-zero and Q-linearly independent (otherwise we consider a smaller Abelian cover and reduce to this case). We choose the local coordinates in X so that the first b coordinates are the ω i 's, writing t a for the remaining ones such that ∂ ta W are well-defined holomorphic functions on K, representing elements of R K . We write the character χ in the form n → e i n· θ . 71 See e.g. [136] page 124. JHEP12(2019)172 We consider the rank-d vector bundle V θ → X for a fixed character θ, endowed with the tt * Hermitian metric G( θ ). In the canonical trivialization G( θ ) satisfies the reality condition As shown in [7], the metric G( θ ), seen as a function of the variables for fixed t a , satisfies the 3b-dimensional analogue of the 3d non-Abelian Bogomolnji monopole equations with gauge group U(d). Indeed the "Higgs field" in the ω i direction, C ω i , becomes an U(w k ) covariant derivative in the θ i direction At fixed t a , the components of the tt * flat connection take the form Seeing the ϑ i 's as complex coordinates with real part θ i , and introducing the new complex coordinates (η i , ξ i ) (i = 1, . . . , b) which defines a P 1 family of complex structures parametrized by the twistor variable ζ, and a flat hyperKähler geometry with holomorphic symplectic structures i.e. the first-order differential operators D just say that the brane amplitudes Ψ(ζ) are holomorphic in complex structure ζ and independent of Im ϑ i [7]. 
The tt * equations then say that the curvature of the connection D (ζ) on the flat hyperKähler manifold is of type (1,1) in all complex structures, i.e. Ψ(ζ) is a section of a hyperholomorphic vector bundle [7]. The hyperholomorphic condition, supplemented by the condition on translation invariance in Im ϑ i , is equivalent to the higher dimensional generalization of the Bogomolnji monopole equations on (R 2 × S 1 ) b . The tt * geometry decomposes into an Abelian U(1) monopole and a non-Abelian SU(d) monopole. The monopoles are localized at loci in parameter space X where the mass gap of the 2d LG model closes. Thus each such locus carries an Abelian and a non-Abelian JHEP12(2019)172 magnetic charge. Restricted to the Abelian part, the tt * equations become linear; writing L( θ) = − log det G( θ), they read These equations hold in regions in parameter space where the model has a mass-gap; on the massless locus there are sources in the r.h.s. localized at trivial characters, that is, they are the loci where a non-zero abelian magnetic charge is present. In the i-th factor 3-space of coordinates ω i , θ i (all other fixed) this is a real codimension 3 locus. We note that the equations (5.104) are identical to the HKLR equations [133] describing a hyperKähler metric H n of quaternionic dimension n with n commuting Killing vectors K a such that their Sp(1) orbits span T H n . For instance, for the model in example 20 the Kähler manifold H 1 is the Hoguri-Vafa space [134] (a.k.a. periodic Taub-NUT). This is the target space of the GMN 3d σ-model obtained compactifing 4d N = 2 SQED [135], and the brane amplitudes Ψ(ζ) -which are locally holomorphic functions in complex structure ζ -coincide with the GMN holomorphic Darboux coordinates [6,59]. The Abelian part of the Berry connection is The tt * relation 72 [A w i , C ta ] = [A ta , C w i ], together with eq. (5.98), implies Taking the trace gives ∂ θ i A ta = 0; since A ta is odd in θ, we conclude that the t a -components of the U(1) connection vanish. The covering chiral ring R A The chiral ring R A of the (torsion-free 73 ) universal Abelian cover SQM has a simple form. 74 For a LG model with target a Stein manifold K and a superpotential differential dW with finitely many simple zeros, the chiral ring R is identified with the space of functions on the critical set {dW = 0}. This remains true for the LG model uplifted to the torsion-free Abelian cover A of K. Let us sketch the construction. Since K is Stein 75 Then we may find holomorphic one-forms k ∈ Ω 1 (K) (k = 1, · · · , b) whose classes generate H 1 (K, Z)/tor. The critical set of W in A (≡ classical vacua in the Abelian cover model) is Here A is the full Berry U(d) connection. 73 By the torsion-free Abelian cover we mean eqs. (5.77)(5.78) where we replace H1(K, Z) with H1(K, R) in the definition of the equivalence relation ∼. The LG models relevant for FQHE have free H1(K, Z), so the distinction is immaterial. 74 In the following argument the assumption that K is Stein is crucial. 75 See pages 445, 449, or 451 of [23], or theorem G on page 198 of [27]. JHEP12(2019)172 Adding to k an exact term we may assume with no loss l k ∈ Z for all l ∈ cri A . (5.110) On A there exist global holomorphic functions h k such that a = dh k . Now let {φ a } ∈ R K be holomorphic functions on K which form a basis of the chiral ring for the original model, with φ 0 = 1 K and product table φ a φ b = C ab c φ c . Clearly the yield a topological basis of R A diagonal in the characters of H 1 (K, Z)/tor. 
The product From this it also follows that the UV Berry connection A( θ) uv is a piece-wise linear function of θ [6]. The discontinuous jumps of A( θ) uv correspond to gauge transformations, and the characters of the monodromy representation are continuous. For generic θ the eigenvalues of the monodromy matrices are distinct, and hence no Jordan blocks are present; at characters where we have "jumps" typically a non-trivial Jordan blocks appear. A fancier language For the sake of comparison with the literature on representation of braid groups and the Knizhnik-Zamolodchikov equation [11,55] we state the above result in a different way. We write q i = e iθ i for i = 1, . . . , b. Clearly R A is a module over the ring C[{q ±1 i }] of Laurent polynomials in q 1 , · · · , q b . The isomorphism 76 allows us to restrict the scalars to Z. Thus i }]-module of rank d. Then the tt * Lax connection defines a group homomorphism (cfr. eq. (3.26)). 6 tt * geometry of the Vafa 4-susy SQM Now we have all the tools to analyze the Vafa model of FQHE. Generalities For simplicity, we take the N electrons to move in the plane C instead of the more rigorous treatment in which they move in a periodic box (i.e. a large 2-torus E). We write z i , x a (a = 1, . . . , n), and y α (α = 1, . . . , M ) for, respectively, the positions of the electrons, of the quasi-holes, and the support of the polar divisor D ∞ ∼ D which models the magnetic flux (cfr. section 2.2). The points d ≡ n + M points {x a , y α } ⊂ C are all distinct. The Vafa model is the LG SQM with target space In the experimental set-up N is very large, while N/d = ν and n are fixed. Despite this, we shall keep N arbitrary as our arguments apply both for N small and large. We have already noted in section 5.5 that K d,N is an affine variety It is convenient to write K d,N = P N \ T , where T is the obvious divisor. By Hironaka theorem we may blow-up the geometry so that with S snc a normal crossing divisor (see [55] for details). The superpotential is We have introduced the coupling µ to make the problem better behaved. Note that, as long as µ is not zero, it can be set to 1 by a field redefinition. The superpotential W is not univalued in K. As discussed in section 5.7, we have two kinds of couplings: the ω-type given by the residues of dW at its poles, and the t-type given by the positions x a , ζ α . The residues of the poles of dW at x a and ζ α are frozen to the values ±1 by the argument in section 2.2.4, and the corresponding couplings will play no role in the following discussion. The only relevant ω-type coupling is β. Working in a periodic box, β is frozen to the rational number 1/(2ν) (cfr. section 2.2.7); on the contrary, when the electrons move on C, the SQM model makes sense for an arbitrary complex β. The monodromy representation is independent of β, and we are free to deform it away from its physical value 1/(2ν) to simplify the analysis. The non-frozen couplings are the x a and the ζ α which form a set of d distinct points in C identified modulo permutation of equal "charge" ones. The manifold of essential couplings is then where Y n is the space defined in (4.8). The ζ α 's are homogeneously distributed on C, and their detailed distribution is not very important for our present purposes, so we mainly focus on the projection on Y n . One has 1 → P n+M → π 1 (X ) → S n × S M → 1. (6.8) π 1 (X ) contains B n as a subgroup. 
The UV Berry connection yields a family of unitary arithmetic representations of π 1 (X ); restricting to B n we get a monodromy representation parametrized by the characters θ of Gal(A/K). Before the fermionic truncation the number of vacua with fixed θ is which reduce to just d N after the truncation. JHEP12(2019)172 where B(n, S g,p ) stands for the braid group in n strings on the surface S g,p of genus g with p punctures. 77 B(n, S 0,p ) has the following convenient presentation ( [137] thm. 5.1): generators: σ 1 , σ 2 , · · · , σ n−1 , z 1 , z 2 , · · · , z p−1 (6.12) relations: The σ i generate a subgroup of B(n, S 0,p ) isomorphic to the Artin braid group B n . Then B(n, S 0,p ) Ab = the free Abelian group in the generators σ, z 1 , · · · , z p−1 ∼ = Z p , (6.14) which corresponds to H 1 (K, Z) ∼ = Z p with generators (cfr. eq. (6.2)) A priori, there is one angle associated to each of these generators; let as call them θ, φ a , ϕ α , (6. 16) respectively. If (as physically natural) we consider the quasi-holes and the magnetic-flux units to be indistinguishable we shall takes the corresponding angles to be all equal φ a ≡ φ and ϕ α ≡ ϕ. In the formalism developed in section 5.7.2, this corresponds to taking the quotient group Z ⊕ Z ⊕ Z of In particular π 1 (A H ) ∼ = Z 3 . Therefore, a priori, we have three angles θ, φ and ϕ. Setting q = e iθ , t = e iφ , and y = e iϕ we conclude: Fact 5. In the LG model with indistinguishable defects, the BPS branes span a free Z[q ±1 , t ±1 , y ±1 ]-module of rank d d,N . Normalizability of the ground states requires specialization to q, t and y roots of unit. However the physical FQHE is a much simpler quantum system, and further truncations are present. We shall dwell on this issue in section 6.4. Before going to that, we present a different application of the tt * geometry of a special case of the LG model (6.4). 77 In this notation the standard (Artin) braid group is Bn = B(n, S0,1). Homological braid representations as tt * geometries The theory of general homological braid representations [11,47,55,91,92] is just a special topic in tt * geometry. For the sake of comparison with the geometry of the Vafa model, we briefly review that story following [55,91,92] but using tt * language. There is a sequence of such monodromy representations Mon N for the braid group B n labelled by an integer N ∈ N [47]; for N = 1 we get the Burau representation [10,53], and for N = 2 the Lawrence-Krammer-Bigelow one [11,91,92]. Mon N (B n ) is just the tt * (Lax) monodromy representation for the superpotential (6.5) for N electrons and n quasi-holes with µ = 0 (which makes things a lot less nice), M = 0 and β a real positive number, say 1. The quasi-hole are indistinguishable. Since M = 0, there are no angles ϕ α and eq. (6.18) reduces to H ≡ ker H 1 (K n,N , Z) → Z ⊕ Z. (6.19) One defines the LG model on the Abelian cover A H , so that the BPS-branes at given ζ corresponding to the TFT metric η (cfr. eq. (3.26)) given by the standard tt * formula already written in the original paper [3]. In the present context, and for the special case N = 2, it is called the Blanchfield pairing [91,138,139] (for details, see e.g. section 3.3.5 of the book [11]). For generic q and t, the monodromy representation (6.21) is equivalent to a subrepresentation of a sl 2 Kniznick-Zamolodchikov representation on the sub-bundle of the eigenbundle of the total angular momentum L 3 corresponding to the N electron sector (cfr. section 5.4) of higher weight states, see [55] for details. 
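For orientation, the N = 1 (Burau) case mentioned above can be written down in closed form. The SymPy sketch below is our own illustration, with q denoting the Laurent variable and the standard unreduced Burau matrices (which may differ from the precise normalization used in [10,53]); it checks that these matrices satisfy the Artin braid relations and are invertible over Z[q, q^{-1}].

```python
import sympy as sp

q = sp.symbols('q')
n = 4                                        # number of strands of B_n

def burau(i, n):
    """Unreduced Burau matrix of the Artin generator sigma_i (1-indexed)."""
    M = sp.eye(n)
    M[i - 1, i - 1] = 1 - q
    M[i - 1, i] = q
    M[i, i - 1] = 1
    M[i, i] = 0
    return M

s = {i: burau(i, n) for i in range(1, n)}

# Artin braid relations sigma_i sigma_{i+1} sigma_i = sigma_{i+1} sigma_i sigma_{i+1}
for i in range(1, n - 1):
    lhs = s[i] * s[i + 1] * s[i]
    rhs = s[i + 1] * s[i] * s[i + 1]
    assert (lhs - rhs).expand() == sp.zeros(n)

# distant generators commute, and each generator is invertible over Z[q, q^{-1}]
assert (s[1] * s[3] - s[3] * s[1]).expand() == sp.zeros(n)
assert sp.expand(s[1].det()) == -q
```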
6.4 The FQHE quantum system 6.4.1 Characters of Gal(A/K) The FQHE quantum system is a particular version of the 4-susy LG model with superpotential W in (6.5). Quasi-holes and magnetic-flux units are indistinguishable, but we have still to fix the characters θ, φ, and ϕ. The sole purpose of the poles in dW at the points ζ α is to mimic the external magnetic field B via the isomorphism in section 2.2. The discussion in that section was done entirely 78 The shift n → n − 1 is due to µ = 0. JHEP12(2019)172 in K, without any mention of a non-trivial Abelian cover A H , and hence referred to the trivial character ϕ = 0 mod 2π. Therefore we set to zero the angles associated to the generators of H 1 (K, Z) of the form d log P (ζ α )/2πi. This may look as a simplification, but it has a technical drawback. With this choice of character the genericity condition in (say) ref. [55] fails, and several standard results do not longer apply. The quasi-holes may be though of as "wrong-sign" elementary magnetic fluxes, so it looks natural to expect that the characters associated to the generators d log P (x a )/2πi should also be trivial, φ = 0 mod 2π. Clearly, one may extend the analysis to φ = 0. The previous caveat apply to this character as well. We remain with just one non-trivial angle θ associated to the Vandermonde coupling β. From the considerations in section 5.7.3 we expect θ to be rational multiple of 2π JHEP12(2019)172 Keeping into account the Jacobian of {z i } → {e k }, section 5.2.2, the vacuum wave-functions in terms of the z i 's contain the factor i<j (z i − z j ) 1+θ/π , 0 ≤ θ ≤ 2π. (6.29) Comparing with the Laughlin wave-functions [24] we are led to the identification 1 ν = 1 + θ π = 2 ± a b (6.30) which gives 1 ≤ 1/ν ≤ 3. In particular, the minimal b-torsion character, a = 1, yields the FQHE principal series [1] Although this series are the most natural LG quantum systems of the form (6.5), it is by no means the only possibility in the present framework. The tt * geometry of the Vafa model is very complete We are reduced to the fermionic truncation of the model (6.4) which allows us to effectively put the coupling β to zero. Then, provided we may show that the tt * geometry of the onefield model is very complete, we may apply the arguments of section 5.4 and conclude that the monodromy representation factors through a Hecke algebra (in facts through the Temperley-Lieb algebra). Again we follow [48]. The monodromy representation is independent of µ, and we choose it to be very large µ 1. The susy vacuum equations (the Bethe ansatz equations in the language of [48]) have n + M solutions of the form z = x a + O(1/µ) or z = ζ α + O(1/µ) and the critical values, rescaled by a factor µ −1 , are w 1 , · · · , w n+M = x 1 , x 2 , · · · , x n , ζ 1 , ζ 2 , · · · , ζ M + O 1 µ log µ . (6.34) So that the cover C n+M of the coupling space X (cfr. eq. (6.7)) is naturally identified with the cover C n+M of the critical value space. Since to get X we quotient out only the subgroup S n × S M ⊂ S n+M , we need to work on a cover of the actual critical value space Y n+M ≡ C n+M /S n+M , but this is immaterial for the monodromy representation of the subgroup B n . The commutative diagram (4.12) takes the form where all maps are canonical projections. This shows that the one-field theory (hence the tensor product of N decoupled copies of it) has a very complete tt * geometry. 
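The statement that the vacua sit at z = x_a + O(1/µ) or z = ζ_α + O(1/µ) for large µ is easy to test numerically. The sketch below uses a deliberately simplified stand-in (a single field, all residues set to +1, no Vandermonde interaction, and made-up defect positions w_a), not the full superpotential (6.5); it checks that the critical points of dW = (µ + Σ_a 1/(z − w_a)) dz collapse onto the defect positions at rate 1/µ.

```python
import numpy as np

# Simplified stand-in for the large-mu vacuum equations: a single field z with
# dW = ( mu + sum_a 1/(z - w_a) ) dz and all residues set to +1 (hypothetical
# defect positions w_a; the actual model (6.5) has residues +1 and -1 and the
# 2*beta Vandermonde interaction, removed here as in the fermionic truncation).
rng = np.random.default_rng(0)
w = rng.normal(size=6) + 1j * rng.normal(size=6)     # d = 6 defect positions
mu = 200.0                                           # "large mu" regime

# dW = 0  <=>  mu*A(z) + A'(z) = 0  with  A(z) = prod_a (z - w_a),
# since sum_a prod_{b != a}(z - w_b) = A'(z).
A = np.poly(w)                       # coefficients of the monic polynomial A(z)
dA = np.polyder(A)
vacua = np.roots(mu * A + np.pad(dA, (1, 0)))        # pad to align degrees

# each vacuum sits within O(1/mu) of one of the w_a, as stated in the text
dist = np.abs(vacua[:, None] - w[None, :]).min(axis=1)
print(f"max |z_vac - w_a| = {dist.max():.2e}  vs  1/mu = {1/mu:.2e}")
```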
The ideas of section 4 lead to the conclusion that the UV Berry monodromy representation of π 1 (Y n ) ≡ B n is given by the holonomy in Y n of a flat sl(2) Kohno connection acting on the space V n+M , restricted to the subspace of total angular momentum L 3 = N − (n + M )/2. (6.37) In eq. (6.36), λ(θ) is some piece-wise linear function of θ. In the context of actual FQHE the character θ is expected to be related to the filling fraction ν as in eq. (6.30). In particular, the monodromy representation factors through a Temperley-Lieb algebra. It remains to compute the function λ(θ). Remark. One expects a simple relation between the monodromy of the Knizhnik-Zamolodchikov connection (6.36) and the homological one associated to the asymmetric limit. It is known that for generic angles θ, φ and ϕ the monodromies associated to the sl(2) Gaudin model with an irregular singularity at ∞ (i.e. with µ = 0) yield all monodromy representations of the sl(2) eigenvectors [48,129]. However here we have three major sources of difference with the situation studied in the math literature: A. the fermionic truncation: we consider a sub-module of B ± of "small" rank; B. the angles are very non-generic. The math arguments do not apply; C. the representation is twisted by the one-dimensional one given by the overall normalization factor 1/τ (w j ) N that has an important effect as example 11 shows. Determing λ(θ) Computing λ(θ) directly is hard and subtle. 79 Therefore we shall take a different approach, namely try to fix it using the properties that it should have and consistency conditions. We fix the (discontinuous) function λ(θ) mod 1. In order not to get confused by tricky issues of signs and bundle trivializations, we focus on the intrinsically defined quantity, q(θ) 2 , namely the ratio of the two distinct eigenvalues of σ 2 i ∈ P n+M , i.e. of the operation of transporting one quasi-hole around another and getting back to the original position after a 2π rotation of their relative separation w i − w j . For the connection (6.36) one has q(θ) 2 = exp 2πi λ(θ) . (6.38) Since the tt * geometry is very complete and symmetric between the quasi-holes, we conclude that λ(θ) is a universal function which does not depend on n, M . Moreover we know that it must be piece-wise linear, i.e. λ(θ) = C 1 + C 2 θ π mod 1, (6.39) 79 See appendix A of [6] for an example of how tricky the computation may be even in simple examples. JHEP12(2019)172 for some real constants C 1 , C 2 . We may assume C 2 > 0 by flipping the sign of θ if necessary. Requiring q(θ) 2 to satisfy the periodicity and "reality" conditions q(θ + 2π) 2 = q(θ) 2 q(−θ) 2 = q(θ) −2 , (6.40) we get 2C 1 = 0 mod 1 and 2C 2 = 0 mod 1. Imposing the same conditions on the ratio q(θ) of the eigenvalues of the braid generator σ i would give the stronger conditions C 1 = 0 mod 1 and C 2 = 0 mod 1. The simplest solution to these conditions is λ(θ) = θ π mod 1. First of all, let us see from the present viewpoint what singles out the principal series (6.31) as "preferred" filling levels. Suppose the connection (6.36) is an actually Knizhnik-Zamolodchikov connection for SU(2) current algebra, with the level κ properly quantized in integral units [9,89]. One has the identification 80 q(θ) = −e ±2πi/(κ+2) , κ ∈ Z. (6.43) Taking the square root of the two sides of (6.42) one has q(θ) = e πiλ(θ) (6.44) and eq. (6.43) becomes ± 2 κ + 2 = 1 + θ π mod 2 = ± a b mod 2 (6.45) which has solutions a = 1 with κ even and a = 2 with b and κ odd. 
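Since eq. (6.31) is garbled in this extraction, we spell out the assumption used below: with a = 1, eq. (6.30) gives 1/ν = 2 ± 1/b, i.e. the standard FQHE principal series ν = b/(2b ± 1). The following snippet (our own check) enumerates these fractions and verifies that they have odd denominators and lie in the allowed window 1/3 ≤ ν ≤ 1, matching the "first case" discussed next.

```python
from fractions import Fraction

# Filling fractions from 1/nu = 2 ± a/b (eq. (6.30)) for the minimal torsion
# character a = 1 (the case "a = 1 with kappa even" singled out above).
principal = sorted({Fraction(b, 2 * b + s) for b in range(1, 7) for s in (+1, -1)})
print(principal)
# -> [1/3, 2/5, 3/7, 4/9, 5/11, 6/13, 6/11, 5/9, 4/7, 3/5, 2/3, 1]

# all have odd denominators and lie in the window 1/3 <= nu <= 1,
# i.e. 1 <= 1/nu <= 3, as required by 0 <= theta <= 2*pi in eq. (6.30)
assert all(f.denominator % 2 == 1 and Fraction(1, 3) <= f <= 1 for f in principal)
```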
The first case corresponds to the principal series with odd denominators, eq. (6.46). Since κ is even, it is natural to think of the principal series as related to SO(3) Chern-Simons rather than SU(2) Chern-Simons. This is the more natural solution, but there are others. (Footnote 80: the minus sign in this formula arises from the minus sign in the r.h.s. of eq. (4.118).) The second case yields filling fractions with denominators divisible by 4, namely ν = b/(2b ± 2) with b odd. (6.47) On the other hand, we may consider the opposite (and less natural) solution to eq. (6.42), q(θ) = −e^{πiλ(θ)}, (6.48) which implies 1 + 2/(κ + 2) = 1 + θ/π mod 2 = 1/ν mod 2, (6.49) that is, ν = m/(m + 2), m = κ + 2 ∈ N_{≥2}, (6.50) a series of filling fractions present in [1] which contains the values of ν corresponding to the Moore-Read [140] and the Read-Rezayi models [141]. There is yet another possibility, namely we may replace eq. (6.49) by a further variant.
Remark. Even if we do not know any compelling argument from the tt* side to require κ + 2 ∈ Z, this condition is certainly part of the definition of "good" Knizhnik-Zamolodchikov su(2) connections, and we are pretty willing to believe that it is a necessary condition for consistency. Thus we conjecture that the above list of filling fractions is complete as long as ν ≤ 1.
Non-abelian statistics (principal series)
From the point of view of section 2 of [1], the element σ_i^2 of the pure braid group for the principal series has two distinct eigenvalues, in correspondence with the two different fusion channels of the φ_{1,2} operator in the minimal (2n, 2n ± 1) Virasoro model. The ratio of the two eigenvalues is q(θ)^2 (cfr. eq. (6.38)). As discussed at the end of section 5.4, we have a unique preferred vacuum invariant under parallel transport by the UV Berry connection. We wish to identify it with the unique physical vacuum |vac⟩ of the FQHE quantum system when all details of the Hamiltonian H are taken into account, including the non-universal interaction H_int (that is, the true vacuum is a topologically trivial deformation of |Ω⟩). Among the states satisfying (6.54), |Ω⟩ is the most symmetric under permutations of the spin degrees of freedom. The most symmetric linear combination of the idempotents e_i is their sum 1 = e_1 + · · · + e_w. However, we have twists by signs, so we can conclude only that the preferred vacuum corresponds to an element ρ of the chiral ring of the form ±e_1 ± e_2 ± · · · ± e_w for some choice of signs. One has ρ^2 = 1, and in an ordinary LG model with a single-valued superpotential this would mean ρ = 1; that this applies to the present case is less obvious. In any case, |Ω⟩ = |ρ⟩ is the most symmetric vacuum. As long as the interaction H_int preserves the symmetry between the holes and the units of flux, it lifts the degeneracy while keeping the most symmetric state as the ground state. So it is natural to think of |Ω⟩ as the true ground state of the FQHE system.
Conclusions
In this paper we studied the supersymmetric quantum many-body system proposed by Vafa as a microscopic description of the Fractional Quantum Hall Effect from the perspective of tt* geometry. Although our arguments are not fully mathematically rigorous (and improvements are welcome), our "exact" methods lead to an elegant and coherent picture which agrees with physical considerations from several alternative viewpoints. In particular, they agree with and strengthen the results of [1]; they also strengthen the case for the 4-supercharge Vafa Hamiltonian to represent the correct universality class of the fundamental many-electron theory.
Indeed, we argued that any Hamiltonian describing the motion in a plane of many electrons coupled to a strong magnetic field is described (at the level of topological order) by Vafa's 4-susy model, independently of the details of the interactions between the electrons. It is remarkable that one can show that the electron filling fractions ν of any such quantum system should be rational numbers belonging to one of the series in section 6.7.1. Of course, this is a manifestation of the universal nature of topological quantum phases. It is well known that 3d Chern-Simons is a good effective description of the FQHE. From our present perspective this is quite obvious: the geometric structures we found (Kohno connections, Hecke algebras, and all that) are the essence of Chern-Simons theory. The nice aspect is that we started from the "obviously correct" quantum description of the FQHE systems in terms of the many-body Schroedinger equation describing N electrons coupled to a strong magnetic field and interacting among themselves in some "generic" way, and ended up with the Chern-Simons-like structure as an "exact" IR description.
41,767.6
2019-10-11T00:00:00.000
[ "Physics" ]
Building Information Modeling (BIM) Driven Carbon Emission Reduction Research: A 14-Year Bibliometric Analysis Governments across the world are taking actions to address the high carbon emissions associated with the construction industry, and to achieve the long-term goals of the Paris Agreement towards carbon neutrality. Although the ideal of the carbon-emission reduction in building projects is well acknowledged and generally accepted, it is proving more difficult to implement. The application of building information modeling (BIM) brings about new possibilities for reductions in carbon emissions within the context of sustainable buildings. At present, the studies on BIM associated with carbon emissions have concentrated on the design stage, with the topics focusing on resource efficiency (namely, building energy and carbon-emission calculators). However, the effect of BIM in reducing carbon emissions across the lifecycle phases of buildings is not well researched. Therefore, this paper aims to examine the relationship between BIM, carbon emissions, and sustainable buildings by reviewing and assessing the current state of the research hotspots, trends, and gaps in the field of BIM and carbon emissions, providing a reference for understanding the current body of knowledge, and helping to stimulate future research. This paper adopts the macroquantitative and microqualitative research methods of bibliometric analysis. The results show that, in green-building construction, building lifecycle assessments, sustainable materials, the building energy efficiency and design, and environmental-protection strategies are the five most popular research directions of BIM in the field of carbon emissions in sustainable buildings. Interestingly, China has shown a good practice of using BIM for carbon-emission reduction. Furthermore, the findings suggest that the current research in the field is focused on the design and construction stages, which indicates that the operational and demolition stages have greater potential for future research. The results also indicate the need for policy and technological drivers for the rapid development of BIM-driven carbon-emission reduction. Introduction The issue of global warming caused by carbon dioxide (CO 2 ) emissions is a concern for countries across the world [1]. The Paris Agreement, signed at the Paris Climate Conference in year 2015, led to a consensus among several countries to aim to limit the global average warming to 2 • C above the preindustrial levels, and to work towards limiting it to below 1.5 • C [2]. To achieve this goal, the Special Report on Global Warming of 1.5 • C from the Intergovernmental Panel on Climate Change (IPCC) emphasizes that, to limit warming to 1.5 • C, global carbon emissions need to be halved by year 2030, and a carbon-neutrality target needs to be achieved by mid-century [3]. The construction industry accounts for a significant proportion of the world's energy consumption and carbon emissions, of which the construction and operation of buildings represent 36% of the total final energy use, and nearly 40% of the greenhouse gas (GHG) emissions [4]. 
Therefore, reducing the energy consumption and carbon emissions of buildings has great significance for environmental protection.
Materials and Methods
This study adopted a mixed-research approach that comprised: (1) accessing relevant publications from the Web of Science (WoS) database; (2) exploring the relationship between BIM, carbon emissions, and sustainable buildings from both the macroquantitative and microqualitative perspectives. In the macroquantitative analysis, bibliometric methods were used to identify the research structures and themes. The research methodology of bibliometric analysis provides an objective and straightforward demonstration of the characteristics and trends of existing research, informing and inspiring more in-depth studies [20]. First, the results-analysis tools in the WoS database were used to analyze the number and sources of BIM publications in the field of sustainable-building carbon emissions in the form of bar and pie charts in order to obtain the current research progress and research trends. Second, the literature was visually analyzed for the keywords, taking advantage of the different features of the bibliometric visualization software from VOSviewer and CiteSpace. VOSviewer is a software tool for building and visualizing bibliometric networks. VOSviewer provides text-mining capabilities that can be used to build and visualize co-occurrence networks of important terms extracted from large volumes of scientific literature. In addition, VOSviewer's interface is easy to operate and more suitable for identifying and analyzing topic clusters [21]. Because the keywords of the literature can distil the core content of the literature, the keyword-co-occurrence-analysis method can well reflect the current academic research hotspots, knowledge structures, and development trends of some disciplines [22]. The keyword-co-occurrence method has been applied to many research fields, such as medicine, environmental science, and engineering [23][24][25][26]. Due to the large timespan of the literature search, VOSviewer's presentation of the research hotspots in the temporal dimension is not ideal, but CiteSpace's burst-detection-analysis function compensates for this shortcoming. Based on CiteSpace's keyword-burst detection, it is possible to identify the burst keywords for each year of the research period, as well as to better analyze the research hotspots and trends in the temporal dimension [27], thus combining the visual-measurement software of the literature in a macroanalysis. Due to the complexity of scientific development, the use of bibliometrics can only provide a general analysis of the laws of scientific development. To fill in the gaps in the macroanalysis, and based on the findings of the macroquantitative analysis and the purpose of this research paper, a microqualitative analysis was conducted. In the microanalysis, the publications searched in the WoS database were first reviewed, and the strongly relevant literature on BIM in the field of sustainable carbon emissions was identified. This was followed by the mapping of the strongly relevant literature to the building lifecycle, and a comprehensive analysis of the impact of BIM on building carbon emissions at all stages of the building lifecycle, to obtain further knowledge contributions. The quantity and quality of the data have a significant impact on the results of bibliometric visualization.
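Before turning to the data source, the keyword-co-occurrence step described above can be made concrete with a minimal sketch. The records and threshold below are hypothetical toy values (the actual study works with 524 records, 2434 keywords, and a threshold of 7); the point is only to show how occurrence counts and co-occurrence links of the kind visualized by VOSviewer are tabulated.

```python
from collections import Counter
from itertools import combinations

# Toy records standing in for author keywords parsed from a WoS export
# (hypothetical data, not the study's records).
records = [
    ["BIM", "carbon emissions", "life cycle assessment"],
    ["BIM", "green building", "energy efficiency"],
    ["green building", "carbon emissions", "policy"],
    ["BIM", "life cycle assessment", "embodied energy"],
]
MIN_OCCURRENCE = 2          # the paper uses 7 on the full 2434-keyword set

# keep only keywords that reach the minimum occurrence threshold
freq = Counter(kw for rec in records for kw in set(rec))
kept = {kw for kw, n in freq.items() if n >= MIN_OCCURRENCE}

# co-occurrence counts between the retained keywords (the VOSviewer "links")
cooc = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec) & kept), 2):
        cooc[(a, b)] += 1

print(freq.most_common())
print(cooc.most_common())
```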
To ensure the comprehensiveness and higher reliability of the research data, one of the largest and most important international databases, WoS, was used as the data source for this study [28]. Additionally, to ensure the quality of the articles, the WoS core collection was selected, for which the indexed results were set as journal articles and reviews. The search was based on topics, with keywords such as "BIM", "building information model*", "carbon emissions", "carbon trading", "carbon credits", "low carbon", "sustainable building", and "green building". The research-method flow is illustrated in Figure 1, and it comprises five steps: (1) a search by topic in the WoS core collection to collect data; (2) an analysis of the current status of the publications through bar charts and pie charts; (3) a keyword-co-occurrence analysis through VOSviewer software, including network visualization and high-frequency keywords; (4) keyword-burst detection through CiteSpace software; (5) a lifecycle-based analysis of the impact of BIM on carbon-emission reduction within the context of sustainable buildings.
Number of Publications and Sources of Publications in the Field of Building Information Modeling (BIM), Carbon Emissions, and Sustainable Buildings
The keyword searches, such as "BIM", "building information model*", "carbon emissions", "carbon trading", "carbon credits", "low carbon", "sustainable building", and "green building", generated a total of 524 results for the period from 2008 to 2021 (14 years), of which the first related study in the WoS database is from year 2008. As shown in Figure 2, the graph of the publication statistics shows that the number of publications on BIM associated with carbon emissions and sustainable buildings slowly increased from year 2008 (2 articles) to 2018 (48 articles). Between the years 2018 and 2020, the number of publications showed a rapid increase, of which the total 100 published articles in the year 2020 was more than twice the number of publications compared with 2018. Interestingly, almost the same number (102) of articles were published in year 2021 and 2020, which indicates that studies that investigate the relationship between BIM and carbon emissions for sustainable buildings have become an emerging scheme. The sources of the BIM publications in the area of carbon emissions within the context of sustainable buildings are shown in Figure 3. A total of 524 articles have been published in 150 journals, of which 50% of all the published articles are from 10 leading sources, which are: Sustainability, with 11.64% of all the published articles, followed by Journal of Cleaner Production (9.16%), Energy and Buildings (5.73%), Building and Environment (4.58%), Automation in Construction (4.20%), Sustainable Cities and Society (3.63%), Renewable Sustainable Energy Reviews (3.24%), Energies (3.05%), Building Research and Information (2.48%), and Buildings (2.29%).
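A quick back-of-the-envelope check of the reported journal shares: only the 524-article total and the percentages are taken from the text, while the per-journal article counts below are reconstructed from those percentages and are therefore approximate.

```python
# Back-computed journal counts from the reported shares (hypothetical reconstruction;
# only the 524-article total and the percentages come from the text).
total = 524
shares = {"Sustainability": 11.64, "Journal of Cleaner Production": 9.16,
          "Energy and Buildings": 5.73, "Building and Environment": 4.58,
          "Automation in Construction": 4.20, "Sustainable Cities and Society": 3.63,
          "Renewable Sustainable Energy Reviews": 3.24, "Energies": 3.05,
          "Building Research and Information": 2.48, "Buildings": 2.29}

counts = {j: round(total * p / 100) for j, p in shares.items()}
print(counts)                                           # e.g. Sustainability -> about 61
print(f"top-10 share: {sum(shares.values()):.1f}%")     # roughly half of all articles
```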
Results of Macroquantitative Analysis
The 524 articles were imported into the software VOSviewer (version 1.6.17) to generate a term diagram of the co-clustering for the keyword-co-occurrence analysis. A total of 2434 keywords were generated, for which the minimum threshold of keyword occurrence was set at 7, encompassing 112 keywords that met the threshold. They are displayed using network and overlay visualization. Figure 4 shows a network-visualization map of the keyword co-occurrence in the field of BIM and carbon emissions for sustainable buildings. In network visualization, there are text labels, circles, connections, and color areas. By default, the items are represented by the text labels and circles. The size of the circle indicates the frequency of the occurrence of a keyword. The larger the circle, the more frequently the keyword appears. The distance between the positions of two items and the thickness of the connecting lines represent the strength of their direct affinity [29]. In addition, the more lines there are, the more co-occurrence there is between the keywords. At the same time, the different colored areas represent different clusters, and this view allows each individual cluster to be viewed [30]. Based on the network-visualization map of the keyword co-occurrence, it is possible to analyze the knowledge structure and research hotspots of BIM in the field of carbon emissions in sustainable buildings.
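As an illustrative analogue of the clustering step behind Figure 4 (not the authors' actual VOSviewer pipeline, and with made-up keyword pairs and edge weights), one can build a weighted co-occurrence graph and group the keywords into communities by modularity maximization:

```python
import networkx as nx

# Hypothetical weighted co-occurrence edges (keyword, keyword, link strength)
edges = [("BIM", "green building", 9), ("BIM", "sustainability", 8),
         ("green building", "sustainability", 7),
         ("life cycle assessment", "carbon emissions", 10),
         ("carbon emissions", "carbon footprint", 6),
         ("life cycle assessment", "embodied energy", 5),
         ("energy efficiency", "design", 7), ("energy efficiency", "performance", 6)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# greedy modularity maximization as a stand-in for VOSviewer's clustering step
communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities, 1):
    print(f"cluster {i}: {sorted(c)}")
```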
Network Visualization of BIM in the Field of Carbon Emissions for Sustainable Buildings
Cluster 1 (red color), with the theme of sustainable-building construction, including "BIM", "construction", "sustainability", "green building", and "framework", indicates that close links have been established between BIM and green building or sustainable construction. Cluster 2 (yellow color) is associated with the building lifecycle assessment, including "life cycle assessment", "carbon emissions", "energy consumption", "embodied energy", and "environmental impacts". Cluster 3 (blue color) relates to sustainable-building materials, including "buildings", "systems", "efficiency", "concrete", and "cement". Cluster 4 (green color) is focused on the sustainable-building energy efficiency, with "design", "performance", "consumption", and "energy efficiency". Cluster 5 (purple color) is correlated to environmental strategies, including "energy", "green buildings", "policy", "emissions", "challenges", "governance", "sustainable development", and "climate change".
"BIM", "green building", and "sustainability" are very closely linked in Cluster 1 (red color). The distances between the keywords in the network-visualization map roughly indicate their relevance in the co-occurrence network, in which the closer the position of two items, the stronger their correlation in terms of the occurrence links in the group of publications analyzed. The terms "green building" and "sustainable building" are deemed as two terms with the same meaning [31]. The development of green buildings and standards are instrumental in achieving the national targets for reducing carbon emissions, for which BIM provides an important technical support to achieve the green-building concepts [32]. In addition, BIM contributes to the development of environmental-performance-evaluation systems for green buildings, accelerating the process of the building assessment [33][34][35], such as Leadership in Energy and Environmental Design (LEED), which assists designers to evaluate the sustainability solutions during the conceptual design stage [36][37][38][39]. Furthermore, the role of BIM in supporting the carbon-emission calculation for green buildings greatly promotes the implementation and development of energy-conservation and emission-reduction policies [40]. It is worth focusing on the fact that, although the concept of sustainable buildings has been proposed for a long time, it has mainly been promoted and applied in developed countries, while the promotion in developing countries still faces problems associated with technology, funding, and energy [41], which encounter challenges similar to the current implementation of BIM. In Cluster 2 (yellow color), there is a strong link between "Life Cycle Assessment (LCA)", "carbon emissions", and "carbon footprint". To enhance and measure the sustainability performance in the construction industry, the LCA can be used to assess the environmental impacts of buildings throughout all the lifecycle stages [42]. The integrated tool based on BIM and LCA can calculate carbon emissions during the lifecycle of a building, and it can thus help to set carbon-emission targets and sustainable policies [18,[43][44][45]].
In addition, the BIM-enhanced LCA system can be used to quickly assess building design solutions and enable sustainable decision making [43,46,47]. Furthermore, LCA-based BIM can support the selection of sustainable products and materials during the building design and construction stages, thereby improving the sustainability of the building [48].

In Cluster 3 (blue color), "concrete", "cement", and "buildings" are very closely associated with each other. The choice of building material has a significant impact on the building performance [49]. Concrete and its main constituent, cement, are the main building materials in the world, but the production of cement produces large amounts of CO2, which is not conducive to sustainable development in the construction industry [50][51][52]. With the development of new technologies and the invention of new materials, there are opportunities to reduce the carbon emissions from cement [53,54]. Prefabricated houses that are built with innovative prefabrication technologies and that are facilitated by BIM have been shown to reduce the carbon footprint of concrete and alleviate the environmental pressure caused by construction [55]. In addition, BIM-assisted analyses of building materials, such as concrete, steel, and cement, provide evidence to inform the development of policies and regulations to reduce the energy consumption and emissions [56,57].

In Cluster 4 (green color), energy efficiency has a close relation to keywords such as "performance", "design", "modelling", and "simulation". The energy-performance and efficiency simulation for a building using data from BIM can help to derive sustainable and sound decisions at the design stage, resulting in reduced energy consumption and carbon emissions [9,[58][59][60][61][62]]. The collaboration between BIM and other energy-design software can also yield key information for improving the energy efficiency [63]. In addition, BIM-based technologies can be used to evaluate the existing building solutions or retrofit solutions, thereby facilitating the development of net-zero-energy buildings (NZEBs) [64][65][66][67]. In addition to the design, the reliance on BIM technology aids in addressing the building energy efficiency in the operational stage, which contributes to the carbon-emission targets [68,69]. Although BIM offers new opportunities to improve the building energy efficiency and minimize carbon emissions, it has been argued that there is a need to upskill stakeholders, such as construction workers, through proper BIM education to meet the demands of the digital transformation of the construction industry [70].

The keywords (e.g., "energy", "green buildings", "policy", "emissions", "challenges", and "governance") are strongly connected in Cluster 5 (purple color). Although green buildings are more expensive to build upfront, they reduce energy consumption and improve water efficiency, which saves operating costs and reduces the carbon footprint [71]. Green building has been considered one of the least costly approaches to mitigate climate change [72]. Different countries around the world have developed a variety of policies and measures to address the challenges of green-building development [45,[73][74][75][76]]. Although policy incentives can promote the use of green-building technology (GBT) in the building industry, the effect of multiple policies on reducing the carbon emissions from urban buildings is not the same as the associated effect of individual policies [72,77].
Assessing the long-term environmental benefits of multiple policies is essential for policy improvement and prioritization [78]. With regard to climate change, market-based policy mechanisms, such as carbon taxes and carbon trading, help to achieve the carbon-emission targets while stabilizing industrial production [73].

Table 1 shows the top-10 keywords with co-occurrence strengths in the published articles regarding BIM and carbon emissions for sustainable buildings from 2008 to 2021 (14 years), which are: "BIM"; "life cycle assessment"; "green buildings"; "design"; "construction"; "carbon emissions"; "performance"; "sustainability"; "energy"; "residential buildings". Keywords with higher connection strengths and frequencies have more influence. Design and construction are stages of the building lifecycle, indicating that the current focus of BIM in the field of carbon emissions for sustainable buildings is on the early stages of the lifecycle: design and construction. "Life cycle assessment" is a significant node in Figure 4, which is shown in detail in Figure 5, with strong links to "BIM", "design", "construction", "green building", "energy", "carbon emissions", and "performance", which suggests that the LCA is already well integrated in the field of sustainable design and construction. Design and construction are the focuses of the studies on the lifecycle, and they play significant roles in influencing the carbon emissions in sustainable buildings.

Interestingly, "China" is the only keyword representing a country that appears as a significant node in the network-visualization map (Figure 4), which is further highlighted in detail in Figure 6. The keywords closely associated with China are "life cycle assessment", "carbon emissions", "green building", "BIM", "design", and "construction", which indicates that China has a certain influence in the field of BIM, carbon emissions, and sustainable buildings. China has been the world's largest carbon emitter since 2006 [79], and it has announced that it intends to reduce its carbon emissions per unit of gross domestic product (GDP) by 60-65% from the 2005 level by 2030, and to achieve carbon peaking around 2030 [80]. The construction industry in China generates approximately 30% of the total carbon emissions, and it is therefore a key industry for achieving the national strategy targets on carbon emissions [81].
In addition, China has been actively promoting economic instruments, such as carbon-emission trading and carbon taxes, to promote the sustainable development of the construction industry [73,82]. In order to promote the adoption of GBT in construction, the Chinese government has introduced a series of industry standards and environmental-protection acts that support the development of green buildings and BIM [81,83,84]. In this context, green and sustainable buildings and BIM have significant market potential in China [17,85].

CiteSpace Keyword-Burst Detection

Burst-detection analysis identifies and explores the frontiers of the research and the latest trends in a particular field. Burst detection in CiteSpace is mainly based on Kleinberg's algorithm [86], which identifies time periods in which a target trend is uncharacteristically frequent. A burst keyword is a keyword that suddenly increases in the number of references or occurrences within a certain period. Its basic principle is to identify hot words based on the growth rates of the frequencies of their occurrences. The time-dependent nature of these keywords is often considered to be at the forefront of the research in a particular field [87]. The detection of bursting keywords allows for obtaining research hotspots and predicting future research trends. The top-20 bursting keywords were obtained through CiteSpace. Figure 7 shows the top-20 keywords with the strongest citation bursts, in which "Keywords" represents the burst terms; "Year" represents the start of the analysis, which spans 2008-2021; "Strength" represents the intensity of the burst; "Begin" represents the year of the start of the keyword burst; "End" represents the year of the end of the burst; and the red line represents the duration of the burst [88]. A simplified sketch of this burst-scoring idea is given below.
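The following toy sketch illustrates the general idea of flagging years in which a keyword is uncharacteristically frequent; it uses a simple ratio-based score rather than Kleinberg's two-state automaton implemented in CiteSpace, and all counts are invented for illustration:

```python
# Hypothetical yearly occurrence counts per keyword for 2008-2021.
yearly_counts = {
    "simulation": [0, 0, 1, 1, 2, 2, 2, 3, 9, 8, 3, 2, 2, 2],
    "leed":       [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 3, 8],
}
years = list(range(2008, 2022))

def burst_windows(counts, factor=3.0):
    """Flag years whose count exceeds `factor` times the keyword's mean yearly rate."""
    baseline = max(sum(counts) / len(counts), 1e-9)
    return [years[i] for i, c in enumerate(counts) if c > factor * baseline]

for kw, counts in yearly_counts.items():
    print(kw, "bursts in", burst_windows(counts))
```

A real burst score additionally weighs how long the elevated state persists, which is why durations such as the four-year burst of "sustainable building" are reported alongside the strength.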
As shown in Figure 7, "sustainable building" and "optimization" became research hotspots in 2012, and this lasted for four years, which is the longest duration of all the bursting words. Sustainable buildings have huge potential to reduce the greenhouse effect [89]. Rising energy costs and the need for greater energy efficiency have raised the public awareness of the need to reduce the energy consumption throughout the lifecycles of buildings, and they have led to efforts to integrate green- and sustainable-building initiatives into the traditional building design, construction, and operation processes to optimize buildings towards sustainability [39]. "Simulation" was a research hotspot in 2016, and it is the strongest (strength: 3.84) of the bursting keywords. Advances in information technology have enabled BIM to become an energy-simulation tool for early integrated building design [90]. In addition, the bursting keywords in 2021 are "LEED" and "circular economy", which are also current research hotspots in the sustainable-building sector. The growing demand for sustainable buildings over the past few years has resulted in several countries establishing their own green-building rating systems. LEED is the most widely used building-assessment system worldwide, and it is used in several countries/regions, including the United States, Canada, Brazil, Mexico, India, and China [38,91]. The current research shows that the integration of BIM with LEED can speed up the LEED-certification process by assessing the sustainability of buildings at the design stage [37].
Due to the increasing demand for buildings to be environmentally friendly, some public-sector buildings are being required to use BIM from the design stage, which has led to an increasing number of building projects requiring both BIM and LEED [92]. The circular economy (CE) has great potential in the construction industry, which currently has an unmatched impact on the environment with respect to the requirements of sustainable development [93]. The bursting keywords emerged in recent years for relatively short periods of time, which indicates that the hotspots of research on BIM in the field of carbon emissions for sustainable buildings have changed rapidly over time [94]. Studies have focused on climate change since 2011, after which sustainable buildings gained attention. In recent years, "government", "cities", "tools", "LEED", and "circular economy" have become hotspots, indicating that the management of carbon emissions in sustainable buildings has gradually improved and that government policy plays an important role.

Microqualitative Analysis

In order to analyze the specific role of BIM in sustainable buildings in relation to carbon emissions, 81 articles closely related to BIM, sustainable buildings, and carbon emissions were selected for a microqualitative analysis from the 524 publications in the macroanalysis, and the articles were mapped across the building lifecycle stages, including the design, construction, operation, demolition, and full lifecycle. The selection of strongly relevant literature was based on the following criteria: first, the articles dealt with the three knowledge concepts of BIM, sustainable buildings, and carbon emissions; second, the articles could be mapped to the stages of the building lifecycle for analysis. As shown in Table 2, the mixed-research method [95][96][97][98][99] is mainly used in the design stage, followed by case analysis [9,60,100,101] and modeling [35,102,103], to explore the BIM decision-making model for reducing carbon emissions in sustainable buildings. The environmental impact associated with the design stage is up to 70% of the whole impact throughout the building lifecycle phases [104]. The integration of BIM with decision-making tools helps to address the difficulties of making sustainable-material decisions early in the design process [105]. BIM-assisted multiple-criteria decision making (MCDM) allows for an analysis of the key factors that affect the carbon emissions and energy efficiency in sustainable buildings [96], in which the MCDM allows alternatives to be evaluated and optimal decisions to be made [97,106]. By implementing BIM and LCA with a database, the environmental impacts of the design solutions can be measured at an early stage [103], which allows for a faster and more accurate quantification and assessment of the environmental impacts of different building materials for selecting the most sustainable building materials at the design stage [69,107]. A BIM-based approach to building design optimization can help with the tradeoff between the lifecycle cost (LCC) and lifecycle carbon emissions (LCCEs) of a building design, which aids designers to provide cost-effective and environmentally friendly design solutions [108]. In addition, the BIM-assisted LEED-certification system provides a framework to calculate the points that are earned at the concept stage for automatically assessing the sustainability of the building [37].
The integration of BIM and the LEED-certification process at the conceptual design stage also allows the LEED-certification points, and the associated registration costs for green and certified materials, to be compiled automatically. In terms of energy use, the integration of BIM with energy-modeling packages enriches the energy analysis of the building, which leads to significant energy cost savings and reductions in electricity and carbon [9,61,63,95,101]. In addition, BIM enhances the quantitative assessment of the embodied carbon emissions, and it optimizes the design at the building-element (BE) level, enabling low-carbon design concepts to really take hold [99]. The use of BIM offers various opportunities for integration with systems of building analysis and decision making [109], which provides building designers with faster and more accurate approaches for design decision making that have a positive impact on the building carbon emissions and the sustainability assessment of the building [64].

Table 2. The role of BIM in the design stage in terms of carbon emissions (generated by the authors).
Source | Year | Research Method | Research Topic
[38] | 2020 | Model and case study | Building costs and energy efficiency
Galiano-Garrigos [60] | 2019 | Case study | Energy-performance and carbon-footprint assessment
Chen, SY [65] | 2019 | Model and case study | Net-zero-energy buildings (NZEBs)
Carvalho [100] | 2019 | Case study | BSA
Tushar [101] | 2019 | Case study | Energy-consumption optimization
Najjar [69] | 2019 | Model and case study | Integration of BIM and LCA
Singh [62] | 2019 | Case study | Building energy assessment
Eleftheriadis [111] | 2018 | Model and case study | Structural design optimization
Lee [61] | 2018 | Case study | Green BIM
Eleftheriadis [102] | 2018 | Modeling | Relationship between structural costs and carbon emissions
Akcay et al. [112] | 2017 | Model and case study | BIM and LEED integration
Chen et al. [106] | 2016 | Model and case study | BIM and MCDM integration
Liu et al. [108] | 2015 | Case study | The tradeoff between lifecycle cost (LCC) and lifecycle carbon emissions (LCCEs)
Jalaei, F et al. [36] | 2015 | Model and case study | BIM and LEED integration
Cemesova et al. [63] | 2015 | Case study | BIM and building-performance-simulation (BPS) integration
Jun et al. [35] | 2015 | Modeling | Green BIM template (GBT)
Jrade, A et al. [103] | 2013 | Modeling | Integration of BIM and LCA
Bank et al. [105] | 2011 | Modeling | BIM and system-dynamics (SD) integration

As shown in Table 3, the main research methods used in the construction phase of BIM-aided building carbon emissions are modeling [64,[113][114][115]] and case studies [116][117][118]. The refurbishment of buildings receives attention at this stage. During the construction or renovation of a building, a large amount of energy and materials is used, and a large amount of carbon is produced [119]. Based on BIM, the carbon emissions during construction can be effectively assessed, which provides a good reference for the selection of low-carbon-emission materials. In addition, the longer the distance of the delivery of the construction material to the construction site, the greater the carbon emissions [114]. The integrated framework based on BIM and the web-map service (BIM-WMS) facilitates the selection of the construction-material suppliers and the planning of the material-transport routes [115]; a toy version of this emission-based supplier comparison is sketched below. Interestingly, refurbishment is effective at achieving a better energy performance to reduce carbon emissions [117]. A scan-to-BIM approach was used to assess the feasibility of retrofitting options for existing buildings [64].
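The supplier-and-route comparison mentioned above can be illustrated with a toy calculation; the distances, loads, and truck emission factor are assumptions, not data from the cited BIM-WMS framework:

```python
TRUCK_FACTOR = 0.1  # kgCO2e per tonne-km, illustrative only

suppliers = {           # supplier -> (distance to site in km, load in tonnes)
    "plant_north": (35.0, 120.0),
    "plant_east":  (80.0, 120.0),
    "plant_south": (55.0, 120.0),
}

def transport_emissions(distance_km: float, load_t: float) -> float:
    """Transport CO2e = distance x load x emission factor."""
    return distance_km * load_t * TRUCK_FACTOR

best = min(suppliers, key=lambda s: transport_emissions(*suppliers[s]))
for name, (d, m) in suppliers.items():
    print(f"{name}: {transport_emissions(d, m):.0f} kgCO2e")
print("lowest-emission supplier:", best)
```

In a BIM-WMS workflow the distances would come from the map service and the loads from the model's material quantities, but the selection logic is essentially this comparison.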
Moreover, based on the BIM integration with energy-modeling software, the building energy performance can be optimized to determine the most energy-efficient and cost-effective strategy for the building renovation [117,118,120,121]. However, challenges remain in BIM for building renovations and retrofitting, with the data capture being the first and most critical issue for renovation projects [122]. With the vast number of metadata available from BIM models, the analysis of such building data using artificial-intelligence (AI) techniques offers new options for decision making based on continuous system learning [113]. This is important for deep refurbishment projects to improve the energy efficiency of buildings towards NZEBs.

As shown in Table 4, the research themes during the operations phase focus on the performance of the building operations and the tradeoffs between the embodied and operational energy, with modeling [44,68,116,126] and case studies [127][128][129] as the main research methods. The operation and maintenance stages take the longest and cost the most to the project owner compared with the other building lifecycle phases [130]. Studies have shown that the operating energy (OE) accounts for the major share of a building's lifecycle energy use, followed by the embodied energy (EE), while other stages of the lifecycle consume less energy [131]. The increase in the global energy use demonstrates the urgent need to effectively and comprehensively reduce the energy and carbon footprints of buildings [132]. Reducing the OE could increase the EE, demonstrating the importance of the tradeoff between the OE and EE [133]. A BIM-driven design process can efficiently address the tradeoffs between embodied and operational energy [127,134]. In addition, the use of BIM to assist the energy efficiency during the operation and maintenance phases of a project facilitates bridging the gap between the predicted and actual energy consumption of a building, which contributes to the goal of reducing the carbon footprint of the building [68].

Table 4. The role of BIM in the operation and maintenance stages in terms of carbon emissions (generated by the authors).
Source | Year | Research Method | Research Topic
Venkatraj [134] | 2020 | Mixed | Tradeoffs between embodied and operational energy
Cheng [46] | 2020 | Case study | Integration of BIM and LCA
Piselli [116] | 2020 | Case study | Application of facility energy management
Chen [126] | 2019 | Case study | Workflow design
[129] | 2013 | Model and case study | Energy-efficient building operations

As shown in Table 5, the impact of BIM on the carbon emissions during the building demolition phase has been less studied than in the other building lifecycle phases. The main themes of the studies focus on the greenhouse gases generated by construction waste, and on the management and disposal of construction waste. The research methods for the studies in this phase are case studies [135], modeling [136], and reviews [50,137]. Based on BIM, it is possible to assess the carbon emissions of the building in the demolition phase during the building lifecycle, with the site-treatment phase being the largest contributor to the carbon emissions of the demolition phase [138]. Because the construction and demolition waste (CDW) end-of-life disposal process is a source of GHG emissions, the BIM-based quantification of CDW GHG emissions can lead to targeted GHG-reduction measures [136]; a minimal version of such a quantification is sketched below.
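A minimal sketch of such a CDW GHG quantification is shown below; the waste quantities, disposal factors, and recycling credits are illustrative assumptions rather than values from the cited studies:

```python
# kgCO2e per tonne of waste sent to end-of-life treatment (hypothetical factors).
DISPOSAL_FACTORS = {"concrete": 9.0, "asphalt": 35.0, "timber": 60.0}
# Fraction of disposal emissions avoided when a material stream is recycled (hypothetical).
RECYCLING_CREDIT = {"concrete": 0.5, "asphalt": 0.3, "timber": 0.2}

def cdw_emissions(waste_t: dict, recycled_share: float = 0.0) -> float:
    """Sum disposal emissions over waste streams, reduced by the recycling credit."""
    total = 0.0
    for material, tonnes in waste_t.items():
        credit = RECYCLING_CREDIT[material] * recycled_share
        total += tonnes * DISPOSAL_FACTORS[material] * (1.0 - credit)
    return total

waste = {"concrete": 800.0, "asphalt": 120.0, "timber": 40.0}  # hypothetical BIM takeoff
print("no recycling:", cdw_emissions(waste), "kgCO2e")
print("60% recycled:", cdw_emissions(waste, recycled_share=0.6), "kgCO2e")
```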
In addition, the application of BIM can increase the recovery rate of CDW to achieve sustainable waste management [135]. Concrete has the highest emissions among the large amount of CDW, and asphalt has the highest CO2-emission capacity [136]. Furthermore, if concrete is recycled and reused at the end of its lifecycle, then the lifecycle GHG emissions can be reduced in the end-of-life phase [50].

Table 5. The role of BIM in the demolition stage in terms of carbon emissions (generated by the authors).
Source | Year | Research Method | Research Topic
Shi [135] | 2021 | Model and case study | Construction and demolition waste disposal technology
Li [137] | 2020 | Review | Construction and demolition waste management
Xu [136] | 2019 | Modeling | Greenhouse gas (GHG) emissions
Wang [138] | 2018 | Case study | Integration of BIM and LCA
Wu [50] | 2014 | Review | GHG emissions from concrete

As shown in Table 6, during the whole lifecycle, studies focus on the impact of BIM on the overall carbon emissions of sustainable buildings, and on the means of using the latest technology to assist in reducing the carbon emissions of buildings. This stage mainly adopts mixed-research methods [56,[139][140][141][142][143][144]], followed by modeling [40,51,[145][146][147]] and review methods [13,16,42,148,149]. BIM has been employed with green-building concepts (green BIM), which acts as a model-based process that generates and manages coherent building data throughout the project lifecycle to improve the building energy-efficiency performance and contribute to the achievement of the sustainable development goals [13]. Green BIM has a strong capability for holistic BIM-based green-building analysis, where the energy modeling and analysis can have a significant impact on determining the building performance in terms of the carbon emissions, energy use, sustainable-material selection, and cost savings [16]. BIM-based analyses of the energy consumption corresponding to different orientations reveal that a well-oriented building can save significant amounts of energy throughout its lifecycle [150]. 5D BIM models allow for optimal decisions to be made regarding the appropriate energy and cost-effective envelope components [145]. Associating BIM modeling with LCA is the best procedure for achieving sustainable development and environmental protection, and for empowering the decision-making process in the building sector [15]. The approach can be used to determine which building elements are significant in the LCA. Furthermore, the integration of BIM and LCA allows for the assessment of the total carbon footprint of a building throughout its lifecycle, and it aids in completing the optimization of the greenhouse gas emissions throughout the lifecycle [4,18,51], for which the LCA is integrated with the BSA at an early stage of the project, based on the BIM approach. As such, designers can quickly assess the environmental impacts of their buildings while conducting concise sustainability assessments with few resources, addressing all the sustainability issues [141].

Table 6. The role of BIM across whole lifecycle stages in terms of carbon emissions (generated by the authors).
Source | Year | Research Method | Research Topic
Gardezi [139] | 2021 | Model and case study | The relationship between physical characteristics and carbon footprint
Marzouk [151] | 2021 | Interviews | BIM and green-building assessment
Kurian [51] | 2021 | Modeling | Building carbon-footprint estimation
Li [56] | 2021 | Model and case study | Assembled concrete buildings
Figueiredo [140] | 2021 | Model and case study | Sustainable-material selection
Shukra [16] | 2021 | Review | Holistic green BIM
Carvalho [141] | 2020 | Model and case study | Integration of BIM and LCA
Fokaides [142] | 2020 | Mixed | Intelligent buildings
Dalla Mora [42] | 2020 | Review | Integration of BIM and LCA
Kaewunruen [143] | 2020 | Model and case study | Whole-life costs and carbon emissions
Wen [144] | 2020 | Mixed | BIM and green-building assessment
Montiel-Santiago [152] | 2020 | Model and case study | Sustainability and energy efficiency
Pucko [145] | 2020 | Modeling | Building envelope
Wang [4] | 2020 | Model and case study | Integration of BIM and LCA
Palumbo [48] | 2020 | Model and case study | Integration of BIM and LCA
Lu [18] | 2019 | Model and case study | Integration of BIM and LCA
Muller [148] | 2019 | Review | Interoperability of BIM
Petrova [146] | 2019 | Modeling | Data-driven sustainable design
Yang [45] | 2018 | Model and case study | Integration of BIM and LCA
Gan [153] | 2018 | Model and case study | A holistic BIM framework for low-carbon design
Marzouk [154] | 2017 | Model and case study | GHG calculations
Xie [40] | 2017 | Modeling | BIM and carbon calculations
Najjar [15] | 2017 | Model and case study | Integration of BIM and LCA
GhaffarianHoseini [149] | 2017 | Review | Postconstruction-energy-efficiency testing
Lu [57] | 2017 | Model and case study | Integration of BIM and LCA
Peng [155] | 2016 | Model and case study | BIM and carbon calculations
Abanda [150] | 2016 | Model and case study | The effect of the building orientation on the building energy consumption
Wong [13] | 2015 | Review | Green BIM
Lee [147] | 2015 | Modeling | BIM green template

As shown in Figure 8, research related to BIM around sustainable-building carbon emissions is concentrated in the full-building lifecycle stage and the design stage, with the design and construction stages accounting for 48% of the overall microanalysis publications, and with only 16% of the publications focusing on the operational and demolition stages of the building. Although the nD capabilities of BIM make it potentially applicable throughout the full building lifecycle, designers and contractors are primarily concerned with the application of BIM in the design- and construction-management stages.
The capabilities of BIM are not well utilized in the operations or in the demolition stages [149]. From an implementation-based value perspective, implementing BIM shows decreasing value throughout the lifecycle of a building [144]. Perhaps the wealth of the metadata that are available from BIM models opens up new possibilities for analyses at all stages of the building lifecycle [113]. The integration of BIM with a building management system (BMS) has the potential to improve the sustainability of the postconstruction phase of a building, which can go a long way to remedy the current deficiencies in the application of BIM [149]. In general, the integration of BIM and carbon-emission-enhanced frameworks will play a more important role in sustainability in the digitalization of the construction industry, in which an effective decision-making framework will help to achieve the goal of reducing carbon emissions [151].

BIM Research Hotspots and Development Trends on Carbon Emissions within the Context of Sustainable Buildings

BIM is in a process of rapid development to facilitate strategies for carbon-emission reduction. This is mainly due to both policy and technical reasons. On the policy side, several countries/regions have taken effective measures to curb the carbon emissions that emanate from buildings [156], in which Chinese researchers have been very active and have achieved significant results, as indicated in the results of Section 3.2.1. It is interesting to note that, of the 524 publications included in the macroanalysis, China contributed 168, or about 32%, of the total number of publications. To achieve environmental sustainability, China's 12th and 13th Five-Year Plans have given significant impetus to the development of green buildings [157,158]. To achieve the goals of "carbon neutrality" and "carbon peaking", China has introduced various legal measures and financial incentives in the 14th Five-Year Plan, which covers the years from 2021 to 2025, to improve the energy efficiency in high-carbon sectors using digital technology [159]. Studies have shown that BIM is becoming a key tool for the sustainable transformation of China's construction industry [160,161]. China is therefore promoting the development of BIM in the field of sustainable buildings and carbon-emission reduction in many ways, including academic, policy, and technical.

The results of the burst-word detection suggest that LEED and the CE were the hot policy topics that past studies were concerned with. There are currently a large number of LEED-registered and certified building projects internationally [162]. As green-building-assessment standards are crucial to the sustainable development of buildings, in recent years there has been more and more attention paid to, and studies on, green-building-assessment tools [163]. There are studies that claim that LEED buildings contribute to reduced energy use, reduced carbon emissions, and greater human health benefits [164][165][166]. However, LEED requires appropriate monitoring and reporting mechanisms to ensure that it achieves its intended design level. Otherwise, the actual performance of the building may not be as expected [167]. BIM-based capabilities can simplify the LEED-certification process and save time and resources [168]. The integration of BIM and value engineering can measure the relationship between the construction cost and energy saving while obtaining a LEED-compliant and cost-effective design solution [38].
Building owners and designers can benefit from the integration of BIM and LEED-certification systems to enable a sustainability assessment of the building process [37]. The CE and circular construction (CC) have received increasing attention over the past decade [169]. Research on the application of CE principles to the construction industry has grown in recent years [170]. The CE principles promote the maximum reuse of materials and minimize waste generation, leading to environmental and economic benefits [171,172]. Therefore, the CE assists in saving resources, and in reducing carbon footprints, the risk of the material supply, and price fluctuations [107,172]. Based on the accumulation of the building lifecycle information, BIM can provide effective decision making in the design phase of buildings based on the principles of the CE [173]. In the demolition stage of buildings, the use of BIM to reduce the construction waste has attracted more attention, which is instrumental for reducing carbon emissions and implementing the CE principles in buildings [174].

In terms of technology, the results of this research indicate that the development of LCA technology plays an important role in the impact of BIM on carbon emissions in sustainable buildings. BIM offers opportunities to improve the data transparency and compliance checks, as well as to automate LCA assessments [175]. The integration of BIM with LCA contributes to the advancement of the CE, as it enables the assessment of the entire lifecycle [107], which is a growing trend [176]. Although BIM and LCA integration can significantly reduce a building's energy use and carbon footprint during the design phase, the potential of BIM and LCA for building carbon emissions in the other lifecycle phases has not yet been fully exploited. In the future, the integration of BIM and LCA is expected to play a more important role in carbon quantification and mitigation [43]. Furthermore, BIM is the first step towards Industry 4.0, of which digital twins and virtual reality are key elements [177]. BIM has been treated as a type of digital twin that offers the opportunity to integrate the physical and digital worlds, which greatly contributes to solving the industry's challenges. As a result, over the past few years, researchers have been applying digital twins to solve industry problems, including those in the construction industry [178]. Although digital twins have been implemented in construction, most of the attention has been focused on the design and construction phases, neglecting the demolition and restoration phases [178]. In addition, BIM and various XR technologies (VR, AR, and MR) have shown great potential to change the way that the AEC industry designs, builds, operates, and monitors [179]. The metaverse based on the integration of XR and BIM enables remote collaboration among project partners [180]. Against the background of the coronavirus pandemic, the establishment of a building metaverse will pave the way for new opportunities in managing the carbon emissions of buildings. Furthermore, building intelligence is one of the future trends in the construction industry, a shift that requires the design, monitoring, and control of the data and information related to the energy assessment of the built environment through technologies such as BIM and digital twins.
Thus, in the future, the digitization of the building sector will contribute significantly to the achievement of building energy, carbon-emission, and environmental-performance goals [141].

Challenges for the Use of BIM in Managing Carbon Emissions for Sustainable Buildings

The results of the microqualitative analysis indicate that BIM contributes to carbon-emission reduction throughout the lifecycle of a building, but the current studies mainly focus on the practice in the design phase, with less attention paid to the operation and demolition phases. This could be associated with the green-building-rating dilemma of carbon emissions, in which most certified green buildings are only effective in the design phase and perform unreliably in the operational phase [144]. There is a consensus in the literature that appropriate design decisions at the initial stage of a building project have a noticeable impact on the sustainability of a building, including the reduction in carbon emissions. Several studies report that holistic approaches through BIM facilitate the design of sustainable buildings, in which the integration of BIM with LCA or LEED promotes the selection of sustainable building materials and improves the building sustainability to reduce carbon emissions [37,69,107]. However, interoperability is still the biggest challenge to BIM, and the integration of LCA with BIM suffers from a lack of data and difficulties in data exchange [37,47,107]. In addition, most BIM-based LCA studies to date have focused on one-off assessments, rather than on iterative assessments in the building design process for sustainable decision making [181]. LEED certification is costly even with BIM integration, and fewer and fewer owners are actively seeking it. The energy-performance tools that are part of the BIM paradigm can help with sustainable building decisions [37]. However, the tools used can influence the results and thereby produce poor decisions regarding carbon emissions [60]. During the construction phase, the BIM-based assessment of the carbon emissions provides a reliable reference for the selection of low-carbon-emitting materials [114]. Interestingly, refurbished buildings are more energy efficient, environmentally friendly, and cheaper than new buildings [182]. Although BIM has been widely used in new buildings, it is still in its infancy in building-refurbishment projects [119]. The assessment of the options for building refurbishment in terms of carbon emissions needs to consider the energy-efficiency benefits of the building, the disruption caused to occupants, and the costs involved in the renovation process [117]. Studies have shown that the operations and maintenance phases have the most important role in the reduction in greenhouse gas emissions throughout the lifecycle of a building [44]. Currently, only a few studies have addressed BIM methods for improving the building energy efficiency and reducing carbon emissions during the operations phase. However, energy simulations performed in BIM software, such as Autodesk Revit, may not provide accurate results, as the simulation may not capture some of the data of the building, such as the heat-transfer pathways of the building [140]. In terms of the demolition phase, the studies are concerned with the disposal of construction-waste debris. Adopting sustainable deconstruction strategies, such as reuse and recycling, can also result in economic, energy, and carbon savings [148].
Additionally, the demolition phase is an increasing contributor to global greenhouse gases due to continued industrialization and urbanization [183]. However, the carbon emissions previously generated in construction and demolition waste have been largely ignored [138]. Furthermore, there are increasing concerns about the carbon emissions generated by the on-site collection and sorting during the recycling of demolition waste [138]. The assimilation of all the professional models and integrated facility data with BIM is very complex, and it is difficult to achieve data interoperability, as different BIM models from various disciplines are built via a variety of BIM software [143]. As such, the main challenges at present are the usability and model complexity of the BIM software specified for carbon emissions in green buildings, and the lack of interoperability across the BIM packages. Additionally, BIM faces the challenge of penetration and learning costs [144]. From an economic perspective, BIM's high initial investment costs and unpredictable returns have also hindered its development [184]. The lack of senior management attention also has a significant impact on the use of BIM in sustainable construction [185]. Building information models are considered to be key components in future construction practice, in which their benefits for productivity and reliability are widely acknowledged. It is becoming increasingly important to use tangible performance data early in the design phase to influence the decisions and prevent errors regarding carbon emissions. Reusing existing BIM data repositories and operational building data can enable data-driven databases for sustainable building designs that aim to reduce carbon emissions [146].

Conclusions

This paper adopts a mixed-research method to quantitatively and qualitatively explore the current status, hotspots, challenges, and research trends in the application of BIM in the field of sustainable buildings that target carbon emissions. The research employed a bibliometric approach to obtain relevant studies by querying the WoS database with keywords, and it visualized the relationship between BIM, carbon emissions, and sustainable buildings through keyword co-occurrence via VOSviewer software, including network visualization and high-frequency keyword analysis, and citation-burst detection via CiteSpace software for a macroanalysis from 2008 to 2021, followed by a microanalysis on the role of BIM in reducing carbon emissions during the sustainable-building lifecycle stages. The main contributions of this paper are as follows: (1) this is the first attempt to explore the relationship between BIM, carbon emissions, and sustainable buildings for the last 14 years (from 2008 to 2021) using bibliometric analysis; (2) this is the first use of a visualization software tool to analyze the trends, research hotspots, and applications of BIM to support carbon-emission reduction within the context of sustainable buildings; (3) compared with the existing studies, this paper presents a comprehensive analysis of the impact of BIM on reducing carbon emissions across the design, construction, operation, and demolition stages of buildings, in which the current research status in the field of each lifecycle stage was explored and critically analyzed.
Sustainable construction, building lifecycle assessment, sustainable building materials, energy efficiency, and environmental strategies are the five most popular research directions for BIM-enabled carbon-emission reduction. Moreover, this paper examined the policy and technology reasons for the rapid development of BIM for the reduction in carbon emissions for sustainable buildings. To meet the goals of the Paris Agreement, many countries have adopted policies and measures to achieve carbon neutrality in sustainable buildings by setting legislation, industry standards, carbon taxes, and carbon trading, for which BIM, as an important digital tool for the construction industry, plays an important role in managing the carbon emissions of sustainable buildings. However, from a technical point of view, there are still challenges, such as interoperability, in integrating BIM with LCA for an energy-efficiency simulation that results in carbon reduction. Furthermore, BIM, as a kind of digital twin in the construction industry, has wide scope for development in carbon-emission applications. Sustainable buildings enhanced by emerging technologies, such as VR and AR, could be the key for the construction industry to open the gate towards a metaverse in the building and construction environment. Within the context of the pandemic, the current trend in metaverse development paves the way for BIM approaches in the field of carbon emissions for sustainable buildings. As such, the findings of this analytical research provide pointers to inform designers, builders, and policymakers in the development of BIM-driven carbon-emission-reduction strategies. This is timely considering the current global political carbon-neutrality target and the target of reducing carbon emissions by half by 2030, as well as the technical development of the emerging building metaverse. Although the current studies in the field have been developing rapidly, the studies mainly focus on the design stage. As such, the carbon-emission reduction through BIM in the later stages of the building lifecycle could be explored in future research. The bibliometric analysis in this paper was conducted based on the WoS-core-collection database. Future research could be extended to other databases, such as Scopus, to provide a collective view of the potential and value of BIM and emerging technologies, and especially the future building metaverse, in reducing carbon emissions in buildings. Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found at https://login.webofknowledge.com/ (accessed on 2 January 2022).
13,237
2022-10-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
PT-symmetric interference transistor

We present a model of the molecular transistor, the operation of which is based on the interplay between two physical mechanisms, peculiar to open quantum systems, that act in concert: PT-symmetry breaking, corresponding to coalescence of resonances at the exceptional point of the molecule connected to the leads, and the Fano-Feshbach antiresonance. This switching mechanism can be realised in particular in a special class of molecules with degenerate energy levels, e.g. diradicals, which possess mirror symmetry. At zero gate voltage an infinitesimally small interaction of the molecule with the leads breaks the PT-symmetry of the system, which, however, can be restored by application of a gate voltage preserving the mirror symmetry. The PT-symmetry-broken state at zero gate voltage with minimal transmission corresponds to the "off" state, while the PT-symmetric state at non-zero gate voltage with maximum transmission corresponds to the "on" state. At zero gate voltage the energy of the antiresonance coincides with the exceptional point. We construct a model of an all-electrical molecular switch based on such transistors acting as a conventional CMOS inverter and show that essentially lower power consumption and switching energy can be achieved, compared to the CMOS analogues.

Implementation of molecules in integrated circuits (IC) offers great advantages due to extreme miniaturization and perfect reproducibility [1][2][3]. But despite long-term and intensive efforts since its origin in the early 70s 4, molecular electronics (ME) has not yet presented any experimentally realized candidate to replace the silicon transistor as a "wheel-horse" of the modern IC industry. High expectations were held, and are still in place, for graphene 5 and post-graphene organic Dirac materials 6. During the past period ME mainly concentrated on attempts to reproduce typical elements of silicon electronics [7][8][9][10][11][12]. In the case of graphene and related materials this approach has been based on efforts to develop band-gap opening methods 13, which, however, have not yet resulted in a new IC technology either.
On the other hand, due to the complex geometry and topology of molecular structures, one could expect that devices with working principles different from the ordinary field-effect and bipolar transistors could be designed. The energy spectrum of a molecule manifests itself in transport phenomena by means of resonances. If the molecule possesses different carrier paths, destructive interference can result in the formation of an asymmetric Fano-Feshbach resonance 14, which combines a resonance (transmission peak) and an antiresonance (transmission dip) nearby. Existence of the interference effect in transport through molecules, which is intensively discussed in the literature [15][16][17][18][19], is now well established experimentally [20][21][22][23]. In ref. 24 a quantum interference transistor (QIT) was described with the "off" state corresponding to perfect interference destruction of both transmission and current. One of the main challenges in CMOS electronics is reduction of the operating voltage, which does not follow Moore's law (ITRS 2.0). In ref. 25 it was argued that the interference control of the carrier transport over different paths can substantially reduce the operating gate voltage, because the suppression of the transmission function can be achieved at a lower gate voltage compared with the one required to move the transmission-function peak away from the distribution-function window. However, antiresonances, which arise from destructive quantum interference (DQI), are determined by the topology of the structure that includes different interfering carrier paths. Hence, variation of the on-site potential and/or intersite hopping can only shift the antiresonance in energy rather than destroy it, because the interfering paths are retained under such variations. The voltage required to shift an antiresonance away from the operating energy region is determined by the carrier distribution in the leads on a scale no less than kT and, hence, is not small. Therefore, the proposed control of the transmission resonance by low voltages should rely on a mechanism more complex than multipath interference alone. For a logical gate to operate, its constituting elements (transistors) should undergo transitions between the "off" and the "on" states, with the latter state being even more important than the former one as it provides switching of the successive gate. The "on"/"off" ratio for the transistor conductance should be as high as possible to provide reliable gate operation. However, this requirement is scarcely achievable in quantum interference transistors operating near the antiresonance because of the low transmission away from the antiresonance 26. Hence, a quantum transistor is required which possesses a combination of an antiresonance and a nearby resonance that is responsible for high conductance in the "on" state. In this paper we show that, indeed, the transmission probability of a special class of molecules can be controlled in a wide range by applying small gate voltages. This phenomenon can be easily understood through the deep connection between PT-symmetry and scattering problems 27,28. Coupling of a spatially symmetric molecule to electrodes results in PT-symmetry breaking, which is accompanied by coalescence of resonances 29 at the exceptional point of the open quantum system comprised of the molecule and the electrodes 28,30 and by a decrease of the transmission.
This effect is enhanced by the shift of the Fano-Feshbach antiresonance to the exceptional point (EP). The mentioned special class of molecules consists of systems with degenerate energy levels that possess mirror symmetry, e.g. (but not restricted to) diradicals [31][32][33].

Results

Phenomenological model. Consider an open quantum system comprised of a molecule and contacts that possesses an EP in the sense of ref. 28. At this EP two unity resonances coalesce and cancel each other, making the transition to the "off" state very sharp. An open quantum system should be spatially symmetric in order to possess an EP. To take advantage of both DQI and coalescence of resonances at the EP one should consider a system with two resonances and one antiresonance. The transmission coefficient of an arbitrary two-terminal quantum system can be written in the compact form of Eq. (1) 28,30. Here P(ω) and Q(ω) are some functions of the energy ω. Real zeroes of the function P(ω) correspond to transmission nodes (antiresonances), while real zeroes of the function Q(ω) determine the exact positions of perfect (unity) resonances on the energy axis 28,30. In the vicinity of the resonances and the antiresonance, P(ω) and Q(ω) can be expressed as in Eq. (2) 28, where ε_0 and ε_{±1} determine the exact positions of the transmission antiresonance and resonances, correspondingly, Γ is the imaginary part of the contact self-energy describing the interaction of the molecule with the leads 34,35, and B is some positive dimensionless coefficient. The factors D_P and D_Q take into account the contributions from the remote energy levels and can be estimated as D_P ~ D_Q ~ Δ^(N−2), where Δ is the average distance between the remote energy levels and N is the dimension of the molecular orbital Hilbert space. Phenomenologically, the functions P(ω) and Q(ω) are defined up to an arbitrary common factor; hence, we can rescale B to absorb D_P and D_Q and replace the three phenomenological parameters B, D_P and D_Q by a single generalized parameter B, which is used below. Consider a model that possesses degenerate antiresonance and resonance levels in the symmetric phase, which can be distorted by an external perturbation described by the parameter δ. The energies of the antiresonance ε_0 and of the resonances ε_{±1} can be expressed as in Eq. (3), where x_{0,1}, y and z are some dimensionless parameters depending on the structure of a particular system. The terms in Eq. (3) that are linear in δ describe the shift of the (anti)resonance positions due to the external perturbation, and the non-analytical term (square root) in the expression for ε_{±1} describes the coalescence-of-resonances phenomenon. The energy of the degenerate state (at δ = 0) is set to the energy origin. One should bear in mind that degenerate levels occur only in multiply-connected structures (in simply-connected, e.g. linear, structures all energy levels are non-degenerate). In such systems antiresonances naturally appear as well (P(ω) = 0). If the external perturbation δ is high enough (δ > δ_EP = zΓ/y), then the transmission has two unity peaks at ω = ε_{±1}, which coalesce at δ = δ_EP. This is the EP, which is associated with PT-symmetry breaking. For small δ or, equivalently, for strong enough coupling with the leads (δ < δ_EP), the roots of Q(ω) in Eq. (2) are complex. Therefore |Q(ω)| is non-zero for any real energy ω and the transparency is always less than unity. The poorest transmission profile (i.e. the "off" state) corresponds to δ = 0. From Eqs (1-3) one can see that in this case there are two peaks at ω = ±zΓ, with heights given by Eq. (4).

Microscopic model.
Microscopic model. The above-described phenomenological properties of the transmission coefficient can be realized within the following microscopic model. There are two degenerate states |1⟩ and |2⟩ with the same energy ε. This system is attached symmetrically to two leads (left and right) in such a way that the mirror symmetry operation σ_LR, which maps the left lead into the right one and vice versa, is also an element of the symmetry group G of the bare Hamiltonian of the system, i.e. σ_LR ∈ G. Due to the degeneracy, there must be an irreducible representation of the symmetry group G acting on the subspace ℋ₁₂ = Span(|1⟩, |2⟩) of the total Hilbert space of states of the isolated system. Let us choose the basis in ℋ₁₂ as the basis of a symmetric |s⟩ and an anti-symmetric |a⟩ state, which are the eigenstates of the reflection operator σ_LR: σ_LR|s⟩ = |s⟩ and σ_LR|a⟩ = −|a⟩. These states conserve their symmetry upon introduction of a perturbation that is invariant under σ_LR. The tunnelling matrix elements between the leads and the symmetric state are of the same sign, whereas the tunnelling matrix elements between the leads and the anti-symmetric state are of opposite signs (see Fig. 1a). Therefore, in this basis the couplings to the leads can be written in terms of an overall coupling strength Γ and relative weights γ_s and γ_a of the symmetric and anti-symmetric channels (Eq. (5)). Application of the gate voltage introduces an external perturbation that lowers the symmetry of the system, resulting in removal of the degeneracy. Suppose that the external perturbation lowers the symmetry of the system from the group G to some non-trivial subgroup H ⊂ G, such that σ_LR ∈ H. This perturbation introduces a detuning of the energies of the symmetric and anti-symmetric states: ε_{s,a}(δ) = ε + k_{s,a} δ, with δ > 0 and dimensionless parameters −1 ≤ k_{s,a} ≤ 1 accounting for the different influence of the perturbation on the energies of the symmetric and anti-symmetric states (see Fig. 1a). The parameters k_{s,a} can be estimated, for instance, from perturbation theory (see Supplementary Materials for details); note that k_a ≠ k_s, as the considered perturbation removes the degeneracy. Assume that the couplings (5) are not affected by this perturbation. In fact, Γ and γ_{s,a} are smooth functions of the perturbation strength δ; however, taking this into account does not change the qualitative picture described below. The transport through the states |s⟩ and |a⟩ (neglecting the contribution of remote states to the transport process) within the wide-band limit 35 can be described by the transmission coefficient in the form (1), with P and Q functions determined by the bare Hamiltonian Ĥ₀ of the system (without the leads taken into account) and by the non-Hermitian auxiliary Hamiltonian Ĥ_aux, whose real eigenvalues correspond to the energies of perfect transmission 28 (see Methods section); Î denotes the 2 × 2 identity matrix. The operator P̂ denotes the mirror reflection and T̂ is the time-reversal operator (complex conjugation). Indeed, one can easily check that the operator P̂T̂ Ĥ_aux P̂T̂ acts on any vector v ∈ ℂ² in the same way as the operator Ĥ_aux. Thus, Ĥ_aux is PT-symmetric 36. Therefore, it can possess real eigenvalues, which correspond to perfect transmission peaks, and for certain parameters these can coalesce; the PT-symmetry of the Hamiltonian Ĥ_aux is then broken, leading to the coalescence of the perfect resonances into one peak with amplitude lower than 1. Moreover, such resonance coalescence is accompanied by symmetry breaking of the electron occupation at the energy corresponding to the transmission peak (see Methods for details).
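As an illustrative aside, the two-level model just described can be evaluated numerically straight from the standard NEGF transmission formula. The sketch below is not the paper's Eq. (9); in particular the normalization of the coupling amplitudes in terms of γ_s, γ_a and Γ is our assumption (Eq. (5) is not reproduced here), so exact prefactors such as the EP position may differ. The qualitative picture described in the text is reproduced: an antiresonance pinned to the degenerate level, coalescence of the two unity peaks as δ decreases, and complete opaqueness at δ = 0 when γ_s = γ_a.

```python
import numpy as np

# Minimal sketch of the two-level (|s>, |a>) PT-symmetric interference switch.
# Assumed coupling convention (not the paper's Eq. (5) verbatim):
#   u_L = (sqrt(gs*Gamma), sqrt(ga*Gamma)),  u_R = (sqrt(gs*Gamma), -sqrt(ga*Gamma))
def transmission(omega, eps=0.0, delta=0.0, ks=1.0, ka=-1.0,
                 gs=1.0, ga=0.5, Gamma=1.0):
    """NEGF transmission T = Tr[Gamma_L G^r Gamma_R G^a] for the 2x2 model."""
    H = np.diag([eps + ks * delta, eps + ka * delta])      # detuned |s>, |a> levels
    uL = np.array([np.sqrt(gs * Gamma), np.sqrt(ga * Gamma)])
    uR = np.array([np.sqrt(gs * Gamma), -np.sqrt(ga * Gamma)])
    GammaL, GammaR = np.outer(uL, uL), np.outer(uR, uR)
    Gr = np.linalg.inv(omega * np.eye(2) - H + 0.5j * (GammaL + GammaR))
    return float(np.real(np.trace(GammaL @ Gr @ GammaR @ Gr.conj().T)))

omegas = np.linspace(-4, 4, 4001)            # energies in units of Gamma
for delta in (3.0, 1.5, 0.5, 0.0):           # sweep from the "on" toward the "off" state
    T = np.array([transmission(w, delta=delta) for w in omegas])
    print(f"delta = {delta:3.1f}:  peak transmission = {T.max():.3f}")

# With equal channel weights (gamma_s = gamma_a) the "off" state is completely
# opaque within this two-level model, as discussed in the text.
T_off = max(transmission(w, delta=0.0, gs=1.0, ga=1.0) for w in omegas)
print(f"gamma_s = gamma_a, delta = 0:  peak transmission = {T_off:.1e}")
```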
Using Eqs (5-8), the transmission coefficient can be written in the form of Eq. (9). From this formula one can see that for a sufficiently large detuning δ there are two unity peaks of the transmission. Decreasing the detuning, one reaches the coalescence of resonances at δ = 4γ_s γ_a Γ |k_a − k_s|⁻¹, which corresponds to the EP of Ĥ_aux. Further decreasing δ results in a further lowering of the transmission peak. There is also a zero-valued antiresonance (a zero of P) located between the resonances, at an energy determined by ε, δ, γ_{s,a} and k_{s,a}, which additionally lowers the transmission with decreasing δ. One can see that, moving the energy origin to ε, Eq. (9) takes the phenomenological form described by Eqs (1-3), with the phenomenological parameters expressed through the microscopic ones (Eq. (10)). Thus, according to Eq. (4), the poorest transmission peaks (at δ = 0) follow as Eq. (11). From (10) we see that there is a limiting case γ_s/γ_a → 1 that results in B → 0 and x₀ → ∞, while their product remains finite. In this case complete opaqueness, i.e. T ≡ 0, can be obtained for δ = 0. In practice, however, the transmission never vanishes because of transport through remote energy levels, which are not taken into account in this model. The evolution of the transmission coefficient profile (9) with variation of δ for different ratios of the parameters γ_s and γ_a is illustrated in Fig. 1c-e. It is important to note that for a single-level system (i.e. γ_a = 0 or γ_s = 0), where PT-symmetry breaking is absent, T_peak equals unity identically.
Quantum interference inverters based on PT-symmetric interference transistors. Consider a quantum analogue of the CMOS inverter consisting of two quantum switches connected between one common output lead and two reference voltage sources with voltages V_ref1 and V_ref2, respectively. The input signal V_in is applied to the common gate of these switches, which is galvanically isolated from the system. Figure 2 depicts two examples of such quantum interference inverters. For a high-resistance load we can implicitly evaluate the voltage transfer characteristic V_out(V_in) of this inverter and estimate its maximum negative gain, which is achieved at a particular input voltage. Here ΔV = V_ref2 − V_ref1 is fixed by the external supply voltage. In the saturation regime (eΔV >> kT) the maximum value of G grows exponentially with ΔV due to the factor sinh(eΔV/2kT). For eΔV << kT (the ohmic regime) it becomes independent of the temperature, and we can estimate the minimum difference of the reference voltages (supply voltage) ΔV_crit needed to make the inverter operate, i.e. the one which provides G_max = 1. In this regime the gain G_max reaches at most 2(γ_s/γ_a)² for γ_s < γ_a, or 2(γ_a/γ_s)² for γ_s > γ_a. Hence, the steepest negative gain of the voltage transfer characteristic is limited to −2. Nevertheless, this is sufficient for operation of the inverter.
Model examples of real molecular structures. Possible candidates for a physical realization of the proposed quantum switch are molecules with degenerate states, e.g. diradicals 31, which are already known to provide transmission antiresonances 32. Moreover, linkers can stabilize the diradical character of such molecules 37. Hence, we can expect that connecting certain contacts to them will not destroy the degeneracy of the states, but rather stabilize it. Diradicals can be classified into two types, disjoint and non-disjoint, depending on how their non-bonding orbitals intersect (i.e. whether they have common atomic orbitals or not). It was shown that a simple starring procedure can distinguish between these two types 38,39.
Disjoint diradicals seem to be the most appropriate candidates for our quantum switch. Indeed, attaching contacts to atoms belonging to different degenerate orbitals means that the symmetric and anti-symmetric combinations of these orbitals are connected to the leads with equivalent coupling strengths, i.e. the parameters γ_s and γ_a [introduced in Eq. (5)] can in this case be made equal (at least within the nearest-neighbour tight-binding approximation). As was highlighted above, according to Eq. (11) this leads to zero conductance in the "off" state. The operation principle of the quantum interference inverter requires that one switch be in the "on" state and the other in the "off" state. There are two possible ways of dealing with this task. First, one can choose two different quantum systems (molecules) to make the two quantum switches, which is similar to conventional CMOS, where there are two different types of transistors: n-channel MOS and p-channel MOS. This approach requires a technology for the synthesis of two different molecules with strictly given parameters. Alternatively, we can use the same quantum system (molecule) to create both switches, but influence their spectra in different ways by additional gates. This method needs only one type of molecule to be synthesised, but the introduction of additional gates results in some complication of the conventional technological process. In the following subsections we consider schematic examples of quantum inverters with the same molecule in both switches. Different energies of the on-site atomic states are assumed to be achieved by a certain configuration of additional gates (e.g. backgates).
Model of a non-disjoint diradical. The first example structure we consider is a model of the trimethylenemethane molecule, which is a non-disjoint diradical 32. Schematically, the quantum inverter structure composed of two such four-atom (carbon skeleton) molecules is shown in Fig. 2a. The presented schematic model corresponds to a tight-binding Hückel structure of one of the resonance configurations of trimethylenemethane, which is stabilized because it coincides in symmetry with the lead couplings. Hence, the hopping integral τ is assumed to be greater than τ₁, as it corresponds to a higher bond order. The transmission coefficient and the phenomenological and microscopic parameters of such switches are presented in the Supplementary Material. We apply the reference voltages as follows: V_ref1 = 0 and V_ref2 = V₀, the supply voltage. The range of the input voltage is thus 0 ≤ V_in ≤ V₀. The applied input potential changes only some on-site energies of the system (those in the shaded region in Fig. 2a); this is taken into account in the form of Eq. (14). The electrostatic influence of the reference and output leads can also be taken into account in a way similar to (14). It can be shown that this influence only distorts the voltage transfer characteristic, and taking it into account is not necessary to illustrate the operation principles of the quantum interference inverters. Consider the following example: ε = 0 eV, with the hopping integral τ = 3 eV typical for conjugated hydrocarbons 40. Figure 3a shows the voltage transfer characteristics of this inverter for V₀ = 5 mV and Fig. 3b for V₀ = 10 mV (dot-dashed lines in both cases). In the latter case the voltage transfer characteristic is obviously better because of the higher negative gain achieved.
Model of a disjoint diradical. The other example we consider is a model of the divinylcyclobutadiene molecule, which is a disjoint diradical 32.
Schematically, the quantum inverter structure composed of two such molecules is shown in Fig. 2b. The presented model corresponds to a simple tight-binding Hückel structure of the divinylcyclobutadiene molecule with all bonds treated as equal, providing equal tunnelling matrix elements τ between the p-orbitals of the carbon atoms. The transmission coefficient and the phenomenological and microscopic parameters of such switches are presented in the Supplementary Material. We assume that the applied input voltage changes only the on-site energies in the shaded region in Fig. 2b, which is taken into account similarly to Eq. (14). Figure 3a shows the voltage transfer characteristics of this inverter for V₀ = 5 mV and Fig. 3b for V₀ = 10 mV (solid lines in both cases) for the following parameters: ε = 0 eV, α = 0.5, Γ = 1 meV, and a typical value of the hopping integral τ = 3 eV for conjugated hydrocarbons 40. For the higher supply voltage, the transfer characteristic of the inverter based on the disjoint-diradical switches (solid lines in Fig. 3b) shows a higher maximum absolute value of the gain than the inverter based on the non-disjoint-diradical switches (dot-dashed lines in Fig. 3b). This is expected, as disjoint diradicals provide γ_s/γ_a = 1 and, thus, the "off"-state current of such a switch becomes smaller (it differs from zero only due to the "background" transmission arising from remote resonance peaks). However, for the smaller supply voltage (Fig. 3a), this "background" component may become large enough to cancel out the key benefit of the disjoint diradical (γ_s/γ_a = 1). Moreover, for disjoint diradicals the degeneracy is removed only in the second order of perturbation theory (i.e. k_{a,s} become functions of δ). Hence, for lower supply voltages, the sensitivity to the gate voltage decreases compared to non-disjoint diradicals. In this case the transfer characteristic of a non-disjoint-diradical-based quantum inverter turns out to have a slightly higher gain (Fig. 3a). From Fig. 3 we see that, for the particular parameters chosen, only divinylcyclobutadiene can provide a working inverter at room temperature with a 10 mV supply voltage.
Discussion
We have shown that there is a fundamental difference between resonant tunnelling through a non-degenerate and through a doubly degenerate state. It arises from the formation of an antiresonance (because of destructive interference of the electron flows through the two degenerate states) and from the coalescence of resonances, which is well described by the concept of PT-symmetry breaking 28. Thus, one can utilize this phenomenon and use the spectrum of a degenerate quantum system to construct a quantum interference switch. In comparison with numerous other proposals of molecular interference transistors, we should emphasize that our solution has important advantages: it is an all-electrical device (i.e. the electric current is controlled by an applied voltage), it can possess an extremely low operating voltage (even at room temperature), and our PT-symmetry-based model provides straightforward design rules for constructing such a transistor, which we demonstrated with the examples of specific disjoint and non-disjoint diradicals. Thus, this might be a way to dramatically lower the supply voltage, which currently cannot be made lower than 0.5-1 V 41 for conventional silicon electronic devices, even for promising tunnel field-effect transistors (FETs) 42.
Advanced technology of FETs with carbon nanotube (CNT) channels 43 also provides a variety of advantages over bulk Si electronics 10,44, but a sufficient reduction of the supply voltage is not among them. A separate approach is based on the development of non-electronic logic gates based on, e.g., exciton 45,46 or even heat flow 47 control. Typically such devices have input and output signals of different nature and are designed for optoelectronic 45 or optomechanical 46 applications, rather than for large-scale integration. The performance of real devices is limited by noise, which is especially significant for low supply voltages. Noise in quantum systems cannot be separated into thermal and shot contributions; it is always a superposition of both and can be described by a closed expression 48. Nevertheless, it is illustrative to discuss these contributions independently. The shot-noise spectral power is proportional to the current through the system and thus becomes negligible as the voltages and, correspondingly, the currents are scaled down. On the other hand, at finite (room) temperature thermal noise can influence the transport dramatically. Thus, thermal noise is one of the limiting factors for lowering the supply voltage 41. The mean-square voltage uncertainty is ⟨ΔV²⟩ = kT/C, where C is the capacitance of the load, which is typically the gate capacitance of the next switch. Therefore, using several molecules in parallel in a single switch and, consequently, a bigger gate contact will increase the capacitance C and lower the noise. On the other hand, the greater C is, the lower the switching rate ν ~ τ⁻¹ = (RC)⁻¹ that can be achieved. Here the resistance R can be estimated from the current in the "on" state (see Supplementary Material). Assuming the gate is about 10 × 10 nm, lying on a 2 nm thick dielectric with k ≈ 5, we can estimate its capacitance to be about 2 aF. This means that, without noise taken into account, we can expect switching frequencies of the order of 500 GHz even for a low-conductance "on" state with R ≈ 1 MΩ (Γ ≈ 0.04 kT at room temperature). However, noise dramatically lowers the possible operating frequency. Indeed, fixing the minimum operating frequency ν_min, one can estimate the minimum allowed supply voltage, which we take to be 8 times the noise voltage uncertainty to provide an error probability of about 10⁻¹⁵ 41. Finally, we arrive at a restriction relating the minimum supply voltage to ν_min (expressed in GHz). Thus, sub-kT/e supply voltages seem to be possible up to ν ≈ 7 GHz. A more detailed analysis of the impact of noise on the operation of quantum interference gates will be presented elsewhere, as well as a consideration of technological parameter variations resulting in asymmetry of the inverter structure. Our model is based on Hückel theory and does not account for electron repulsion. In ref. 17 it has been shown that DQI predicted by the Hückel model survives in alternant hydrocarbons even with Coulomb effects taken into account. The diradicals we consider are alternant; hence, we expect that DQI will persist in a more realistic model as well. On the other hand, the role of electron repulsion (Coulomb blockade), which is detrimental in the case of weak coupling of a molecule to the contacts, is less significant in the case of strong coupling. We have performed tight-binding simulations with increased values of the electrode-molecule coupling, Γ = 0.1 eV and Γ = 0.2 eV (see Supplementary).
We have shown that such values of Γ still retain the switching properties of the interference transistor but make the operating voltage larger. Thus, the supply voltage cannot be made arbitrarily low, as an idealized model would suggest; its lower bound should be determined from the optimal value of Γ, which, on the one hand, prevents Coulomb blockade and, on the other hand, provides sufficiently sharp transmission peaks and hence a sufficient contrast of the switching current. A study of the interplay of the electrode-molecule coupling strength and electron repulsion in DQI-sensitive molecular transport based on ab initio simulations would certainly be of great interest. In our model DQI acts in concert with PT-symmetry breaking at the exceptional point, which is accompanied by coalescence of resonances and a sharp decrease of the electron transmission. PT-symmetry breaking in a quantum conductor bears a close resemblance to equilibrium phase transitions in condensed matter 28,30. Hence, we expect that the model proposed in our manuscript should retain its efficiency with Coulomb interactions included, because it relies on the symmetry properties of the system. The general impact of Coulomb interactions on the behaviour of a quantum conductor near the PT-symmetry-breaking transition at the exceptional point is, however, a new and interesting problem that deserves a special study. It is also important to note that we consider an idealized model of the quantum transport process in this paper. Nevertheless, a few steps toward a realistic description can be made (see Supplementary): going beyond the wide-band approximation and taking into account the electrostatic influence of the reference voltage sources. It turns out that our results remain qualitatively valid in these more realistic situations as well. However, in order to verify them, one should perform a full numerical calculation of the electronic structure of the considered system, which we plan to do in further publications. At the moment the practical realization of large-scale integration of quantum molecular gates is beyond the reach of modern technology. However, continuous progress in self-assembly methods and, especially, the development of atomic-precision lithography could make the implementation of molecular gates as building blocks of ICs almost inevitable. We hope that the idea of using degenerate energy levels (e.g. diradicals, but not only) to create molecular switches could open a new field of research.
Microscopic model: auxiliary Hamiltonian and its exceptional point. The conductance of a quantum conductor is defined by its tunnelling transmission coefficient 49, which can be calculated by the standard formula 50 derived from the nonequilibrium Green's function formalism 34,51,52,

T(ω) = Tr[Γ_L Ĝ^r(ω) Γ_R Ĝ^a(ω)],

where Ĝ^{r/a}(ω) is the retarded/advanced Green function and Γ_{L,R} = u_{L,R} u†_{L,R} 53 is the coupling matrix (the imaginary part of the corresponding contact self-energy) to the left or to the right lead. Here u_{L,R} are vectors describing the couplings of the states of the isolated system to the left/right lead. The traditional approach within the wide-band limit (neglecting the real parts of the contact self-energies) leads to the expression (17) for the transmission in terms of the Feshbach effective Hamiltonian Ĥ_eff = Ĥ₀ − (i/2)(Γ_L + Γ_R) 28. Following the general formalism of ref. 28, one can show that the transmission (17) can be written in the form (1). For our microscopic model this can easily be checked using Eqs (5) and (7); indeed, the identity of Eq. (18) holds true.
From Eq. (18) we see that, within the wide-band limit, the transmission of our system can be written in the form (1) with the corresponding P and Q functions. The real eigenvalues of the auxiliary Hamiltonian define the exact locations of the perfect transmission resonances and, being PT-symmetric, it can experience PT-symmetry breaking, which results in resonance coalescence. This is accompanied by a symmetry breaking of the electron occupation at the transmission maximum. The matrix of occupations per unit energy, n, can be calculated within the NEGF formalism 34 (Eq. (20)). Here n_ss, n_sa, n_as, and n_aa are the elements of the occupation matrix in the basis of the symmetric and anti-symmetric states. If the sites i and j are mapped into each other by the mirror reflection σ_LR (i.e. j = σ_LR(i)), then the corresponding components of the symmetric and anti-symmetric states must satisfy s_i = s_j and a_i = −a_j. Therefore, the difference between the occupations of these sites is expressed through the elements of the occupation matrix and the products s_i a_i, with |Q| given by Eq. (18). Thus, it is obvious that at perfect transmission resonances (real zeroes of Q) the electron occupation is distributed symmetrically (with respect to the σ_LR operation), whereas for energies corresponding to complex roots of Q, and hence to transmission lower than 1, the distribution of electrons is always asymmetric. Therefore, PT-symmetry breaking at the EP (coalescence of two perfect resonances into one non-perfect resonance) manifests itself through a symmetry breaking of the electron distribution, as was shown for linear systems in ref. 30.
Quantum interference inverter transfer characteristic. For a given general structure of the inverter one can calculate all terminal currents if certain voltages are applied. To do so, the transmission coefficients between the leads, T_1out, T_2out and T_12, should be determined first. The reference voltages V_ref1 and V_ref2 (assume V_ref1 < V_ref2) are supplied by external ideal voltage sources, i.e. we treat them as constants. As the input lead is galvanically isolated from the system, the voltage V_in influences only the transmission coefficients. For high-resistance loads the output voltage V_out is derived from the condition I_out = 0, where I_out is the total current through the output lead, which is composed of the currents from the first and the second reference voltage leads (with appropriate signs). Let us consider an inverter composed of two identical quantum switches (PT-symmetric interference transistors). Assuming that the resonance width is sufficiently small, we can approximate the condition I_out = 0 by Eq. (23). Here subscripts 1 and 2 correspond to the first and to the second quantum switch. The energies ε_{1,2} are the energies of the degenerate states in the first and in the second system, respectively. Assume that they are adjusted to ε_{1,2} = eV_{ref1,2}, i.e. to the biased Fermi level of each reference lead. The applied input voltage influences the parameters δ_{1,2} of the switches. A model dependence of δ_{1,2} on the input voltage V_in (Eq. (24)) provides a symmetric transition from the "on" mode to the "off" mode of each quantum switch as V_in varies in the interval [V_ref1, V_ref2], where 0 < α < 1 is the electrostatic lever arm of the input lead (common gate). One can substitute Eq. (24) into Eq. (23) and derive the implicit dependence V_out = V_out(V_in), which is then used to obtain an expression for the gain (12).
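The procedure just described can be sketched numerically: build the transmission of each switch, compute the Landauer currents flowing from the two reference leads into the output lead, and solve I_out = 0 for V_out by bisection. Because Eqs (23) and (24) are not reproduced above, the specific forms used below, a two-level transmission for each switch and a linear lever-arm dependence of δ_{1,2} on V_in, are illustrative assumptions rather than the paper's exact expressions; they merely demonstrate that the construction yields an inverting V_out(V_in).

```python
import numpy as np

kT = 0.0259   # eV at room temperature
e = 1.0       # energies in eV per elementary charge

def fermi(E, mu):
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def T_switch(E, eps, delta, gs=1.0, ga=1.0, Gamma=1e-3, ks=1.0, ka=-1.0):
    """Two-level |s>/|a> transmission (same assumed convention as the earlier sketch)."""
    amp = (gs * Gamma / (E - (eps + ks * delta) + 1j * gs * Gamma)
           - ga * Gamma / (E - (eps + ka * delta) + 1j * ga * Gamma))
    return np.abs(amp) ** 2

def I_out(V_out, V_in, V1=0.0, V2=0.010, alpha=0.5):
    """Net Landauer current into the output lead from both reference leads (arb. units)."""
    E = np.linspace(-0.05, 0.06, 20001)
    dE = E[1] - E[0]
    # Assumed lever-arm model: switch 1 turns off as V_in -> V1, switch 2 as V_in -> V2.
    d1 = alpha * e * (V_in - V1)
    d2 = alpha * e * (V2 - V_in)
    T1 = T_switch(E, eps=e * V1, delta=d1)
    T2 = T_switch(E, eps=e * V2, delta=d2)
    i1 = np.sum(T1 * (fermi(E, e * V1) - fermi(E, e * V_out))) * dE
    i2 = np.sum(T2 * (fermi(E, e * V2) - fermi(E, e * V_out))) * dE
    return i1 + i2

def V_out_of(V_in, V1=0.0, V2=0.010):
    lo, hi = V1, V2                      # the output stays between the reference voltages
    for _ in range(50):                  # bisection on the monotonic I_out(V_out)
        mid = 0.5 * (lo + hi)
        if I_out(mid, V_in) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for V_in in np.linspace(0.0, 0.010, 6):
    print(f"V_in = {V_in*1e3:4.1f} mV  ->  V_out = {V_out_of(V_in)*1e3:5.2f} mV")
```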
7,799.4
2018-06-18T00:00:00.000
[ "Physics", "Engineering" ]
Chromosome-level genome assembly and annotation of the Yunling cattle with PacBio and Hi-C sequencing data Yunling cattle is a new breed of beef cattle bred in Yunnan Province, China. It is bred by crossing the Brahman, the Murray Grey and the Yunnan Yellow cattle. Yunling cattle can adapt to the tropical and subtropical climate environment, has good reproductive ability and growth speed under high temperature and high humidity conditions, has strong resistance to internal and external parasites, and has good beef performance. In this study, we generated a high-quality chromosome-level genome assembly of a male Yunling cattle using a combination of short-read sequencing, PacBio HiFi sequencing and Hi-C scaffolding technologies. The genome assembly (3.09 Gb) is anchored to 31 chromosomes (29 autosomes plus one X and one Y), with a contig N50 of 35.97 Mb and a scaffold N50 of 112.01 Mb. It contains 1.62 Gb of repetitive sequences and 20,660 protein-coding genes. This first construction of the Yunling cattle genome provides a valuable genetic resource that will facilitate further study of the genetic diversity of bovine species and accelerate Yunling cattle breeding efforts.
Background & Summary
Yunling cattle, a new hybrid breed of beef cattle, was bred by the Academy of Grassland and Animal Science in Yunnan, China. As the fourth beef cattle breed with fully independent intellectual property rights bred by Chinese researchers, Yunling cattle has attracted more and more attention. The breed represents not only the first beef cattle breed bred by three-way hybridization in China, but also the first new beef cattle breed adapted to the tropical and subtropical areas of southern China 1. Its final genetic composition is 50% Brahman cattle, 25% Murray Grey, and 25% Yunnan Yellow cattle. With their enhanced growth and high meat production rate from Murray Grey, good reproductive capacity from Yunnan Yellow cattle, and adaptation to high temperature and high humidity conditions from Brahman, Yunling cattle have become a crucial source of beef production in China 2. Some studies have indicated that Yunling cattle have good fattening performance, notable physical proportions, increased meat yield, favorable carcass traits, and a desirable fatty acid composition in their meat 3. However, the molecular mechanisms responsible for these phenotypic variations have not yet been fully elucidated 4. Therefore, more research is needed to understand the basis of the development of these good traits in Yunling cattle.
In this paper, we constructed a chromosome-level genome of Yunling cattle by combining short reads, PacBio HiFi (high fidelity) reads, and Hi-C (high-throughput chromosome conformation capture) sequencing data. We extracted genomic DNA from heart tissue, constructed different libraries, and sequenced them using appropriate platforms. After quality filtering and trimming of the raw data, the Hifiasm 5 software was employed to assemble the genome using clean HiFi reads. To further improve the accuracy of the assembly, the assembly was refined with the Nextpolish 6 software using short reads with default parameters. Subsequently, we applied the PacBio HiFi reads and Hi-C data to generate a high-quality chromosome-level genome assembly of Yunling cattle. The final genome assembly (3.09 Gb) was anchored to 31 chromosomes, containing 1,119 contigs (N50 = 35.97 Mb) and 826 scaffolds (N50 = 112.01 Mb). A total of 1.62 Gb of repeat sequences were identified, representing 52.26% of the total genome, of which 99.80% were classified as known repeat elements. In addition, structural annotation of the genome yielded 20,660 genes, of which 92.8% (19,172) could be functionally annotated with at least one of the five protein databases (NR, SwissProt, KOG, GO and KEGG). The Yunling cattle genome assembled in this study provides a valuable genetic resource for future efforts to study Yunling cattle and for further comparative analysis of genome biology among bovine species to promote breeding research.
Methods
Sample collection. A four-year-old male Yunling cattle from the Chuxiong JingDa Farm in Chuxiong City, Yunnan Province, was used for genome sequencing and assembly. Pectoralis profundus muscle, cervical part of the trapezius muscle, latissimus dorsi muscle, internal abdominal oblique muscle, gluteobiceps muscle, lung, spleen, liver, and heart tissues were collected and rapidly frozen in liquid nitrogen. Heart tissue was used for DNA sequencing for genome assembly, while all tissues were used for transcriptome sequencing.
Library construction and sequencing. Genomic DNA from heart tissue was extracted using the standard phenol-chloroform extraction method for DNA sequencing library construction. The integrity of the genomic DNA molecules was checked using agarose gel electrophoresis.
Two types of libraries were constructed, and the BGISEQ DNBSEQ-T7 platform and the PacBio Sequel II platform (CCS mode) were applied for genomic sequencing to generate short and HiFi genomic reads, respectively. For the BGISEQ DNBSEQ-T7 platform (Shenzhen, Guangdong, China), a short-read paired-end sequencing library with an insert size of 350 bp was prepared according to the protocol provided by the manufacturer and sequenced using the BGISEQ DNBSEQ-T7 platform at GrandOmics Biosciences Co., Ltd. (Wuhan, China). This resulted in 161.89 Gb of accurate short reads (approximately 64x coverage of the estimated genome size, Table 1). These reads were further cleaned using the fastp 7 utility. Adapter sequences and reads containing more than 10% N bases or low-quality bases (≤5) were removed from the raw sequencing data. After filtering, 150.59 Gb of cleaned data were retained for the subsequent analysis. To obtain adequate sequencing data for genome assembly, we constructed two 15 kb DNA libraries using the extracted DNA and the standard Pacific Biosciences (PacBio, Menlo Park, CA) protocol, and fragments were selected via the Blue Pippin Size-Selection System (Sage Science, MA, USA). The two libraries were sequenced using Single-Molecule Real-Time (SMRT) cells with the PacBio Sequel II platform (CCS mode) at GrandOmics Biosciences Co., Ltd. (Wuhan, China). After removing adaptors, we obtained 61.81 Gb of HiFi subreads (Table 1) for genome assembly. The genome sequencing data used for the subsequent genome assembly are summarized in Table 1. For Hi-C sequencing, we constructed a library based on the standard protocol of Belton et al. with some modifications 8. Briefly, heart tissue was ground into small pieces and then vacuum infiltrated in a nuclei isolation buffer supplemented with 2% formaldehyde. Crosslinking was halted by the addition of glycine and further vacuum infiltration. Fixed tissue was ground into powder before being re-suspended in nuclei isolation buffer to obtain a nuclei suspension. The purified nuclei were digested with 100 units of DpnII and labeled by incubation with biotin-14-dCTP. Biotin-14-dCTP at non-ligated DNA ends was removed using the exonuclease activity of T4 DNA polymerase. The ligated DNA was fragmented into 300-600 bp fragments, followed by blunt-end repair and A-tailing. The DNA was then purified through biotin-streptavidin-mediated pull-down. Finally, the Hi-C libraries were quantified and sequenced on the BGISEQ DNBSEQ-T7 platform at GrandOmics Biosciences Co., Ltd. (Wuhan, China).
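For the short-read cleaning step described above (performed with fastp), the filtering criterion can be illustrated with a minimal sketch that discards reads with too many N or low-quality bases. This is only an illustration of the criterion, not a substitute for fastp; the interpretation that more than 10% of bases must be N or have Phred quality ≤ 5, the Phred+33 encoding, and the file names are assumptions.

```python
import gzip

def phred_scores(qual_line, offset=33):
    """Convert a FASTQ quality string (Phred+33 assumed) to integer scores."""
    return [ord(c) - offset for c in qual_line]

def keep_read(seq, qual, max_frac=0.10, low_q=5):
    """Keep a read unless >10% of its bases are N or have Phred quality <= low_q."""
    if not seq:
        return False
    n_frac = seq.upper().count("N") / len(seq)
    lowq_frac = sum(q <= low_q for q in phred_scores(qual)) / len(seq)
    return n_frac <= max_frac and lowq_frac <= max_frac

def filter_fastq(path_in, path_out):
    kept = total = 0
    with gzip.open(path_in, "rt") as fin, gzip.open(path_out, "wt") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # header, sequence, '+', quality
            if not record[0]:
                break
            total += 1
            if keep_read(record[1].strip(), record[3].strip()):
                kept += 1
                fout.writelines(record)
    print(f"kept {kept}/{total} reads")

# Hypothetical file names, for illustration only.
# filter_fastq("yunling_R1.fastq.gz", "yunling_R1.filtered.fastq.gz")
```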
RNA sequencing was conducted to generate transcriptome data for predicting gene models. To incorporate as many tissue-specific transcripts as possible, various tissues were collected, as indicated in the sample collection section. TRIzol reagent (Invitrogen, USA) was used to extract RNA separately from all collected tissues, including pectoralis profundus muscle, cervical part of the trapezius muscle, latissimus dorsi muscle, internal abdominal oblique muscle, gluteobiceps muscle, lung, spleen, liver, and heart tissues of Yunling cattle, according to the manufacturer's protocol. RNA quality was checked using a NanoDrop ND-1000 spectrophotometer (Labtech, Ringmer, UK) and a 2100 Bioanalyzer (Agilent Technologies, CA, USA). Next, RNA-seq libraries were prepared using the MGIEasy RNA Sample Prep Kit (BGI, China) and sequenced using the BGISEQ DNBSEQ-T7 platform at GrandOmics Biosciences Co., Ltd. (Wuhan, China). In total, 121.67 Gb of short-read RNA-seq data were obtained (Table 1). These RNA-seq data were used for whole-genome protein-coding gene prediction.
De novo assembly of the Yunling cattle genome. To understand the genomic characteristics of Yunling cattle, k-mer analysis using short paired-end reads was performed prior to genome assembly to estimate the genome size and heterozygosity. In brief, the quality-filtered reads were subjected to a 27-mer frequency distribution analysis using the KMC 9 and GenomeScope 10 software. The following equation was used to estimate the genome size of the Yunling cattle: G = K_num/K_depth (where K_num is the total number of 27-mers, K_depth denotes the k-mer depth, and G represents the genome size). The genome size of the Yunling cattle was estimated from the frequency distribution to be 2.8 Gb. For de novo genome assembly, after obtaining the HiFi long reads, the genome was assembled into a preliminary assembly using Hifiasm with the HiFi long reads. To further improve the accuracy of the assembly, the preliminary assembly was refined with Nextpolish using short reads with default parameters through 4 rounds. The resulting genome was 3.10 Gb in size, composed of 1,129 contigs with a contig N50 of 38.85 Mb (Table 2). The detailed statistical results are shown in Table 2.
Hi-C assisted scaffolding. Quality control of the Hi-C raw data was carried out with the HiC-Pro 11 software. First, low-quality sequences (quality scores <20), adaptor sequences and sequences shorter than 30 bp were filtered out using fastp. Second, the clean paired-end reads were mapped to the assembly using bowtie2 12 (--end-to-end --very-sensitive -L 30) to obtain uniquely mapped paired-end reads. Third, valid interaction read pairs were identified from the unambiguously mapped paired-end reads and retained by HiC-Pro for further analysis. HiC-Pro filters out invalid read pairs such as dangling-end, self-circle, re-ligation and dumped products. The scaffolds were then clustered, ordered, and oriented onto chromosomes by LACHESIS 13. Finally, Juicebox 14 was used to manually correct large-scale inversions and translocations to obtain the final pseudochromosomes. As a result, the chromosome-level genome assembly was 3.09 Gb in length with contig and scaffold N50 values of 35.97 Mb and 112.01 Mb, respectively (Table 2). A heatmap was drawn to illustrate the interactions of each chromosome (Fig. 1).
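The genome-size formula quoted above, G = K_num/K_depth, can be applied directly to the 27-mer histogram produced during the KMC/GenomeScope analysis. The sketch below is a minimal illustration only: the two-column histogram format (depth, count), the crude error-k-mer cutoff at the first valley, and the use of the histogram mode as K_depth are assumptions, and the published ~2.8 Gb estimate came from GenomeScope itself.

```python
def genome_size_from_kmer_histogram(hist_path, max_depth=10000):
    """Estimate genome size as (total k-mer occurrences) / (k-mer peak depth).

    Expects a two-column text histogram, one line per depth: <depth> <count>,
    such as a KMC k-mer histogram.
    """
    hist = {}
    with open(hist_path) as fh:
        for line in fh:
            depth, count = map(int, line.split()[:2])
            if depth <= max_depth:
                hist[depth] = count

    depths = sorted(hist)
    # Crude error cutoff: ignore depths below the first local minimum, which
    # are usually dominated by sequencing-error k-mers.
    first_valley = next((d for d in depths[1:]
                         if hist.get(d - 1, float("inf")) > hist[d] <= hist.get(d + 1, float("inf"))),
                        depths[0])
    usable = [d for d in depths if d >= first_valley]
    k_num = sum(d * hist[d] for d in usable)        # total k-mer occurrences (K_num)
    k_depth = max(usable, key=lambda d: hist[d])    # peak depth of the distribution (K_depth)
    return k_num / k_depth                          # G = K_num / K_depth

# Hypothetical histogram file name, for illustration only.
# print(f"estimated genome size: {genome_size_from_kmer_histogram('yunling_k27.hist')/1e9:.2f} Gb")
```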
To evaluate the quality of the assembled genome, its completeness and accuracy were assessed via BUSCO (Benchmarking Universal Single-Copy Orthologs) analysis and short-read mapping. The completeness of the assembled Yunling cattle genome was assessed using BUSCO 15 with the mammalia_odb10 database. We found that 8,837 (95.78%) of the 9,226 conserved single-copy genes in mammals were present in our assembly (Table 3). We also aligned NGS short reads to the genome and found that 99.03% of the reads were reliably aligned, showing a high mapping ratio for the short-read sequencing data.
Repetitive element identification. We first annotated the tandem repeats using the software GMATA 16 and Tandem Repeats Finder (TRF) 17. GMATA identified the simple sequence repeats (SSRs), while TRF detected all tandem repeat elements across the entire genome. Transposable elements (TEs) in the genome of Yunling cattle were then identified using both ab initio and homology-based methods. Briefly, an ab initio repeat library for the genome of Yunling cattle was initially predicted using MITE-Hunter 18 and RepeatModeler 19 with default settings. The obtained library was aligned against the TEclass Repbase (http://www.girinst.org/repbase) 20 in order to classify the type of every repeat family. To identify repeats across the genome, the RepeatMasker 21 tool was used to search for both known and novel TEs by mapping sequences against the de novo repeat library and the Repbase TE library. Overlapping transposable elements of identical repeat classes were collated and merged. A total of 1.62 Gb of repeat sequences, representing 52.26% of the entire genome, were identified. Among these sequences, 99.80% were classified as known repeat elements, as shown in Table 4.
Protein-coding gene prediction. Three independent approaches, including ab initio prediction, homology search, and reference-guided transcriptome assembly, were used for gene prediction in the repeat-masked genome, resulting in 20,660 genes (Table 5). In detail, the GeMoMa 22 software was utilised to align homologous peptides from related species to the assembly and infer gene structure information. For RNA-seq-based gene prediction, filtered mRNA-seq reads were aligned to the reference genome using STAR 23 with default parameters. The transcripts were assembled using StringTie 24, and PASA 25 was used to predict open reading frames (ORFs). For the de novo prediction, the RNA-seq reads were assembled de novo using StringTie and analyzed with PASA, resulting in the generation of a training set. Augustus 26 with default parameters was then used for ab initio gene prediction on the training set. Finally, EVidenceModeler (EVM) 25 was utilized to generate an integrated gene set, from which genes containing TEs were eliminated using the TransposonPSI 27 package (http://transposonpsi.sourceforge.net/) and miscoded genes were further removed. Untranslated regions (UTRs) and alternative splicing regions were identified via PASA based on the RNA-seq assemblies. We kept the longest transcript for every locus, and regions outside of the ORFs were labelled as UTRs. The mean transcript length and coding sequence size were 41,167.48 bp and 1,604.59 bp, respectively, with an average of 9.32 exons per gene. Additionally, the average exon and intron lengths were 172.2 bp and 4,756.27 bp, respectively (Table 5).
Gene function annotation.
Gene functions, motifs and protein domains were determined through comparison with public databases, including SwissProt, NR (Non-Redundant Protein Database), KEGG (Kyoto Encyclopedia of Genes and Genomes), KOG (Eukaryotic Orthologous Groups of proteins) and GO (Gene Ontology). The InterProScan 28 program with default parameters was used to identify putative domains and GO terms of genes. For the other four databases, Blastp was used to compare the EVidenceModeler-integrated protein sequences against the four well-known public protein databases with an E-value cutoff of 1e−05, and the hits with the lowest E-value were retained. Results from the five database searches were concatenated. A total of 19,172 genes (92.80% of the predicted protein-coding genes) were annotated using the above databases. Specifically, approximately 88.81%, 91.96%, 71.19%, 62.86%, and 65.85% of the genes were annotated in SwissProt, NR, KEGG, KOG, and GO, respectively (Table 6, Fig. 2).
Annotation of non-coding RNAs (ncRNAs). To obtain the ncRNAs (non-coding RNAs), two strategies were used: searching against databases and prediction with models. Transfer RNAs (tRNAs) were identified through the use of tRNAscan-SE 29 with parameters specific to eukaryotes. MicroRNAs, rRNAs, small nuclear RNAs, and small nucleolar RNAs were identified by using Infernal cmscan to search the Rfam 30 database. The rRNAs and their subunits were also predicted using RNAmmer 31. The predicted non-coding genes include 891 miRNAs, 259,398 tRNAs, 3,659 rRNAs, and 737 snRNAs in the Yunling cattle genome (Table 7).
Technical Validation
Quality assessment of the genome assembly. Contigs were scaffolded into 31 superscaffolds, accounting for 99.90% of the total genome size. As shown in the Hi-C heatmap (Fig. 1), the 31 superscaffolds in the Yunling cattle genome could be distinguished and correspond well to the 31 chromosomes. To evaluate the completeness of our assembly, we carried out BUSCO (Benchmarking Universal Single-Copy Orthologs) and CEGMA 47 (Core Eukaryotic Gene Mapping Approach) analyses. The BUSCO results indicated that 8,837 (95.78%) of the 9,226 conserved single-copy genes in mammals were present in our assembly, of which 8,580 were single-copy, 257 were duplicated, and 114 were fragmented matches (Table 3). The CEGMA results indicated that 237 (95.56%) of the 248 core eukaryotic genes were present in our assembly, of which 231 (93.15%) were complete, showing that the core gene content of the genome is relatively complete. To evaluate the accuracy of the assembly, all the short paired-end reads were mapped to the assembled genome using BWA (Burrows-Wheeler Aligner) 48, and the mapping rate as well as the genome coverage of the sequencing reads were assessed using SAMtools 49; we found that more than 93.72% of the genome had >20-fold coverage, indicating high accuracy at the nucleotide level. In addition, the base accuracy of the assembly was calculated with bcftools 50; the base accuracy of the genome is 99.999533% (depth >= 5x). The results of the GC-depth analysis of the genome are shown in Fig. 3. They show that the GC content is distributed between 20-40% and that the sequencing depth is concentrated in the 20-25x region, indicating that there is no exogenous contamination in the genome. These results suggest that our assembly is of high quality and quite complete.
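The >=20-fold coverage fraction reported in the quality assessment above can be recomputed from a per-base depth table such as the three-column output of `samtools depth -a` (chromosome, position, depth). The sketch below streams such a file and reports the fraction of positions at or above a given depth; the file name is a placeholder.

```python
def coverage_fraction(depth_file, min_depth=20):
    """Fraction of reference positions with depth >= min_depth.

    Expects the tab-separated output of `samtools depth -a`:
    <chrom> <pos> <depth>, one line per reference position.
    """
    covered = total = 0
    with open(depth_file) as fh:
        for line in fh:
            total += 1
            if int(line.rsplit("\t", 1)[-1]) >= min_depth:
                covered += 1
    return covered / total if total else 0.0

# Hypothetical file name, for illustration only.
# frac = coverage_fraction("yunling.depth.txt", min_depth=20)
# print(f"{frac:.2%} of positions have >=20x coverage")
```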
Gene prediction and annotation validation. Three independent approaches, including ab initio prediction, homology search, and reference-guided transcriptome assembly, were used for gene prediction in the repeat-masked genome. The EVM 25 software was used to integrate the gene prediction results and generate a consensus gene set. In addition, the functional annotation of these predicted genes revealed that 92.8% of them (19,172 genes) could be assigned to at least one functional term (Table 6, Fig. 2). These findings strongly suggest that the annotated gene set of the Yunling cattle genome is quite complete.
Fig. 2 Venn diagram of annotation results for each database.
Table 1. Sequencing data used for the Yunling cattle genome assembly. Note that the sequence coverage values were calculated based on the genome size estimated by the k-mer-based method.
Table 2. Assembly statistics for the Yunling cattle. In the present research work, a high-quality chromosome-scale genome assembly of the Yunling cattle was constructed by combining PacBio HiFi sequencing, short-read sequencing, and chromosome conformation capture (Hi-C) anchoring, which resulted in a genome approximately 3.09 Gb in length with contig and scaffold N50 values of 35.97 Mb and 112.01 Mb, respectively (Table 2).
Table 4. Summary statistics of repetitive elements in the assembled Yunling cattle genome.
Table 5. Summary statistics of predicted protein-coding genes of Yunling cattle.
Table 6. Number of predicted genes of Yunling cattle functionally annotated by using the indicated databases.
Table 7. Summary statistics of the non-coding RNA annotation results.
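The contig and scaffold N50 values quoted in Table 2 follow the standard definition, sketched below as a minimal helper; in practice the sequence lengths would be parsed from the assembly FASTA or its index rather than typed in by hand.

```python
def n50(lengths):
    """N50: the length L such that sequences of length >= L together cover
    at least half of the total assembly size."""
    total = sum(lengths)
    running = 0
    for L in sorted(lengths, reverse=True):
        running += L
        if running * 2 >= total:
            return L
    return 0

# Toy example with five sequence lengths (total = 20, so N50 = 5):
print(n50([2, 3, 4, 5, 6]))   # -> 5
```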
3,887.8
2024-02-23T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
Relaxation of the Plant Cell Wall Barrier via Zwitterionic Liquid Pretreatment for Micelle-Complex-Mediated DNA Delivery to Specific Plant Organelles
Abstract: Targeted delivery of genes to specific plant organelles is a key challenge for fundamental plant science, plant bioengineering, and agronomic applications. Nanoscale carriers have attracted interest as a promising tool for organelle-targeted DNA delivery in plants. However, nanocarrier-mediated DNA delivery in plants is severely hampered by the barrier of the plant cell wall, resulting in insufficient delivery efficiency. Herein, we propose a unique strategy that synergistically combines a cell wall-loosening zwitterionic liquid (ZIL) with a peptide-displaying micelle complex for organelle-specific DNA delivery in plants. We demonstrated that ZIL pretreatment can enhance cell wall permeability without cytotoxicity, allowing micelle complexes to translocate across the cell wall and carry DNA cargo into specific plant organelles, such as nuclei and chloroplasts, with significantly augmented efficiency. Our work offers a novel concept to overcome the plant cell wall barrier for nanocarrier-mediated cargo delivery to specific organelles in living plants.
Fig. S1 Effects of ZIL and ILs on plant growth.
Fig. S2 AFM height images of cellulose microcrystals pretreated with various concentrations of ZIL.
Fig. S3 Effects of ZIL pretreatment on A. thaliana cotyledon cell walls examined by electron microscopy.
Fig. S5 Comparison of peptide-displaying micelle complexes (MCs) in terms of the transfection efficiency.
Fig. S6 Effects of ZIL on hydrodynamic diameters of CPP-MC determined by DLS measurements.
Fig. S8 Time course evaluation of CPP-MC-mediated nuclear GFP expression in A. thaliana cotyledons.
Fig. S9 CLSM images showing GFP expression in roots 24 h after CPP-MC-mediated transfection with or without ZIL.
Fig. S10 Validation of CPP-MC-mediated GFP expression in A. thaliana seedlings.
Fig. S11 Time course evaluation of Nluc-based transfection efficiency of CPP-MC in A. thaliana cotyledons.
Fig. S12 Comparison of transfection efficiency of CPP-MC between shoots and roots from A. thaliana seedlings with or without ZIL pretreatment.
Fig. S14 Effects of ZIL on CPP-MC-mediated transfection in mature plant leaves.
Fig. S16 Effects of ZIL on hydrodynamic diameters of CTP/CPP-MC determined by DLS measurements.
Fig. S19 Validation of chloroplast-specific GFP expression in the transfected A. thaliana seedlings.
Mature plants at 10 weeks after germination were also used for transfection experiments.
Synthesis and characterization of zwitterionic liquid (ZIL). ZIL was synthesized by a modified protocol according to the previous report.[1] The viability of the seedlings grown on a medium plate was determined by the previously reported Evans blue assay.[1] Data were obtained from four biologically independent samples (each sample consisted of five infiltrated seedlings). The seedlings were photographed at 1, 10, and 20 days after treatment.
Evaluation of the ability of ZIL to dissolve cellulose microcrystals by wide-angle X-ray diffraction (WAXD) and atomic force microscopy (AFM) analyses. The synchrotron WAXD measurements of cellulose nanocrystal pellets were performed similarly as previously reported. For electron microscopy, samples were embedded in an epoxy resin block as previously described.
[5] The serial ultrathin sections (70 nm and 100 nm per section for TEM and FE-SEM, respectively) were obtained from the resin blocks and stained with a uranyl acetate solution for 10 min and a lead citrate solution for 1 min. The samples were observed on a JEM 1400 Flash transmission electron microscope (JEOL) at 80 kV or an SU8220 scanning electron microscope (Hitachi) at an acceleration voltage of 5.0 kV.
Quantitative evaluation of cell wall permeability by quenching assay. To quantitatively evaluate cell wall permeability, we performed a quenching assay reported by Liu et al.[6] The quenching assay was based on CLSM imaging and image analysis.
Preparation and characterization of peptide-displaying micelle complexes. The CPP-modified micelle complex (CPP-MC) was prepared as previously reported, and analysis was performed as previously described.[2] Briefly, the GFP band on the membrane was detected using a combination of a rabbit anti-GFP polyclonal antibody (NB600-308, Novus Biologicals) with a horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG polyclonal antibody (ab6721, Abcam), and imaged with a Fusion Solo imaging system.
Statistical analysis. Statistical tests and graphs were generated with GraphPad Prism 9.
Figure S1. Effects of ZIL and ILs on plant growth. Pictures of A. thaliana seedlings at 1, 10, and 20 days after pretreatment with ZIL, IL-1, IL-2, and water for 3 h. Scale bar, 10 mm.
973
2022-06-07T00:00:00.000
[ "Environmental Science", "Biology", "Materials Science", "Engineering" ]
Synthesis and Characterization of Hydroxyapatite Gel - Nanosilver - Clove Flower Extract (Syzygium aromaticum L.) as a Toothpaste Forming Gel: Dental caries is a disease caused by the bacteria Streptococcus mutans and Lactobacillus, and it affects everyone regardless of age. One way to overcome dental caries is to clean the teeth with a toothpaste made from hydroxyapatite-nano silver-clove extract. Hydroxyapatite is a biomaterial that has biocompatibility, bioactivity, and osteoconductivity, so it can remineralize teeth. Nano silver is an antibacterial component that is able to kill microorganisms that cause dental caries. The addition of clove extract as an antimicrobial and aromatic compound can increase the attractiveness of the product. This study aims to determine the physical and chemical characteristics of the hydroxyapatite-nano silver-clove extract gel formulation with varying concentrations of hydroxyapatite (1%, 2%, and 3%) and nano silver (1 ppm, 5 ppm, and 10 ppm). The results of the homogeneity, pH, spreadability, and adhesion tests showed that the gel met the test standards. The results of the organoleptic tests and statistical analysis showed that variations in the concentration of hydroxyapatite and nano silver affected the color and aroma (P < 0.05), but did not affect the texture (P > 0.05) of the gel. FTIR spectrophotometric analysis showed that the formulation contained OH, CO3^2- and PO4^3- groups derived from hydroxyapatite, C=O derived from nano silver, and OCH, CH3, C-C and =C-H groups derived from clove extract. Because the data were not normally distributed, further statistical tests were needed to test the hypotheses. The next statistical test was the Kruskal-Wallis test (H-test), used to show significant differences between the independent and dependent variable groups in the sample. The outcome of the Kruskal-Wallis test is read from the probability significance value (P-value), because the P-value is related to the proposed hypothesis: if the P-value is > 0.05, the proposed hypothesis is rejected. In the component spectra, bands assigned to aryl groups, methyl (CH3) groups, and the =C-H group characteristic of the eugenol fingerprint region (with cis and trans positions) were also identified. The spectrum of the hydroxyapatite-nano silver-clove extract gel showed an OH group at a wave number of 3307.22 cm-1, an alkane group at 2981.89 cm-1, a methyl group (CH3) at 1642.37 cm-1, a CO3^2- group at 1459.31 cm-1 and 1417.58 cm-1, a PO4^3- group with stretching vibrations at 1136.56 cm-1 and 1078.84 cm-1, and the functional group =C-H (typical of eugenol) in the fingerprint region with the cis position at 835.95 cm-1 and the trans position at 1041.44 cm-1 and 990.89 cm-1. Compared with the spectra of the individual components, shifts in wavelength, increases and decreases in intensity, as well as the appearance and disappearance of absorption peaks were observed.
INTRODUCTION
Dental disease has become the sixth most common disease among patients in Indonesia, owing to a lack of awareness of oral and dental hygiene, so that caries (cavities) have become a threat to dental health. Dental health, in turn, affects the quality of human life. Dental abnormalities and disease can interfere with mastication and cause infection. Moreover, the lack of awareness about dental checkups has worsened the patient rate in Indonesia [1]. Another factor that causes tooth decay is the constant eating of sweet and sticky foods.
This kind of food is the main source of life for bacteria that have the potential to damage our teeth. The higher the caries, the greater possibility of having a toothache. Everyone could experience toothache with pain symptoms in their oral area, therefore, they found it difficult to identify the cause of toothache [2]. The high number of tooth decay cases has increased the need of using biomaterial to form hard tissues on teeth. Biomaterial is a synthetic material that interacts with the biological system of living things. Biomaterial could be used to make an implant to substitute organs with certain processes and mechanisms. Biomaterial consists of several types, for instance, composites, polymer, ceramics, and metals [3]. Hydroxyapatite is a type of biomaterial that is widely applied in the medical field regarding both bones and teeth regeneration and implants. This, because hydroxyapatite has several properties that are in accordance to the nature of tissue: namely biocompatibility (the ability to adapt to the body), bioactivity (the interaction with the body), and good osteoconductive (able to increase hard tissue cells) [4]. In addition, biomaterials can be applied as autograft (replacement of body parts with other body parts in one individual), allograft (replacement of human bones with bones from other humans), and xenograft (replacement of human bones with bones from animals) where all three have the ability to chemically bonded to living human tissue [5]. Hydroxyapatite is a biomaterial that is included in the apatite mineral complex compound [M10(ZO4)6X2] with the chemical formula Ca10(PO4)6(OH)2. This compound is a combination of tricalcium phosphate (Ca3PO4) and calcium hydroxide (Ca(OH)2). Hydroxyapatite synthesis can be carried out by mixing calcium precursors with phosphate precursors [6]. Hydroxyapatite can be synthesized by various methods, one of which is using the calcination method. This method uses the principle of removing the content of organic compounds and water in the material to produce a hydroxyapatite composite. The advantage of this method compared to other methods is that the material can decompose thermally and eliminate organic components, thereby increasing the level of safety of biological factors significantly in its use [7]. Tooth decay is not limited to solutions to replace damaged or lost tooth enamel. However, it is necessary to add other compounds as antibacterial to support hydroxyapatite to be of maximum benefit, such as silver nanoparticles (nano silver). Nano silver has antimicrobial properties so it is applied in the medical field as wound dressings, cotton fibers, and antiseptic mixtures to inhibit bacterial growth as well as an air sterilizer [8]. Synthesis of nano silver was carried out by various methods, namely physical, chemical, and biological methods. One of the common methods carried out by various studies is the chemical reduction method by carrying out a reduction reaction on silver salts such as silver nitrate by reducing agents to accelerate the formation of nanoparticles. This method uses sodium citrate as a reducing agent and produces nano silver compounds with a size of 30-40 nm [9]. Besides, the gel product for toothpaste preparation requires additional aromatic compounds as an allure in its use. There are many aromatic compounds available in nature, e.g., eugenol. Clove (Syzygium aromaticum L.) 
is an example of a natural ingredient with eugenol content >90% and the rest is the total content of other components, such as caryophyllene and benzoic acid [10]. According to Sofihidayanti and Wardatun (2021), the clove plant has 3 main parts that can be used for its essential oil: leaves (1-4%), stalks (5-10%), and flowers (10-20%) with the main content of which is eugenol and other ingredients in the form of caryophyllene, ethyl nitrobenzene, and methyl ethyl benzoic acid. The essential oil of the clove plant has the characteristics of a yellowish color, thick, has a spicy taste, and has a distinctive aroma. Several methods to obtain essential oil from clove plants include extraction, water distillation, steam distillation, and soxhlation methods [11]. In Prasetyo (2018), the clove extraction method produces yields ranging from 61-87% with a variety of solvents. Essential oils are needed in various industries, such as raw materials for food flavors and fragrances, cosmetics, pharmaceuticals, and insecticides. This is because the eugenol compound in clove extract has biological activity as an antioxidant, antifungal, antibacterial, and antiseptic, and is even used as an anesthetic for fish. Cloves also have advantages where the price is cheaper and the extraction process is relatively easy. The properties and characteristics of eugenol that have been mentioned are in accordance with the needs of current research. Based on the aforementioned basis, researchers will develop a gel as a toothpaste preparation with basic ingredients of hydroxyapatite derived from bovine bone (Bos taurus), nano silver, and clove extract (Syzygium aromaticum L.) with concentrate variation of hydroxyapatite and nano silver then afterwards will be carried out characterization using several tests, including homogeneity, pH level, spread ability, adhesion, organoleptic, and identification of functional groups using FTIR spectrophotometer. Synthesis and Preparation of Hydroxyapatite Solution The synthesis of hydroxyapatite was carried out by the calcination method. Beef bones (Bos taurus) were pre-soaked using a 3% H2O2 solution for 2x24 hours with a change of solution every 24 hours. Then, the beef bones were heated at 900˚C for 6 hours and mashed using a pestle mortar. Hydroxyapatite solutions with concentrations of 1%, 2%, and 3% were prepared by dissolving 1 gram, 2 grams, and 3 grams of beef bone in a solution of phosphoric acid. After that, it was stirred until homogeneous and transferred to a 100 mL volumetric flask for storage. Synthesis and Preparation of Nano silver Solution Synthesis of nano silver was carried out by chemical reduction method. A total of 5 mL of 1% Na3C6H5O7 solution was added little by little to 50 mL of 0.001 M AgNO3 solution while stirring and heating. The heating and stirring process was stopped when the color changed to greenish yellow. Nano silver solutions with concentrations of 1 ppm, 5 ppm, and 10 ppm were prepared by dissolving 0.7 mL, 3.5 mL, and 7 mL of nano silver solution into 100 mL of distilled water. After that, it was stirred until homogeneous and transferred to a 100 mL volumetric flask for storage. Homogeneity Test Homogeneity test was carried out by smearing the sample on a flat transparent glass object. Then, it is observed and declared homogeneous if it does not show the presence of coarse grains [12]. pH Level Test The pH level test was carried out using a calibrated pH meter by inserting it into the sample until the pH value displayed was constant. 
Spread Ability Test The spreadability test was carried out by placing 0.5 grams of the sample between two flat, transparent glass slides. The glass was then pressed with loads of 50 grams and 100 grams for 5 minutes each, after which the diameter of the spread sample was measured transversely and longitudinally [13]. Adhesion Test The adhesion test was carried out by placing 0.5 grams of the sample between two flat, transparent glass slides. The glass was pressed with a 1 kg load for 5 minutes; the upper slide was then pulled off with a 100 gram load and the time until the two slides separated was recorded [13]. Organoleptic Test Organoleptic tests were carried out using the senses of sight, smell, and touch by observing the color, smelling the aroma, and feeling the texture of the sample [12]. Functional Group Characterization using FTIR Spectrophotometer Functional group characterization was carried out by placing the sample in the holder of an FTIR spectrophotometer set to the wavenumber range of 4000-550 cm-1. Functional groups are identified from the absorption peaks at particular wavenumbers as infrared radiation passes through the sample; the analysis consists of assigning functional groups to the peaks of the IR spectrum of transmission versus wavenumber [14]. ANALYSIS AND DISCUSSION Synthesis and Preparation of Hydroxyapatite Solution The synthesis of hydroxyapatite was carried out by the calcination method. This method has an advantage over other methods because it removes the organic components and pathogen genomes in the bone, thereby increasing biological safety [7]. The beef bones are first soaked in 3% H2O2 solution. Hydrogen peroxide is an irreversible electron acceptor that produces OH- ions and thereby forms hydroxyl radicals that oxidize organic compounds: the more hydroxyl radicals are formed, the more organic compounds are degraded and the lower the residual organic content [15]. Immersion in 3% H2O2 is carried out for 2x24 hours with the solution replaced every 24 hours so that its effectiveness as an oxidizing agent, bleaching agent, and antiseptic is maintained [16]. The beef bones are then heated in a furnace at 900˚C. This is the optimum temperature because it is at this temperature that calcium carbonate is formed; below or above 900˚C, the phase formed is instead tricalcium phosphate (TCP) [17]. During this stage, water evaporates completely at 250˚C, the organic components are fully oxidized at 450˚C, and CaCO3 decomposes completely into CaO at 750-1,000˚C, as evidenced by the bones turning white [18]. Based on the research of Afifah and Cahyaningrum (2020), the use of high temperatures in the calcination process yields a purer hydroxyapatite composite and increases its degree of crystallinity.
The calcined bovine bone is then reacted with phosphoric acid solution, which serves as the phosphate source and forms the hydroxyapatite composite according to the reaction: 10CaO(s) + 6H3PO4(l) → Ca10(PO4)6(OH)2(s) + 8H2O(l). Synthesis and Preparation of Nano Silver Solution The synthesis of nano silver was carried out by chemical reduction, reacting silver nitrate (AgNO3) with sodium citrate (Na3C6H5O7), which functions as the reducing agent, according to the reaction: 4Ag+ + Na3C6H5O7 + 2H2O → 4Ag0 + C6H5O7H3 + 3Na+ + H+ + O2. Reduction with sodium citrate or sodium borohydride as the reducing agent for the silver nitrate salt is the most frequently used route because its procedure is simple compared with other methods such as electrochemistry, ultrasonic irradiation, photochemistry, and sonochemistry [19]. The Na3C6H5O7 solution was added to the AgNO3 solution drop by drop at a constant rate until the color changed to greenish yellow, the color typical of silver nanoparticles (AgNPs) [20]. The color change is caused by the collective oscillation of conduction-band electrons, which indicates that nano silver has formed. The higher the AgNO3 concentration, the faster the reaction proceeds and the darker the resulting nano silver color [19]. Clove (Syzygium aromaticum L.) Extraction The cloves must first be dried to reduce their water content so that a large amount of extract is obtained during extraction. They are then cut as small as possible so that the extract can be recovered completely while also accelerating solvent evaporation when the solvent is separated from the extract [21]. The cloves were then macerated in distilled water for 3x24 hours; distilled water is considered safer than other solvents such as ethanol, n-hexane, and benzene, especially when the extract is to be applied to living organisms. Distilled water is also stable and readily evaporated, which speeds up the solvent-removal step [22]. The solvent was then evaporated in an evaporator at 100˚C: the boiling point of distilled water is 100˚C, whereas the clove extract, an oil, boils at 253˚C, so the water evaporates completely and leaves a very thick, dark brown clove extract with a strong, distinctive aroma [21]. Making the Toothpaste Gel Base The distilled water is heated at 80˚C to speed up the reaction between the components, since reactions generally run faster at higher temperatures; temperature is one of the factors that increase reaction rate, alongside catalyst, volume, and concentration [23]. A temperature of 80˚C was also chosen because distilled water boils at about 100˚C: if the temperature is too high the solvent evaporates, and if it is too low the solution is not hot enough and the reaction proceeds slowly. The main ingredients, hydroxyapatite, nano silver, and clove extract, were then added, together with CMC as a gelling agent to increase viscosity, because CMC is stable in both acidic and alkaline conditions (pH 2-10) and forms a network structure capable of maintaining the gel system [24].
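As a brief aside, the stoichiometry of the hydroxyapatite-forming reaction written above can be cross-checked with a short script. The molar masses are standard literature values rather than data from this study, and the calculation is only a back-of-the-envelope sketch of the expected Ca/P ratio and reagent demand.

```python
# Back-of-the-envelope stoichiometry for: 10 CaO + 6 H3PO4 -> Ca10(PO4)6(OH)2 + 8 H2O
# (standard molar masses in g/mol; not taken from the paper)
M_CaO   = 56.08
M_H3PO4 = 98.00
M_HA    = 1004.6   # Ca10(PO4)6(OH)2

ca_per_formula = 10
p_per_formula = 6
print(f"Ca/P molar ratio in hydroxyapatite: {ca_per_formula / p_per_formula:.2f}")  # ~1.67

# Phosphoric acid required per gram of CaO, assuming complete conversion
grams_CaO = 1.0
mol_CaO = grams_CaO / M_CaO
mol_H3PO4 = mol_CaO * 6 / 10
print(f"H3PO4 needed per g CaO: {mol_H3PO4 * M_H3PO4:.2f} g")
print(f"Theoretical hydroxyapatite yield: {mol_CaO / 10 * M_HA:.2f} g")
```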
Besides CMC, the added propylene glycol and glycerin act as humectants that maintain stability and moisture so that the gel does not dry out easily; humectants retain water by absorbing moisture and reducing evaporation [25]. Homogeneity Test The homogeneity test was carried out to confirm that the components of the gel were well mixed. According to quality standard SNI No. 12-3524-1995, a good toothpaste must be homogeneous, without coarseness, air bubbles, lumps, or separated particles [26]. Based on the data in Table 2, every gel formulation was homogeneous, indicating that the components of the gel were mixed well and evenly, with no air bubbles and no coarse grains [27]. pH Level Test The pH test determines the degree of acidity of the sample so that it can be matched to the conditions of the oral mucosa. The pH value relates to the stability of the active substance, its effectiveness, and the state of the compound as acidic, basic, or neutral [27]. According to quality standard SNI No. 12-3524-1995, the pH of toothpaste should range from 4.5 to 10.5 [26], and the pH of the oral mucosa ranges from 6.5 to 7.5 [28]. Based on the data in Table 3, the pH values of F3, F6, and F8 met the quality standard, while those of F1, F2, F4, F5, F7, and F9 did not. A low pH can promote the growth of acidogenic bacteria that thrive in an acidic environment (pH 4.5-5.5), such as Streptococcus mutans and Lactobacillus; a pH below 4.5 can also increase demineralization and damage to tooth enamel [26]. A high pH has the opposite effect: it suppresses the growth of Streptococcus mutans and Lactobacillus while strengthening the enamel layer, so that tooth remineralization occurs and the potential for dental caries is reduced [29]. Spread Ability Test The spreadability test was carried out to determine how well the gel spreads when applied to the tooth surface. A gel has a semi-solid texture that should spread easily at the site of application, providing comfort to the user; the easier the gel is to apply, the wider the contact surface area, which optimizes the absorption of the active substance. A good gel has a standard spread of 5-7 cm [25]. Based on the data in Table 4, every formulation met the standard for good gel spread. The spread area increased in direct proportion to the mass of the applied load. The increase and decrease in spread are determined by the consistency of the gel, which is related to its viscosity: when the viscosity is low the spread is wider, and vice versa, because a low viscosity lets the gel flow more smoothly and spread well [13]. According to Rohmani and Kuncoro (2019), the spread also reflects the stability of CMC as the gelling agent, which governs the strength of the gel matrix; the dominant factor determining the spread response is therefore the concentration and amount of CMC involved. Temperature and storage packaging that is insufficiently impermeable also affect the viscosity of CMC, because the gel absorbs water vapor and then releases Na+ ions, which are replaced by H+ ions to form HCMC, a compound of higher viscosity [30].
Other gel components, such as propylene glycol and glycerin, whose predominant phase is liquid, also reduce the viscosity of CMC and thereby affect the spreadability of the gel [25]. Adhesion Test The adhesion test was carried out to assess how strongly the gel adheres to the tooth surface. A good gel has a standard adhesion time of 1-6 seconds [30]. Based on the data in Table 5, every gel formulation met the criterion for good adhesion. High adhesion indicates that the gel texture is denser, more elastic, and adheres easily, but spreads poorly; low adhesion indicates that the gel is more liquid, less elastic, and adheres poorly, but spreads very well [31]. Adhesion also affects the absorption of the active ingredients in the gel: the longer the gel adheres, the more active substance is absorbed [27]. Organoleptic Test The organoleptic test aims to evaluate the gel visually for phase separation or color change, so that the product will be acceptable to consumers [24]. Ten respondents assessed the color, aroma, and texture of each gel; an increase in each characteristic is represented by adding a plus sign (+) for each increment. The results of the organoleptic tests are shown in the graphs attached below. Based on the data in Figure 1, the respondents rated the visible color as follows: for F1, F2, and F3, all ten respondents described the gel as brown; for F4, F5, and F6, all ten described it as brown (+); and for F7, F8, and F9, all ten described it as brown (++). These results indicate that increasing the concentrations of hydroxyapatite and nano silver affects the color of the gel: the higher the concentration, the more pronounced the color. Based on the data in Figure 2, the respondents rated the aroma as follows: for F1, F2, and F3, all ten respondents reported a distinctive clove scent (++); for F4, F5, and F6, a distinctive clove scent (+); and for F7, F8, and F9, a distinctive clove scent. These results indicate that increasing the concentrations of hydroxyapatite and nano silver affects the aroma of the gel: the higher the concentration, the weaker the characteristic aroma. Based on the data in Figure 3, the respondents rated the texture as follows: for F1, most respondents described the formulation as gel textured; for F2, F4, F5, and F6, most described it as gel textured (+); for F9, most described it as gel textured (++); for F3, most described it as gel textured or gel textured (++); for F7, most described it as gel textured (+) or gel textured; and for F8, most described it as gel textured (+) or gel textured (++). These results indicate that the differences in gel texture are influenced by the amount of CMC: the greater the mass of the gelling agent, the denser the gel texture, because CMC governs the viscosity of the gel preparation.
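To connect the organoleptic '+' ratings to the statistical analysis described below, the ratings can be encoded as ordinal scores and passed to the same Shapiro-Wilk and Kruskal-Wallis tests via scipy.stats (shapiro and kruskal). The rating lists in this sketch are invented for illustration; they are not the respondents' data.

```python
# Encode '+' ratings as ordinal scores and reproduce the test sequence used in the study:
# Shapiro-Wilk for normality, then Kruskal-Wallis if the data are not normal.
from scipy import stats

def encode(rating: str) -> int:
    """'brown' -> 0, 'brown (+)' -> 1, 'brown (++)' -> 2, etc."""
    return rating.count("+")

# Illustrative (made-up) ratings for three concentration groups, ten respondents each.
ratings = {
    "F1-F3": ["brown"] * 8 + ["brown (+)"] * 2,
    "F4-F6": ["brown (+)"] * 7 + ["brown (++)"] * 3,
    "F7-F9": ["brown (++)"] * 9 + ["brown (+)"],
}
scores = {group: [encode(r) for r in rs] for group, rs in ratings.items()}

for group, vals in scores.items():
    w, p = stats.shapiro(vals)
    print(f"{group}: Shapiro-Wilk p = {p:.3f}")          # p < 0.05 -> not normally distributed

h, p = stats.kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")        # p < 0.05 -> concentration affects the attribute
```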
Statistical tests were also carried out to determine the effect of varying the concentrations of hydroxyapatite and nano silver on the color, aroma, and texture of the gel preparation; the hypothesis proposed was that "the concentrations of hydroxyapatite and nano silver have an effect on the color, aroma, and texture of the gel preparation". The data were first analyzed with the Shapiro-Wilk normality test; if the data were not normally distributed, the analysis continued with non-parametric statistical tests. The Shapiro-Wilk test is a statistical test for determining whether or not data are normally distributed and is generally applied in regression analysis to random data of small size (N < 50). Based on the data in Table 6, some of the data have a significance value < 0.05, indicating that the data are not normally distributed, so further non-parametric testing was needed to evaluate the hypothesis stated above. The next statistical test was the Kruskal-Wallis test (H-test), which shows whether there are significant differences between the groups of the independent and dependent variables in the sample. The outcome of the Kruskal-Wallis test is read from the probability significance value (P-value), because the P-value relates to the proposed hypothesis: if the P-value is > 0.05, the proposed hypothesis is rejected. Based on the data in Table 7, the color and aroma factors have P-values < 0.05 while texture has a P-value > 0.05, which means that the variation in the concentrations of hydroxyapatite and nano silver affects the color and aroma but has no effect on the texture of the gel preparation. Characterization of Functional Groups using FTIR Instruments The FTIR (Fourier Transform Infrared) spectrophotometer is a chemical characterization method for qualitatively analyzing the functional groups in a sample based on the absorbance of infrared light. The method operates over the wavenumber region of 4000-550 cm-1, and functional-group analysis is based on increases or decreases in intensity as well as the appearance or disappearance of spectral bands [7]. The characteristic functional groups of hydroxyapatite are the OH group in the 3400-2700 cm-1 region, the CO3^2- group in the 1700-1400 cm-1 region, and the phosphate group (PO4^3-) in the 1150-560 cm-1 region [32]. The spectrum of the hydroxyapatite gel shows the OH group at a wavenumber of 3307.32 cm-1, arising from vibration of the H-O-H molecule; the CO3^2- group at 1634.53 cm-1, which is produced during the calcination process when the carbonate group binds to calcium to form calcium carbonate; and the PO4^3- group, consisting of stretching vibrations at 1161.42, 1075.67, and 1003.73 cm-1 and medium-intensity vibrations at 939.59 cm-1 [33]. The nano silver gel spectrum showed OH groups at 3307.72 cm-1 and a carbonyl group
6,174
2022-07-06T00:00:00.000
[ "Materials Science" ]
Approximation and characterization of Nash equilibria of large games We characterize Nash equilibria of games with a continuum of players in terms of approximate equilibria of large finite games. This characterization precisely describes the relationship between the equilibrium sets of the two classes of games. In particular, it yields several approximation results for Nash equilibria of games with a continuum of players, which roughly state that all finite-player games that are sufficiently close to a given game with a continuum of players have approximate equilibria that are close to a given Nash equilibrium of the non-atomic game. (We wish to thank three anonymous referees for helpful comments. Financial support from Fundação para a Ciência e a Tecnologia is gratefully acknowledged.) Introduction Models with a continuum of agents are viewed as a tractable idealization of situations involving a large but finite number of negligible agents. This view requires considerations of the relationship between results in models with a continuum of agents and models with a large but finite number of them. In the context of general equilibrium theory, this issue was taken up e.g. by Hildenbrand (1970), Hildenbrand and Mertens (1972), Hildenbrand (1974), Emmons and Yannelis (1985) and Yannelis (1985), amongst others. Analogous results have been obtained in the context of game theory by Green (1984), Housman (1988), Khan and Sun (1999), Khan et al. (2013) and He et al. (2017). In this paper, we revisit these latter results. In Carmona and Podczeck (2020a, b), we have recently shown that, under an equicontinuity assumption on players' payoff functions, equilibria of games with a continuum of players are the limit points of equilibria of games with a large but finite number of players. These results bear some surprise: It is well known that there is often an "explosion" of Nash equilibria in the limit, which suggested that limits of pure strategy Nash equilibria in sequences of finite-player games converging to a given non-atomic game could form a proper subset of the equilibrium set of the limit non-atomic game. As our results in those previous papers show, this intuition is incorrect. However, this discussion also means that, while there is at least one sequence of finite-player games converging to a given game with a continuum of players and having equilibria converging to a given equilibrium of the game with a continuum of players, not all sequences of finite-player games converging to a given game with a continuum of players have equilibria converging to a given equilibrium of the game with a continuum of players. The goal of this paper is to investigate whether or not this latter property is true in terms of approximate equilibria. Previously, in Carmona and Podczeck (2009), we have shown, among other things, that the existence of equilibrium in games with a continuum of players is equivalent to the existence of approximate equilibria in games with a large finite number of players (relative to some class of games). In the present paper, we take a closer look at this equivalence. To describe our results informally, given any game with a probability space of players and any strategy profile, let us call the distribution of payoff functions the characteristics distribution, and the joint distribution of payoff functions and actions the characteristics/actions distribution.
Our first result provides a general characterization of equilibria in continuum games, saying that a strategy profile in a game with a continuum of players is a Nash equilibrium if and only if the corresponding characteristics/actions distribution can be approximated by characteristics/actions distributions of a sequence of finite player games where the number of players increases to infinity and the strategy profiles are asymptotically Nash (see the equivalence between conditions 1 and 2 in Theorem 1). Our second result (the equivalence between conditions 1 and 3 in Theorem 1) says that, for a game G with a continuum of players, a given strategy profile f is a Nash equilibrium if and only if the following is true: If G' is any game with a finite but large enough number of players and f' is any strategy profile in G' such that the characteristics/actions distribution induced by G' and f' is close to that induced by G and f, then f' is close to being a Nash equilibrium of G'. Based on our general characterization result, we also provide approximation results which roughly say that all finite-player games with a characteristics distribution sufficiently close to that of a given game with a continuum of players have approximate equilibria that are close to a given Nash equilibrium of the non-atomic game. We remark that the notion of a strategy profile being close to being a Nash equilibrium can be formalized in two ways: all players choose actions that are close to being optimal responses, or most players choose actions that are close to being optimal responses. We provide approximation results for equilibria in continuum games under both of these formalizations (Theorems 2-4). The results with the former formalization (Theorems 3, 4) require, respectively, an equicontinuity and a compactness assumption on the set of the payoff functions appearing in a given continuum game. Our approximation result with the second formalization (Theorem 2), however, holds without such assumptions. The results of our paper are formulated in the setting of games where players have a common (compact metric) action space and each player's payoff function depends on his action and on the distribution of actions chosen by all players. This setting provides a sufficiently general framework where our questions can be addressed and our results can be established using (somewhat) elementary arguments. It is likely that these arguments can be extended to obtain more general results, for instance, by allowing the action space to differ across players or by allowing each player's payoff function to depend on the choices of the others in some more general way. We remark that the characterization result provided by (1) ⇔ (3) of Theorem 1 is related to the characterization result for equilibrium distributions of non-atomic games in Lemma 5 of Carmona and Podczeck (2009). However, whereas the latter result applies only to non-atomic games with finite actions and with finitely many characteristics, the characterization result of the present paper holds for general non-atomic games. The paper is organized as follows. Section 2 provides a literature review and Sect. 3 a motivating example. In Sect. 4, we introduce our notation and basic definitions. We present our characterizations in Sect. 5 and our approximation results in Sect. 6. Some auxiliary results are in the "Appendix". Literature review Two papers that are closely related to ours are Green (1984) and Housman (1988).
Both consider the upper hemicontinuity of the Nash equilibrium correspondence and, in particular, obtain limit results for sequences of equilibria of games with a large but finite number of players that converge to any given game with a continuum of players. In contrast, our characterization result is concerned not only with sufficient conditions for a strategy profile to be a Nash equilibrium of a game with a continuum of players, but also with conditions that are, in addition, necessary for this property. Housman (1988) shows, in addition to the above, that in the space of convex games (i.e., games where action spaces are convex subsets of some vector space and payoff functions are quasi-concave in the player's own action), every Nash equilibrium of a game with a continuum of players induces a distribution that, for every game close to the given one, is close to the distribution of an approximate equilibrium of the latter game, provided the payoff functions involved satisfy some equicontinuity condition. The latter result in Housman (1988) thus amounts to some sort of lower hemicontinuity result. Actually, the argument in Housman (1988) does not require convexity assumptions on a non-atomic limit game but it requires convexity assumptions on approximating finite-player games. In contrast, our results do not require such convexity assumptions. In particular, our results apply to settings where there is no linear structure on the action sets. Our work is also related to Khan and Sun (1999) who show, using arguments from non-standard analysis, that the existence of Nash equilibrium for games where the space of players is an atomless Loeb probability space implies the existence of approximate equilibria of games with a large finite number of players. Apart from this, there are no further results concerning the characterization of equilibria in non-atomic games. For literature addressing the relationship between games with a large finite number of players and games with a continuum of players in more specific contexts, see, e.g., the work of Dubey et al. (1980) on market games, of Mas-Colell (1983) and Novshek and Sonnenschein (1983) on Cournot competition, and of Allen and Hellwig (1986a) and Allen and Hellwig (1986b) on Bertrand competition. As noted above, related questions were addressed earlier in the context of general equilibrium theory. As in these latter papers, we relate continuum games and large finite-player games in terms of distributions of agents' characteristics. Because there is no reasonable topology on sets of players, a connection between these classes of games cannot be made in terms of graphs of maps from sets of players to a set of players' characteristics, but only in terms of distributions of these maps. Motivating example We consider Example 1 of Carmona and Podczeck (2020b). As in there, consider a large population of individuals who face a coordination problem. The optimal choice of any individual depends on the choice of all others through their influence on how popular each of the two options is. Specifically, assume that each individual in the population has to choose one of two options, 0 or 1; thus, there is a common action set A = {0, 1}. The relative frequencies with which the options are chosen are described by the vector π = (π0, π1) in the unit simplex Δ of R2; π is referred to as the action distribution. Half of the population has preferences that are maximized when the action chosen matches the most frequent action.
The payoff function of each of these players is denoted by u c and is defined by setting, for each a ∈ A and π ∈ , The remaining half of the population has preferences that are maximized when the action chosen matches the least frequent action. The payoff function of each of these players is denoted by u d and is defined by setting, for each a ∈ A and π ∈ , We have specified this example without being explicit about the population of players. Suppose now that there is a continuum of players, described by an atomless probability space (T , , ϕ), playing this game and let In contrast, as we have shown in Carmona and Podczeck (2020b), when there are n ∈ N players, with n ≥ 4 and even, the resulting game, denoted by G n , has no Nash equilibrium. However, as we show in this paper and illustrate here, for all ε > 0, there is a δ > 0 such that if f a Nash equilibrium of G, then, whenever 1/n < δ and n ≥ 4 and even, G n has an ε-equilibrium f n such that the distribution it induces together with the players' payoff functions is within ε of the distribution ϕ • (V , f ) −1 in the game with a continuum of players. To provide more detail regarding the above, let V n (i) be the payoff function of player i ∈ {1, . . . , n} in the n-player game G n and ϕ n be the normalized counting Then the distribution of payoff functions and actions induced by V n and a strategy f n is ϕ n • (V n , f n ) −1 . Since the set of possible payoff functions and actions is finite, we say that To see that the above claim holds, write τ u,a for ϕ . Thus, f n is a 2/nequilibrium. Hence, for each ε > 0, we can simply let δ = ε/2 to prove the above claim. Notation and definitions We consider games where all players have the same action space S and where each player's payoff depends on his choice and on the distribution of actions induced on S by the choices of all players. The formal setup of the model is as follows. The action space S common to all players is a compact metric space. We let M(S) denote the set of Borel probability measures on S endowed with the narrow topology, 4 and C the space of bounded, continuous, real-valued functions on S × M(S) endowed with the sup-norm. Note that since S is a compact metric space, M(S) is compact and metrizable, and hence C is a complete separable metric space. The space of players is described by a probability space is the probability space of players, S is the action space and V is a measurable function from T to C; V (t) is the payoff function of player t, with the interpretation that V (t)(s, γ ) is player t's payoff when he plays action s and faces a distribution γ in M(S) induced by the actions of all players. We will consider only games G = ((T , , ϕ), V , S) where either (T , , ϕ) is atomless and complete, or T is finite, = 2 T and ϕ is the uniform distribution on T (i.e., ϕ({t}) = 1/|T | for all t ∈ T ). The former case will be referred to as a non-atomic game, and the latter as a finite-player game. By a strategy profile f in a game G = ((T , , ϕ), V , S) we mean a measurable function f : T → S. Measurability of a strategy profile f ensures that the distribution of f is defined in M(S), so that f can be evaluated by players' payoff functions. Of course, measurability does not impose any restriction on strategy profiles if G is a game with finitely many players. 
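Returning briefly to the coordination example of Section 3, a direct enumeration makes the 2/n bound concrete. The payoff specification used below (a conformist's payoff equals the frequency of the action chosen, an anticonformist's payoff equals the frequency of the other action) is an assumption made for illustration, since the exact definitions of u_c and u_d are not reproduced in the text above.

```python
# Numerical sketch: n players (n even), half conformists, half anticonformists,
# two actions {0, 1}. Assumed payoffs (not taken verbatim from the paper):
#   conformist:      u_c(a, pi) = pi[a]
#   anticonformist:  u_d(a, pi) = pi[1 - a]
from fractions import Fraction

def action_distribution(profile, n):
    ones = sum(profile)
    return (Fraction(n - ones, n), Fraction(ones, n))

def payoff(is_conformist, action, pi):
    return pi[action] if is_conformist else pi[1 - action]

def max_deviation_gain(profile, types):
    n = len(profile)
    worst = Fraction(0)
    for i, (a, is_conf) in enumerate(zip(profile, types)):
        pi = action_distribution(profile, n)
        current = payoff(is_conf, a, pi)
        deviated = list(profile)
        deviated[i] = 1 - a
        pi_dev = action_distribution(deviated, n)
        gain = payoff(is_conf, 1 - a, pi_dev) - current
        worst = max(worst, gain)
    return worst

for n in (4, 10, 100):
    half = n // 2
    types = [True] * half + [False] * half          # conformists first
    profile = ([0, 1] * n)[:n]                      # alternate actions -> pi = (1/2, 1/2)
    gain = max_deviation_gain(profile, types)
    print(f"n={n}: max unilateral gain = {gain} (bound 2/n = {Fraction(2, n)})")
```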
Note that, in any case, if f : T → S is measurable, then so is any function f' : T → S which differs from f at only one point of T, so the notion of strategy profile we employ captures individual deviations from any given strategy profile. In the sequel, given any game G = ((T, Σ, ϕ), V, S) and any strategy profile f in G, the distribution of f is denoted by ϕ • f −1 , and player t's payoff by U(t)(f) = V(t)(f(t), ϕ • f −1 ). (1) Further, f \ t s denotes the strategy profile obtained from f when player t changes his choice from f(t) to s; note that, in a non-atomic game, no player has any impact on the distribution of actions. For all ε ≥ 0, the set {t ∈ T : U(t)(f) ≥ sup s∈S U(t)(f \ t s) − ε} is measurable. Indeed, this is clear for a game with finitely many players. If the space (T, Σ, ϕ) of players is non-atomic then, by the previous paragraph, this set is just {t ∈ T : V(t)(f(t), ϕ • f −1 ) ≥ sup s∈S V(t)(s, ϕ • f −1 ) − ε}. For all real numbers ε, η ≥ 0, we say that a strategy profile f is an (ε, η)-equilibrium of the game G = ((T, Σ, ϕ), V, S) if ϕ({t ∈ T : U(t)(f) < sup s∈S U(t)(f \ t s) − ε}) < η. (2) Thus, in an (ε, η)-equilibrium, only a fraction of players smaller than η can gain more than ε by deviating from f. Let X be a metric space. The Borel σ-algebra of X is denoted by B(X). For all x ∈ X, 1 x denotes the Dirac measure at x, i.e., 1 x (B) = 1 if x ∈ B and 1 x (B) = 0 otherwise, for all B ∈ B(X). If Y is a metric space and τ ∈ M(X × Y ), τ X (resp. τ Y ) denotes the marginal distribution of τ on X (resp. Y ). Characterization of Nash equilibria of non-atomic games The main result of this section states two characterizations of Nash equilibria of non-atomic games in terms of (ε, η)-equilibria of games with finitely many players. One characterization shows that a strategy profile in a non-atomic game is a Nash equilibrium if and only if the corresponding characteristics/actions distribution can be approximated by the characteristics/actions distributions of games with finitely many players and approximate equilibria in these games, with the level of approximation being as small as desired if the number of players is sufficiently large. This is the content of (1) ⇔ (2) in Theorem 1. The second characterization, which is (1) ⇔ (3) in that theorem, says that, given a non-atomic game G, a strategy profile f is a Nash equilibrium in G if and only if for any finite-player game G' and any strategy profile f' in G' such that the characteristics/actions distribution induced by G' and f' is close to that induced by G and f, it is true that f' is an approximate equilibrium of G', where the level of approximation is as small as desired provided that the number of players in G' is sufficiently large. The first of these two characterizations may be seen as a limit result for approximate equilibria of large finite-player games whereas the second may be seen as a result on asymptotic implementation of equilibria of games with a continuum of players. Theorem 1 Let G = ((T, Σ, ϕ), V, S) be a non-atomic game, and f a strategy profile in G. Then the following are equivalent. 1. f is a Nash equilibrium of G. 2. There are sequences {G n } n , { f n } n and {ε n } n , where G n = ((T n , Σ n , ϕ n ), V n , S) is a finite-player game and f n is an (ε n , ε n )-equilibrium of G n for each n ∈ N, such that |T n | → ∞, ε n → 0 and ϕ n • (V n , f n ) −1 → ϕ • (V , f ) −1 . 3. Whenever {G n } n and { f n } n are sequences such that G n = ((T n , Σ n , ϕ n ), V n , S) is a finite-player game and f n is a strategy profile of G n for each n ∈ N, such that |T n | → ∞ and ϕ n • (V n , f n ) −1 → ϕ • (V , f ) −1 , then there is a sequence {ε n } n in R + such that ε n → 0 and f n is an (ε n , ε n )-equilibrium of G n for all sufficiently large n. Before we state the proof, we remark that (2) ⇒ (1) of this theorem is already contained in Lemma 11 of Carmona and Podczeck (2009).
This latter paper also contains a result that is analogous to (1) ⇒ (3) but covers only the special case of finite action spaces and players' characteristics belonging to a finite set. Because of this, the argument in the proof of this latter result does not apply to the more general setting treated in Theorem 1. Proof of Theorem 1 (1) ⇒ (3) For all We need to show that lim n ε n = 0. Set γ = ϕ • f −1 , and for each n ∈ N, set γ n = ϕ • f −1 n . By hypothesis, γ n → γ . For each n ∈ N, let B n = {γ ∈ M(S) : ρ(γ n , γ ) ≤ 1/|T n |}. Then for each n ∈ N, B n is compact and for each t ∈ T n and s ∈ S, we have ϕ • ( f n \ t s) −1 ∈ B n , by definition of the Prohorov metric. Note also that since |T n | → ∞, we have γ n → γ whenever {γ n } n is a sequence with γ n ∈ B n for each n ∈ N. Since S and the sets B n are compact, we can define continuous functions h and h n from C × S to R + by setting h(u, x) = max{u(y, γ ) : y ∈ S} − u(x, γ ) and h n (u, x) = max{u(y, γ ) : y ∈ S, γ ∈ B n } − u(x, γ n ). Using the compactness of S and the fact that γ n → γ whenever γ n ∈ B n for each n ∈ N, it is straightforward to check that h n → h uniformly on compact subsets of C × S. Set τ = ϕ • (V , f ) −1 , and for each n ∈ N, set τ n = ϕ n • (V n , f n ) −1 . Since f is a Nash equilibrium of the non-atomic game G, we have On the other hand, note that for each n ∈ N and each t ∈ T n , we have because ϕ n • ( f n \ t s) −1 ∈ B n for each s ∈ S. Consequently, given ε > 0, for each n ∈ N we have Since τ n → τ by hypothesis, since h and, for all n ∈ N, h n are continuous, and since h n → h uniformly on compact subsets of C × S, we must have τ n • h −1 n → τ • h −1 by Hildenbrand (1974, p. 51, 38). In particular, we have that Hence, by (3) and (4), ϕ n {t ∈ T n : U n (t)( f n ) ≤ sup s∈S U n (t)( f n \ t s) − ε} < ε for all n sufficiently large. This implies that ε n ≤ ε and, since ε is arbitrary, that lim n ε n = 0. (3) ⇒ (2) Recall the standard fact that if G is a non-atomic game and f a strategy profile in G, then a sequence {G n } n = {((T n , n , ϕ n ), V n , S)} n of finite-player games together with a sequence { f n } n of strategy profiles for the G n 's such that ϕ n • (V n , f n ) −1 → ϕ • (V , f ) −1 and |T n | → ∞ does exist. 6 In view of this fact, it is clear that (3) implies (2). To this end, for each n ∈ N, set τ n = ϕ n • (V n , f n ) −1 and γ n = ϕ • f −1 n . Let S n ⊆ T n be given as S n = {t ∈ T n : U n (t)( f n ) < sup s∈S U n (t)( f n \ t s) − ε n }. Set A n = (V n , f n )(S n ) and note that τ n (A n ) = ϕ n (S n ) for each n ∈ N. Thus τ n (A n ) → 0 by hypothesis. Consider any (u, x) ∈ supp(τ ). Since τ n → τ by hypothesis, by Lemma 12 in Carmona and Podczeck (2009), we may find a subsequence {τ n k } k of {τ n } n and, for each k ∈ N, a point (u k , x k ) ∈ supp(τ n k )\A n k so that (u k , x k ) → (u, x). In particular, then, for each k ∈ N, (u k , x k ) = (V n k , f n k )(t k ) for some t k ∈ T n k \S n k . Pick any y ∈ S and, for each k ∈ N, let γ k = ϕ n k • ( f n k \ t k y) −1 . Then for each k ∈ N we must have u k (x k , γ n k ) ≥ u k (y, γ k ) − ε n k because t k ∈ T n k \S n k . Note also that γ n k → γ and hence, since 1/|T n | → 0, γ k → γ as well. Since ε n k → 0 by the hypotheses of the lemma, it follows that u(x, γ ) ≥ u(y, γ ). As y ∈ S was chosen arbitrarily, we may conclude that u(x, γ ) = max y∈S u(y, γ ). Approximation of Nash equilibria of non-atomic games In this section, we present several approximation results for Nash equilibria of games with a continuum of players. 
The motivation for these results arises from the characterization of Nash equilibria of non-atomic games presented in Theorem 1, which can roughly be described as establishing the approximate continuity of the equilibrium correspondence in the following sense. The equilibrium correspondence maps games, represented by their distribution over payoff functions and number of players, to the distributions over payoffs and actions induced by the game and its Nash equilibria. As shown in Green (1984) and Housman (1988) [see also (2) ⇒ (1) in Theorem 1], the equilibrium correspondence is upper hemicontinuous for some appropriate topologies. Here, we focus on properties that correspond to the lower hemicontinuity of the equilibrium correspondence. The properties we focus on here require, in particular, all finite-player games, with a sufficiently large number of players and a distribution over payoff functions sufficiently close to the one of the given non-atomic game, to have a Nash equilibrium such that the distribution induced by it and by the players' payoff function is close to the one induced by the given Nash equilibrium of the non-atomic game. Although this property does not hold in general, Theorem 2 shows that it holds for (ε, ε)-equilibria in general games. Furthermore, Theorems 3 and 4 show that it also holds for ε-equilibrium when special assumptions, such as compactness and equicontinuity, are added. In the following results, we consider the case of non-atomic and finite-player games whose players' payoff functions belong to a given equicontinuous set of payoff functions. As Theorem 3 below shows, this allow us to strengthen the conclusion of Theorem 2 from (ε, ε)-equilibrium to ε-equilibrium. The assumption of equicontinuous payoff functions allows us to change the actions of those players who are not ε-optimizing in the (ε, ε)-equilibrium obtained via Theorem 2. In fact, since the fraction of these players is small, such change has a small impact on the distribution of actions, and due to equicontinuity, on players' payoffs. In the statement of Theorem 3 and in its proof, given a subset K of C, B δ (K ) denotes the set {u ∈ C : inf v∈K u − v < δ} (where · means the sup-norm on C). = ((T , , ϕ), V , S) a non-atomic game with V (T ) ⊆ K , and f a Nash equilibrium of G. Then, for all η, ε > 0, there is δ > 0 such that for all finite-player games G = ((T , , ϕ ) Theorem 3 Let K ⊆ C be equicontinuous, G Proof Fix η, ε > 0. By the equicontinuity of K , we can choose a number θ > 0 such that |v(s, τ ) − v(s , τ )| < ε/4 whenever v ∈ K , d(s, s ) ≤ θ for s, s ∈ S and ρ(τ, τ ) ≤ θ for τ, τ ∈ M(S). Clearly, we can choose θ so as to have in addition θ < min{ε/4, η/2}. By Theorem 2, there is a 0 < δ < min{θ, 8ε} such that if G = ((T , , ϕ ) and a (θ, θ ) Asf is an (θ, θ )-equilibrium of G , the fraction of players t in G for whicĥ f (t) is not within θ of a best reply to ϕ •f −1 is smaller than θ . Thus the fraction of players t in G for whichf (t) differs from f (t) is smaller than θ . This implies These facts, together with the fact that V (T ) ⊆ B δ (K ), imply that f is an ε-equilibrium as follows. Let t ∈ T and v ∈ K be such that Theorem 4 below improves over Theorem 3 by providing an uniformity over the non-atomic games with V (T ) ⊆ K and its Nash equilibria. This is achieved by assuming that players' payoff functions are contained in a set in C which is not only equicontinuous but also bounded. Theorem 4 Let K be a compact subset of C. 
Then, for all ε > 0, there is a δ > 0 such that if G = ((T , , ϕ), V , S) is a non-atomic game with V (T ) ⊆ K , and f a Nash equilibrium of G, then every finite-player game G = ((T , , ϕ ) Using the fact that every τ ∈ M(K × S) can be represented as τ = ϕ • (V , f ) −1 for some mapping (V , f ) from an atomless probability space (T , , ϕ) to K × S, arguments analogous to that in the proof of (2) ⇒ (1) in Theorem 1 show that Z is closed in M(K × S), hence compact because compactness of K × S implies that M(K × S) is compact. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. A Appendix In this appendix, we collect the lemmas that are used in the proof of our main results. As before, convergence of measures on metric spaces is always understood with respect to the narrow topology. For finite sets, the consequences of convergence of probability measures are easy to understand because it implies that the probability of each point in the set converges to the corresponding limit probability. This property does not hold for general separable metric spaces. However, Lemma 1 shows that, for every probability measure, the space can be partitioned into a countable collection of measurable subsets with a small diameter such that the probabilities of those sets converge to the corresponding limit probability. Lemma 1 Let X be a separable metric space, and μ a Borel probability measure on X . Then given any ε > 0 there is a countable partition (E i ) i∈N of X into Borel sets, each with diameter less than ε, such that whenever {μ n } n is sequence in M(X ) with Proof Recall the following facts, denoting by ∂ A the boundary of a subset A of a topological space Z . is the open r -ball around x. As T is a base, there is V ∈ T such that x ∈ V ⊆ B r (x), and such set V must have diameter less than ε.) (c) If Z is any topological space and A and B are subsets of Z , then ∂(B\A) ⊆ ∂ B ∪ ∂ A; if A 0 , . . . , A n are finitely many subsets of Z , then ∂( n i=0 A i ) ⊆ n i=0 ∂ A i . Let μ be a Borel probability measure on X and fix ε > 0. Since X is second countable, using (a) and (b) it follows that there is a countable family (B i ) i∈N of open subsets of X with ∞ i=0 B i = X such that, for each i ∈ N, the diameter of B i is less than ε and μ(∂ B i ) = 0. Define a family (E i ) i∈N of Borel sets of X by setting E i = B i \ i−1 j=0 B j for each i ∈ N. Use (c) and the fact that the union of finitely many null sets is a null set to see that μ(∂ E i ) = 0 for each i ∈ N. Hence, the conclusion follows from the Portmanteau Theorem. Lemma 2 considers a sequence of functions converging in distribution and shows that both the limit function and the ones in the sequence can be closely approximated by functions having a finite range. 
By finite probability space we mean a probability space (T , , ϕ) where T is finite, = 2 T , and ϕ is the uniform distribution. Proof Fix ε > 0 and let (E i ) i∈N be a partition of X chosen with respect to ϕ • g −1 and ε according to Lemma 1. We can find anī such that ϕ • g −1 ( i≥ī E i )) < ε. For each i ≤ī, pick a point x i ∈ E i and let F = {x i : 1 ≤ i ≤ī}. Defineḡ : T → F by settingḡ(t) = x i if g(t) ∈ E i for i <ī andḡ(t) = x¯i if g(t) ∈ i≥ī E i . Analogously, for each n ∈ N, defineḡ n : T n → F by settinḡ g n (t) = x i if g n (t) ∈ E i for i <ī, andḡ n (t) = x¯i if g n (t) ∈ i≥ī E i . By choice of (E i ) i∈N , we have ϕ n • g −1 n ( Consequently ϕ n • g −1 n ( i≥ī E i ) < ε for all sufficiently large n, whence, since the diameter of E i is at most ε by choice of (E i ) i∈N , we have ϕ n ({t ∈ T n : d X (ḡ n (t), g n (t)) ≤ ε}) > 1 − ε for all sufficiently large n. Similarly, we have ϕ({t ∈ T : d X (ḡ(t), g(t)) ≤ ε}) > 1−ε. Lemma 4 considers a lower hemicontinuity property of the correspondence that assigns to each game its set of strategy profiles. This property is analogous to the one considered in our approximation results, in the sense that it is only established for non-atomic games and only finite-player games are considered as approximations. Lemma 3 is used in its proof and, although stated abstractly, it considers a special case of Lemma 4, namely that of non-atomic games with finitely many actions and payoff functions. Lemma 3 Let X and Y be finite sets, and τ a probability measure on X × Y . If {(T n , n , ϕ n )} n∈N is a sequence of finite probability spaces with |T n | → ∞, and for each n ∈ N, g n : T n → X is such that ϕ n • g −1 n → τ X , then there is a mapping f n : T n → Y for each n ∈ N such that ϕ n • (g n , f n ) −1 → τ .
8,288
2020-10-15T00:00:00.000
[ "Economics" ]
Mimicking Neural Stem Cell Niche by Biocompatible Substrates Neural stem cells (NSCs) participate in the maintenance, repair, and regeneration of the central nervous system. During development, the primary NSCs are distributed along the ventricular zone of the neural tube, while, in adults, NSCs are mainly restricted to the subependymal layer of the subventricular zone of the lateral ventricles and the subgranular zone of the dentate gyrus in the hippocampus. The circumscribed areas where the NSCs are located contain the secreted proteins and extracellular matrix components that conform their niche. The interplay among the niche elements and NSCs determines the balance between stemness and differentiation, quiescence, and proliferation. The understanding of niche characteristics and how they regulate NSCs activity is critical to building in vitro models that include the relevant components of the in vivo niche and to developing neuroregenerative approaches that consider the extracellular environment of NSCs. This review aims to examine both the current knowledge on neurogenic niche and how it is being used to develop biocompatible substrates for the in vitro and in vivo mimicking of extracellular NSCs conditions. Introduction Stem cells are characterized by their extensive potential for proliferation and differentiation, as well as their major role in homeostasis and tissue regeneration. Although stem cells are a promising source for cell replacement therapies and cell regeneration after injury or disease, their use is still limited because there are several factors that must be taken into account, such as survival, tissue integration, specific differentiation, and functionality. In order for them to be considered within regenerative medicine, it is imperative to understand their in vivo biology and microenvironment, or niche. In recent years, the use of in vitro models that simulate various components of the niche has helped the understanding of the role of the various factors that compose it and even the design of artificial models that recapitulate microenvironment conditions [1,2]. In that sense, biocompatible substrates are an alternative for the incorporation of different physical and chemical properties that can modulate the biology of stem cells and improve their manipulation [3]. This paper will review some of the main extrinsic characteristics of the neurogenic niche and how current knowledge about it is being used to design biocompatible substrates that mimic the microenvironment of neural stem cells in order to regulate their biology, as well as the impact this may have on the future of tissue regeneration therapies. Embryonic and Adult Neural Stem Cells Neural stem cells (NSCs) originate the main cell types in the central nervous system (CNS) during development and adulthood. These cells are able to self-renew through cell division and have the capacity to generate specialized cell types. NSCs generate other NSCs, which maintain their differentiation potential and their proliferation or self-renewal capacity, and/or originate transit-amplifying cells or neural progenitor cells (NPCs), which display decreased proliferative potential and limited capacity to differentiate into neurons, astrocytes, and oligodendrocytes. 
From early embryonic development up to early postnatal stages, neurons are the main cell types generated, while late embryogenesis is characterized by the production of both astrocytes and oligodendrocytes, which continues during postnatal stages and throughout adult life [4]. The process of generating functional neurons or glial cells from precursors is defined as neurogenesis and was thought to occur only during the embryonic and perinatal stages in mammals. Currently, it is widely accepted that neurogenesis takes place in the adult brain and that the neural stem cells of this organ are descendants of their embryonic counterparts. A number of significant questions remain regarding the biology of embryonic and adult neural stem cells. How is the fate of NSCs determined? What determines whether NSCs remain in their stem stage or differentiate into one of the three mature phenotypes? Over the last few years, it has become clear that NSCs are sensitive to multiple signals during development, including extracellular matrix proteins, growth and transcription factors, or even the interaction with different cell types in their proximity [5,6]. Although apparently of the same nature as their embryonic counterparts, adult NSCs show different responses to the same regulators. At the same time, these cells are mostly quiescent in the adult brain with a low neuron production rate in contrast to the high proliferative rate of the embryonic NSCs. Additionally, neuronal maturation is accomplished at a slower rate in the adult brain than in the embryo. Although the reason for these differences is not clear, it has been reported that the acceleration of the maturation rate sometimes leads to the aberrant integration of newborn neurons in the adult hippocampus [7]. It has been suggested that, besides intrinsic differences, changes in the microenvironment surrounding neural stem cells during both development and adult life modulate their biological response [7]. During early embryogenesis, NSCs are not specifically localized and are instead organized as a single layer of proliferating neuroepithelial cells in the neural tube. Early in the neural tube formation, cells at the junction of the tube form the neural crest cells, which migrate out of the tube to form the neurons and glia of the peripheral nervous system as well as other non-nervous system cells, such as melanocytes, chondrocytes, and craniofacial osteocytes [86]. Neuroepithelial cells in the neural tube divide symmetrically, generating two identical daughter cells. Once this population has increased, they switch to a new form of asymmetrical division, producing two distinct daughter cells, the typical self-renewing stem cell and the neuroblast, with the former transforming into radial glial cells (RGCs) that exhibit neuroepithelial and glial proteins, extending a long cell process towards the outer neural tube region or pial surface (Figure 1(a)). As brain development proceeds, the proliferation of RGCs and neuroblasts generates several layers that surround the interior face of the neural tube, leading to the first local neural niche called the ventricular zone (VZ) (Figure 1(b)). The neuroblasts and glioblasts with high proliferating capacity, also termed intermediate progenitor or "transit-amplifying progenitor cells" (TAPCs), generate postmitotic cells that finally differentiate into neurons and glial cells. In the forebrain region, the TAPCs accumulate above the VZ, forming a second germinal zone, the subventricular zone (SVZ) (Figure 1(c)). All these populations are also in close contact with cells of nonneural origin, such as endothelial cells from the blood vessels, microglia, and pericytes [5,87]. Interactions among all these cells, together with the temporal and spatial synthesis of soluble and insoluble factors during CNS development, result in the establishment of the intricate neural network that will support the function of this system in postnatal life [88]. After the embryonic phase of CNS development, some NSC populations remain in specific neurogenic niches throughout the lifespan of the brain. Two specific and well-described neurogenic regions remain in the adult brain after the embryonic phase, the subgranular zone (SGZ) in the dentate gyrus (DG) of the hippocampus and the subventricular zone (SVZ) of the lateral ventricles (Figure 2).
Figure 2: Neural stem cell niche in the adult dentate gyrus and subventricular zone. (a) Sagittal section view of an adult rodent brain showing the two main restricted regions where active adult neurogenesis is present, the dentate gyrus in the hippocampal formation and the lateral ventricle, from which type A cells migrate to form the rostral migratory stream (RMS) toward the olfactory bulb. (b) Neural stem cell niche in the subventricular zone (SVZ). Three types of progenitor cells are found close to the ependymal cell layer in the SVZ: a population of radial glia-like cells (type B cells) has the potential to serve as adult neural stem cells (NSCs) and generates transit-amplifying nonradial NSCs (type C cells), which later give rise to neuroblasts (type A cells). The SVZ includes several ECM components (yellow), called fractones (inset), which make contact with all the cell types, including the blood vessels and astrocytes in this region. (c) In the adult subgranular zone (SGZ), a population of radial glia-like cells (type 1 cells), along with nonradial glia-like cells (type 2 cells), generate neuroblasts. These neuroblasts then migrate into the granule cell layer and mature into neurons. CSPG, chondroitin sulfate proteoglycan; FGF2, fibroblast growth factor 2; GCL, granular cell layer; ML, molecular layer.
While the neurogenesis in these neurogenic sites results in the generation of new neurons in the brain, there are differences in the type of neurons generated. Neurogenesis produces dentate granule cells in the SGZ of the DG of the hippocampus, while neurogenesis in the SVZ of the lateral ventricles produces interneurons that migrate to the olfactory bulb (Figure 2(a)). The production of mature neurons in these neurogenic sites comprises several steps that resemble embryonic neurogenesis. Neurogenesis of the adult SVZ begins with the activation of quiescent radial glia-like cells (termed type B cells) in the subventricular zone in the lateral ventricle and continues with the proliferation of transit-amplifying progenitor cells (type C cells), resulting in an increase in the neuroblast population (type A cells) or glia (oligodendrocytes or astrocytes) (Figures 2(a) and 2(b)). In the rostral migratory stream (RMS), type A cells form a chain and migrate toward the olfactory bulb through a tube formed by astrocytes. Upon reaching the olfactory bulb, immature neurons leave the RMS and migrate radially toward the glomeruli, where they differentiate into different subtypes of interneurons that, finally, are synaptically integrated [89] (Figure 2(a)).
Type B cells are in close contact with the ependymal cell layer through a thin apical process and with SVZ vasculature through a basal process. This structural polarity allows type B cells to be in simultaneous contact with both vascular and cerebrospinal fluid (CSF) compartments [7] (Figure 2(b)). In other cases, hippocampal neurogenesis in the DG begins, as in SVZ neurogenesis, with the proliferation of radial (type I cell) and nonradial (type II cell) precursors that give rise to intermediate progenitors, which in turn generate neuroblasts (Figures 2(a) and 2(c)). Unlike in SVZ neurogenesis, immature neurons are not required to migrate long distances in order to initiate the differentiation process. The new immature neurons move into the inner granule cell layer and differentiate into dentate granule cells in the hippocampus. Within days, newborn neurons extend dendrites toward the molecular layer and project axons through the hilus toward the CA3. New neurons follow a stereotypic process for synaptic integration into the existing circuitry [89]. Besides the classic neurogenic sites, there is evidence indicating the presence of other neurogenic sites in the brain that are more evident after injury or growth factor stimulation, such as the walls of the third and fourth ventricle and the circumventricular organs. All of these sites are close to blood vessels [90], although their detailed characterization and contribution after brain damage are still the subject of intense study. Neural Stem Cell Niche: Characteristics and Relevance The anatomical distribution of many stem cells has been a troublesome task due to the low accessibility and restricted areas where they are located. Currently it is accepted that the tissue areas where stem cells lie are specialized microenvironments with specific cellular, chemical, and physical properties [2]. As mentioned above, embryonic and adult NSCs are influenced by their microenvironment. The microenvironment concept is related to the presence of extracellular matrix proteins and soluble factors such as hormones and growth factors in the extracellular space (some of which are summarized in Table 1). However, these are not the only factors that can influence the biology of the NSCs. It has been evident that interactions with neighboring cells are also a relevant modulator of the biology of these cells in both the embryo and the adult. Endothelial cells, astrocytes, ependymal cells, microglia, mature neurons, and the progeny of adult neural stem cells are additional regulators of the fate of the NSCs. All of these elements in the cellular microenvironment constitute the neurogenic niche that anatomically houses stem cells and functionally controls their development in vivo [90]. As excellent and extensive reviews have been published on the neurogenic niche [5,88,[91][92][93][94], this section is only a brief summary, which includes examples of the different factors that could be taken into account in biocompatible substrate design. The neurogenic niche determines whether a NSCs divides or remains as a quiescent cell as well as whether it survives, dies, proliferates, migrates, or differentiates into different neural cells [6,95,96]. During the development of the CNS, the VZ of the neural tube is mainly comprised by the proliferative cells of the neuroepithelium. 
Segmentation and regionalization of the neural tube modify and restrict the neurogenic areas during the developmental stages, leading to a niche that is spatially, chemically, and cellularly variable [97,98]. During these changes, stem cells and progenitors in the VZ and SVZ remain in contact with specific extracellular components such as growth factors, ECM components, and cells that modulate their division and differentiation. During the development of the neocortex, for example, neuroepithelial cells proliferate to form several layers of cells surrounding the lumen of the neural tube. As mentioned above, the inner cells transform into RGCs, whose asymmetric divisions generate self-renewing cells that stay in the VZ and progenitor cells that migrate to the SVZ. After several divisions, the progenitors move from the SVZ to their destinations through radial and tangential migration and differentiate into neurons and glia [5,96,99]. Several transcription factors, proteins related to cell polarity, such as cadherins and nectins, as well as signaling components are all activated in a complex that promotes the dissolution of cell adherens junctions and the reorganization of the actin cytoskeleton to favor progenitor migration [100].

The vascularization of the neural tube is closely related to neural development, and the timing of angiogenesis parallels that of neurogenesis. The RGCs secrete several growth factors, such as vascular endothelial growth factor (VEGF), transforming growth factor beta 2 (TGF-β2), and fibroblast growth factor 2 (FGF2), which, in turn, induce vasculature development, while the growth factors secreted by endothelial cells, such as VEGF and Jagged-1 (a Notch ligand), influence neurogenesis [5]. Furthermore, nonneural cells are also involved in establishing and supporting the neurogenic niche [5]. For example, it has been shown that, during brain development, pericytes synthesize the sonic hedgehog (Shh) protein, which plays an important role in the proliferation of the neuroepithelial cells that form the VZ [101]. In the postnatal and adult brain, the main neurogenic niches in the SVZ and the DG play a role in maintaining the balance between stemness and NSC differentiation.

The vasculature also emerges as an important and integral component in the adult stem cell niches of both the hippocampus and the SVZ [102,103]. There is increasing evidence showing a dense network of blood vessels in the hippocampus that spans beneath the RGCs and dorsal to the SVZ in the lateral ventricles. This network is closely associated with the NSCs (including the TAPCs) and has long processes oriented along the neuroblast chain and within the microglia [104]. It has been shown that vasculature interactions promote the neuroproliferation and neuroprotection of the NSCs and the migration of astroglial cells through the secretion of regulatory factors in an autocrine or paracrine manner [5,105]. Transcriptome analysis of endothelial cells of the CNS shows the presence of several factors involved in the neurogenic niche [106]. VEGF and TGF-β1 are synthesized and secreted by endothelial vascular cells in the SVZ [36], and VEGF is also secreted by NSCs in the hippocampal neurogenic niche [107]. A great variety of factors active in the NSC niche modulate the components of this site, with some of them preventing terminal differentiation and preserving the NSC pool.
A highly conserved secreted protein with a determinant role in the dorsoventral patterning of the neural tube, Shh, is one example of such a factor. Its mutation during embryonic development leads to a reduction of the telencephalon and diencephalon [108], and its ectopic expression increases the generation of oligodendrocytes [109]. On the other hand, Shh is involved in maintaining the pool of progenitor cells in the postnatal brain [110]. Another factor, Notch1, is a transmembrane protein of the Notch family that plays a variety of roles during development. Notch1 induces the expression of transcriptional repressor genes such as Hes1, leading to the repression of proneural gene expression and the maintenance of the NSCs [105]. The requirement for Notch signaling has been demonstrated in inducible knockout mice, in which stem cell self-renewal and expansion are disrupted, leading to neural stem cell depletion [105]. Bone morphogenetic proteins (BMPs), members of the TGF-β superfamily first identified by their role in bone induction [111], and the highly conserved Wnt proteins are all secreted proteins identified as morphogens owing to their concentration-dependent roles during development. However, they also play a critical role in maintaining adult NSC niches, activating the proliferation of type B astrocytes and of the transit-amplifying type C cells in the SVZ, as well as SVZ neurogenesis [7].

The basal lamina and ECM provide both structural shape and mechanical support for the developing and adult nervous systems [93]. These components of the neurogenic niche act as a scaffold for the incorporation of a variety of ECM molecules and growth factors. Some of the most important ECM molecules that play a role in regulating the biology of NSCs and progenitor cells are laminin, collagen IV, nidogen, perlecan, the glycoprotein tenascin C, the chondroitin sulfate proteoglycans (CSPG), and the heparan sulfate proteoglycans (HSPG) [112]. Laminin is a heterotrimeric ECM protein located in the lateral ventricular wall of the SVZ. In the mammalian brain and spinal cord, the basal lamina in the neurogenic site forms branches, or "finger-like processes," called fractones, that extend from the ependymal cells and blood vessels. Laminin and its receptor, α6β1 integrin, have been detected in these structures in the NSCs of the SVZ that lie near the vascular cells [87]. Additionally, heparan sulfate, collagen IV, nidogen, and perlecan have been described as components of the fractones that make contact with the transit-amplifying cells, suggesting that, in combination with growth factors, these structures have a role in adult neurogenesis. Specifically, HSPG, CSPG, and perlecan bind growth factors such as FGF-2, a potent mitogenic factor for the NSCs, suggesting that these ECM components promote growth factor activity in the NSC niche [113]. Secreted by type B cells, by the astrocytes surrounding migratory type A cells, and by the RGCs in embryos, tenascin C is a major component of the ECM in the adult SVZ [114]. Tenascin C regulates the expression of EGF receptors in embryonic NSCs and has been reported to alter the cell response to mitogenic growth factors by enhancing sensitivity to FGF-2 and promoting the acquisition of EGF responsiveness [115,116]. Tenascin C also regulates oligodendrocyte precursor proliferation, while the related tenascin R induces the maturation of oligodendrocyte precursors [114].
Reelin is a large ECM glycoprotein that plays an important role not only in neuronal migration during cortex development [117] but also in the NSC niche, where it is expressed in the SGZ in adult hippocampus and regulates NSCs maintenance and migration [82,118]. Proteins that act as neuronal guidance factors are also associated with the regulation of the adult neurogenic niche. The Eph/ephrin receptor-ligand complex is large class of membrane associated receptors and ligands that are involved in axon guidance [119] and mediate the cell-to-cell signaling that promotes the proliferation of the NPC in the SVZ [119,120]. Netrins are a laminin-related family of proteins that act as a guidance cue for neuronal projection and have a role in inducing the migration of NSCs during cerebellar development [121]. Recently, Netrin-4 was found to interact with components of the ECM in a complex that is able to control the proliferation of the adult NSCs and their migration to the mouse olfactory bulb [122]. Altogether, this evidence suggests that ECM components and soluble proteins regulate the biology of the NSCs in the neurogenic niche. Biocompatible Substrates for Mimicking the Neural Stem Cell Niche The growing body of evidence supporting the influence of the extracellular environment on stem cells raises the question as to whether in vitro culture conditions have the optimal characteristics for growing stem cells outside the body. Evidence is beginning to show that constructing in vitro microenvironments that incorporate some of the niche elements where stem cells lie in vivo could be advantageous to the understanding of stem cell biology and possible applications in regenerative medicine [2,90,[123][124][125]. Evidence has shown that extracellular environmental characteristics such as protein composition, protein anchoring density, stiffness, and topography are important parameters to consider [2,126]. Polymeric biomaterials can be designed and modified to obtain compatibility characteristics with cells and tissues and to provide substrates, cells, and proteins. Compatible materials can be functionalized to provide bioactive proteins and peptides that signal cells to attach, proliferate, or differentiate and can modulate physical characteristics such as stiffness or topography, even at a nanometric scale [93,127]. There are two main approaches for the use of biomaterials: as delivery vectors for proteins and growth factors with important effects in stem cell biology and as scaffolds for the manipulation of cell characteristics or for improving viability. More recent studies are using both approaches to design more accurate scaffolds or substrates, such as bioactive polymers with multivalent ligands, or 3D substrates with several crosslinking densities that are functionalized using active peptides [3,93,128,129]. Bioactive Factors Coupled to Compatible Substrates As mentioned before, in vivo growth factors are coupled to the ECM of the neurogenic niche, thus forming site specific regions of active factors. Growth factors are coupled by electrostatic interactions with ECM proteins such as glycosaminoglycans, which are one of the most abundant components of the ECM, thus regulating growth factor accessibility to cells [130]. Polymeric materials are usually inert and do not have chemical interactions with either cells or proteins, a property which improves biocompatibility in that it impairs the adsorption of nonspecific proteins, avoiding the recognition by innate immunity system [131,132]. 
However, bioactive peptides or proteins can be coupled to polymeric materials to support cell adhesion, viability, stemness, or differentiation, and such materials can be used as either delivery vectors or scaffolds through the coupling of specific cell adhesion proteins or peptides in 2D or 3D cultures [123,128,133-135]. The use of biomaterials for the delivery of growth factors offers the possibility of controlling the place and rate of delivery and of avoiding unspecific or pleiotropic effects. An important issue to consider is whether to deliver growth factors by releasing them or by coupling them to the polymer. It has been shown that EGF covalently attached to a substrate leads to a greater expansion of human NPCs than soluble EGF [136], while platelet-derived growth factor (PDGF) coupled to an agarose hydrogel induces NPC differentiation into oligodendrocytes; however, the degree of expression of myelin oligodendrocyte glycoprotein (MOG) is higher when the NPCs are exposed to soluble PDGF [137]. Covalently linked growth factors cannot be internalized by cells, meaning that their activity and function could be disrupted. Scaffolds with active release are being developed in response to these problems, such as heparin-functionalized poly(ethylene glycol) (PEG) hydrogels [138,139] and fibrin gels with growth factors coupled via heparin binding [140]. In this latter approach, the simultaneous release of neurotrophin-3 (NT-3) and PDGF improved neural induction and decreased astrocyte differentiation, indicating that a biomimetic scaffold could be designed to use several growth factors with specific release kinetics and dosages [129] (a minimal numerical sketch of such release kinetics is given at the end of this section). The potential of biomimetic scaffolds that provide and expose cells to growth factors for in vivo applications has recently been shown using hyaluronic acid gels functionalized with ephrin B2 and Shh. Interestingly, a multivalent polymer was designed to cluster ephrin receptors, which substantially increased the number of new neurons formed in an adult neurogenic zone, such as the hippocampus, and in nonneurogenic zones, such as the cortex and striatum. Neurogenic activity was also induced in geriatric rodents, in which hippocampal neurogenesis is normally decreased [3]. Although many aspects remain to be considered before the clinical application of this strategy, it is evident that the characterization of neurogenic niches and its application to compatible bioactive materials could be a promising approach.

Cell-material interaction is crucial for the modulation of cell behavior. Polymeric materials can also be functionalized to expose peptides, adhesion molecules, or chemical groups that mimic those in cells and the ECM and exert their function through specific ligand-receptor recognition or electrostatic interaction. The Arg-Gly-Asp (RGD) tripeptide motif present in ECM components, such as laminin, can modify adult hippocampal NSC proliferation and differentiation [126,141]; exposed –SO₃H groups favor the differentiation of embryonic NSCs into oligodendrocytes, while exposed –NH₂ groups induce neuronal differentiation [142]. Functionalized polymers can be used as scaffolds to improve NSC transplantation and thus support the survival and integration of the cells into brain tissue, especially when large tissue deficits are present, such as after traumatic brain injury, cerebral ischemia, or a transected spinal cord [1,143,144].
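As a purely illustrative aside, the "specific release kinetics and dosages" mentioned above can be reasoned about with even a very simple model. The sketch below assumes first-order release; the rate constants and time points are arbitrary placeholders, not values measured for the heparin-PEG or fibrin systems cited here.

```python
# Illustrative sketch only: a generic first-order release model, NOT the
# measured kinetics of the cited scaffolds. Rate constants are placeholders.
import math

def released_fraction(t_hours: float, k_per_hour: float) -> float:
    """Cumulative fraction of a factor released by time t under first-order kinetics."""
    return 1.0 - math.exp(-k_per_hour * t_hours)

if __name__ == "__main__":
    # Hypothetical example: a 'fast' factor (k = 0.10 /h) vs a 'slow' one (k = 0.01 /h)
    for t in (6, 24, 72):
        fast = released_fraction(t, 0.10)
        slow = released_fraction(t, 0.01)
        print(f"t = {t:3d} h: fast factor {fast:.0%} released, slow factor {slow:.0%} released")
```

In practice, the release profiles of real scaffolds are measured empirically and can deviate strongly from first-order behavior, which is precisely why tunable coupling chemistries (e.g., heparin binding) are attractive.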
Stiffness, Topography, and Neural Stem Cells

The physical properties of the extracellular environment also influence stem cell behavior. Stiffness is defined as the resistance of a material to deformation when force is applied and is mainly related to material composition and structure, while topography refers to the three-dimensional shape and relief of a material, in this case at the micro- or nanoscale. Evidence has shown that physical properties can modify cell behavior and modulate stem cell differentiation capabilities by modulating gene expression, integrin clustering, the formation of cell adhesions, and cytoskeleton regulation [8,145-147]. Although brain stiffness has been difficult to measure, and despite the variation in the reported data according to the technique used, it is accepted that the brain is one of the softest tissues of the body. Brain stiffness changes with age and with the area of the brain: the adult brain is stiffer than the juvenile brain (∼0.040 kPa in postnatal day 10 rat brain samples versus ∼1.2 kPa in adult rat brain samples). Interestingly, most of the cortical subregions are stiffer than the dentate gyrus and CA1 regions of the hippocampus [148]. These differences are related to the water, protein, and lipid content; water content significantly decreases with age, while lipid and protein content increase [148]. Another important factor is the composition of the ECM, as the sulfated glycosaminoglycan content increases with age [97]. In the developing brain there are also important changes in mechanical properties, with, for example, a gradual increase in stiffness in the VZ and SVZ being closely related to the neurogenic stage, neuron maturation, and changes in ECM composition during development [97,149].

Previous studies have demonstrated that the mechanical properties of the substrate, in conjunction with cell adhesion ligands, can preserve the undifferentiated state of human embryonic stem cells [150] or induce differentiation into several cell phenotypes depending on the degree of stiffness [145]. In the case of neural stem cells, Leipzig and Shoichet (2009) have shown that softer substrates comprising photo-cross-linkable methacrylamide chitosan (MAC) hydrogels functionalized with laminin induce higher proliferation and neuronal differentiation levels in NPCs obtained from the SVZ region of the forebrains of adult rats; in contrast, the proliferation rate decreases for NPCs on stiffer gels, which differentiate preferentially into oligodendrocytes [151]. Similarly, studies using NPCs from adult rat hippocampi showed higher proliferation rates for NPCs in hydrogels of ∼0.1 to ∼0.5 kPa than in softer substrates (∼0.01 kPa), with a proliferation peak at 1 to 4 kPa. In addition, the NPCs preferentially differentiate into the neuronal phenotype on soft substrates (∼0.1-0.5 kPa), while glial phenotypes predominate on stiffer substrates (∼1-10 kPa) [152]. Notably, the influence of soft substrates on neuronal differentiation is maintained in 3D cultures when NPCs from adult hippocampi are grown in alginate hydrogels [153]. Interestingly, NPCs reach high proliferation and neuronal differentiation levels on substrates with a low stiffness similar to that reported for brain tissue; these reported trends are summarized in the illustrative sketch below.
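The stiffness ranges quoted above can be collected into a rough lookup, shown here purely as an illustrative summary of the trends reported for adult rat NPCs on 2D hydrogels [151,152]; the thresholds are approximations of those reports, not a validated predictive model.

```python
# Purely illustrative summary of the stiffness-fate trends reported above for
# adult rat NPCs on 2D hydrogels; thresholds are rough approximations, not a model.
def likely_npc_bias(stiffness_kpa: float) -> str:
    """Map substrate stiffness (kPa) to the differentiation bias reported in the cited studies."""
    if stiffness_kpa < 0.1:
        return "low proliferation (very soft substrate)"
    if stiffness_kpa <= 0.5:
        return "neuronal differentiation favored"
    if stiffness_kpa <= 10:
        return "glial differentiation favored (proliferation peaks near 1-4 kPa)"
    return "stiffer than the ranges characterized for NPCs in these studies"

if __name__ == "__main__":
    for e_kpa in (0.05, 0.3, 2.0, 50.0):
        print(f"{e_kpa:>5} kPa -> {likely_npc_bias(e_kpa)}")
```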
However, the influence of soft gel substrates on embryonic NPC differentiation seems to be different, in that glial differentiation has been shown to be enhanced on soft polydimethylsiloxane (PDMS) gels, while neuronal differentiation is not affected by soft gels, as previously described [154]. These differences could be attributed to the origin or developmental stage of the NPCs, or even to the characteristics of the substrate, as reported previously by Trappmann et al. (2012) [147]. The effects of substrate stiffness can be mediated by the modulation of gene expression, as reported earlier using mesenchymal stem cells (MSCs). When MSCs are grown for longer periods of time on stiffer substrates, the pro-osteogenic genes are expressed in an irreversible way; however, when they are grown for short periods of time on the same substrate, the MSCs are still able to reverse the expression of pro-osteogenic genes and begin to express neurogenic genes, which is evidence of a sort of mechanical memory that could have important implications for the way stem cells are grown and expanded in vitro [8].

The micro- and nanoscale topography of a substrate has been shown to be another important factor for the manipulation of stem cell behavior. Topography can be altered using pillars, grooves, pits, or fibers to modify cell orientation and cytoskeleton arrangement. The microscale distribution of cell adhesion points, and therefore the manipulation of cell shape, has been shown to be crucial to the direction of MSC differentiation toward the osteogenic or adipogenic lineage; these effects are mediated by actin-myosin tension [155]. Fiber diameter can also influence NSC differentiation; laminin-coated electrospun polyethersulfone (PES) fiber meshes of 283 nm increased oligodendrocyte differentiation by 40%, while 749 nm fibers increased neuronal differentiation by 20% as compared with culture plates [156]. Aligned fiber substrates of 480 nm upregulate neuronal differentiation through the induction of Wnt/β-catenin signaling, which is a crucial pathway during neurogenesis in embryos and adults and is more favorable to the survival of neuronal cells than to that of oligodendrocytes [124]. It has also been shown that micropatterned substrates with aligned microgrooves functionalized with laminin align the direction of hippocampal NSC growth and facilitate their differentiation into neurons when they are cocultured with astrocytes [157]. Although the multiple approaches to the physical and chemical manipulation of biocompatible substrates show their capability to manipulate NSC behavior, several characteristics must still be considered, such as the complexity of the interrelation of multiple signals and how these can affect NSC behavior depending on the origin and intrinsic characteristics of the stem cells.

Conclusions

The study of the numerous elements present in the neurogenic niche and of how they interact with stem cell behavior contributes to understanding the importance of extrinsic signals for NSC fate. This knowledge is being used to mimic the neurogenic niche for in vitro and in vivo applications. Although the results obtained up to now show promise, a more accurate biomimetic substrate for in vitro studies and regenerative medicine is still a long way off. Several factors must be taken into account, such as the soluble factors and ECM components present in the niche and the physical properties of the substrate.
The type and origin of the stem cells for which a niche is to be provided must also be considered. The ideal biomimetic scaffold should, therefore, incorporate some of the main factors that control stem cell behavior. Multidisciplinary approaches to developing more accurate niche-like substrates and to understanding their biological implications constitute a fascinating field that will help advance our knowledge of stem cells.
Fabrication Methods and Luminescent Properties of ZnO Materials for Light-Emitting Diodes

Zinc oxide (ZnO) is a potential candidate material for optoelectronic applications, especially for blue to ultraviolet light-emitting devices, owing to its fundamental advantages, such as a direct wide band gap of 3.37 eV, a large exciton binding energy of 60 meV, and a high optical gain of 320 cm⁻¹ at room temperature. Its luminescent properties have been intensively investigated for samples in the form of bulk, thin film, or nanostructure, prepared by various methods and doped with different impurities. In this paper, we first review briefly the recent progress in this field. Then a comprehensive summary of the research carried out in our laboratory on ZnO preparation and its luminescent properties will be presented, in which the involved samples include ZnO films and nanorods prepared with different methods and doped with n-type or p-type impurities. The results of ZnO-based LEDs will also be discussed.

Introduction

Zinc oxide (ZnO) based materials are potential candidates for optoelectronic applications, especially for blue to ultraviolet light-emitting devices, owing to their fundamental advantages, such as a direct wide band gap of 3.37 eV, a large exciton binding energy of 60 meV, and a high optical gain of 320 cm⁻¹ at room temperature. Intensive research efforts have focused on ZnO and related devices for many decades. The quality of ZnO in the forms of both bulk and thin film has been improved substantially. Nanostructured ZnO materials have attracted great attention, and nanostructures of various dimensions and shapes have been prepared. On the other hand, the luminescent properties of ZnO-related materials have been extensively investigated, and ZnO-based LEDs with various device structures have been demonstrated. In addition, the exploration of new preparation methods is still in progress. In terms of the ZnO-related investigations, we can say that great progress has been achieved, but some obstacles must still be overcome before wide optoelectronic applications can be realized, and this remains a hot research direction. Large numbers of researchers are steadily working to explore this potential and promising field, and thousands of research papers are published each year. Various aspects of the investigation have been reviewed, for example in the comprehensive review papers [1,2]. In this paper, instead of a comprehensive review, we will emphasize the recent progress in a few selected topics we are interested in. For each topic, recent progress will be briefly reviewed and then the research activity conducted in our lab will be introduced in detail. This paper is arranged as follows. Section 2 is devoted to the progress in ZnO material preparation methods, especially the vapor cooling condensation method newly developed in our lab. Section 3 comments on the recent progress in the investigation of the luminescence of native defects, a widely observed phenomenon in various ZnO materials. The theoretical study of unintentional n-type conductivity is reviewed in Section 4. The exploration of p-type ZnO carried out in our lab is introduced in Section 5. The research on nanostructured ZnO materials, covering both their preparation and properties, is summarized in Section 6. In Section 7, after a brief review, the recent progress in the research on ZnO-based light-emitting diodes conducted in our laboratory is presented, followed by a short summary in Section 8.
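As a simple orientation for the numbers quoted above, the band gap and exciton binding energy can be converted into emission wavelengths. The short sketch below only restates the values given in the text (3.37 eV, 60 meV) using the standard conversion hc ≈ 1239.84 eV·nm; no other data are assumed.

```python
# Back-of-the-envelope check of why ZnO suits near-UV emitters, using only the
# band-gap and exciton energies quoted in the text above.
HC_EV_NM = 1239.84  # h*c in eV*nm

def ev_to_nm(energy_ev: float) -> float:
    """Convert a photon energy in eV to the corresponding vacuum wavelength in nm."""
    return HC_EV_NM / energy_ev

if __name__ == "__main__":
    e_gap = 3.37        # ZnO band gap (eV)
    e_exciton = 0.060   # exciton binding energy (eV)
    print(f"Band-to-band emission: ~{ev_to_nm(e_gap):.0f} nm (near UV)")
    print(f"Free-exciton emission: ~{ev_to_nm(e_gap - e_exciton):.0f} nm")
    print("kT at 300 K is ~26 meV, well below the 60 meV exciton binding energy,")
    print("which is why excitonic emission can persist at room temperature.")
```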
Rf sputtering is one of the most popular growth techniques for ZnO thin films. Although the earlier sputtered materials were polycrystalline or even amorphous, some research [16] has reported the high-quality single-crystal ZnO films deposited on sapphire (0001) by rf magnetron sputtering. They found that a high substrate temperature was essential to improve the crystal structure, but rf power had to be adjusted for the appropriate growth rate. We have extensively investigated the relationship between the crystal structure, doping concentration and properties of the sputtered films and the sputtering conditions [18][19][20]. Figure 1 shows a schematic of the rf magnetron co-sputtering system used in our lab. In this system [40], a dual rf power supply with synchronized phase is employed to control the rf power at the ZnO target and the dopant target, respectively. The substrate holder is rotating during the deposition to improve the uniformity of the thickness and the doping content of the deposited films. The substrate holder temperature can be controlled too. Figure 2 shows the FE-SEM images of (a) undoped ZnO film and co-sputtered Al-ZnO films, with the rf power on ZnO target fixed and the power supplied to Al target to be (b) 26 W, (c) 40 W, and (d) 55 W [18]. In these conditions the corresponding theoretical Al atomic ratios [Al/(Al+ Zn) at %] in the co-sputtered ZnO films were evaluated to be (b) 7.5, (c) 10, and (d) 12.5 at % according to the individual deposition rates of the undoped ZnO and metallic Al films. All the SEM images show a grainy morphology. It was found that the grain size decreased as the Al atomic ratio in the co-sputtered film increased, indicating that the grain growth was inhibited by introducing Al into the ZnO matrix. The inhomogeneous wedge-like grains showed an evidence of lateral growth that resulted in the random growth orientation. The evolutions of the grain size and shape observed by FE-SEM coincided with the results derived from the XRD diffraction patterns. The effects of working pressures on the electronic and optical properties of undoped ZnO films deposited on Si substrates were studied [19]. It is found that the resistivity of the deposited ZnO films decreases with the working pressure, and the resistivity of 4.3 × 10 −3 Ωcm can be obtained without post annealing ( Table 1). The optical transmittance measurements gave a value above 90% at a wavelength longer than 430 nm and about 80% at the wavelength of 380nm for the deposited film. The time-resolved PL measurement showed that the carrier lifetime increases with the working pressure, indicating a reduction of nonradiative recombination rate. It can be attributed to the decrease of oxygen vacancies in the ZnO films deposited at a higher working pressure. This result is verified by the photoluminescence measurements (Figure 3), in which the UV PL intensity increases with the working pressure. Besides, with increasing the working pressure, the absorption coefficient decreased and the associated optical energy gap of ZnO thin films increased. The contents of sputtering gas are also important for the deposition, which will be discussed later. Figure 3. Photoluminescence spectra of ZnO films deposited under various working pressures. Reprinted from [19] with permission. Figure 4 shows the XRD patterns of undoped ZnO and AlN codoped ZnO films, showing the effect of post-annealing [20]. 
An apparent diffraction peak of ZnO (002) phase was observed in the diffraction pattern of as-deposited undoped ZnO film, while the crystalline structure of as-deposited AlN codoped ZnO film was more disordered, as seen from Figure 4(a). However both the codoped and undoped ZnO films annealed at 400 °C under nitrogen ambient exhibited polycrystalline structures with the dominated diffraction peaks of ZnO (002) and (101) phases in the diffraction patterns. In addition, except for the ZnO-related diffraction peaks, a weak diffraction peak determined as Zn 3 N 2 (222) was observed from the annealed Al-N codoped ZnO film diffraction pattern. The appearance of the zinc nitride phase was believed to be the nitrification reaction of the excess Zn and N atoms in the codoped films, indicating the excitation of the N ions after thermal annealing. P-type conductive behavior of AlN codoped ZnO was obtained after an additive post-annealing treatment at temperatures ranging from 400-600 °C under nitrogen ambient for 30 min, which will be discussed in Section 5. We have developed a vapor cooling condensation method [39], with which high quality intrinsic ZnO (i-ZnO) films can be fabricated at low temperature. The schematic vapor cooling condensation system is shown in Figure 5. By heating the tungsten boat loaded with 0.85 g of ZnO powder, a 300 nm thick i-ZnO film was deposited on the substrate cooled by liquid nitrogen. An electron concentration of 7.6 × 10 15 cm −3 and mobility of 2.1 cm 2 /Vs of the deposited i-ZnO film at room temperature were obtained. The room temperature PL spectrum of the resultant i-ZnO film is shown in Figure 6, which consists of a strong ultraviolet emission band centering at 382 nm with a full width at half maximum of 13 nm. The absence of visible emission implies a very low defect concentration owing to the low temperature growth. The system was also used for n-ZnO films deposition with pure ZnO and In as source materials. The electron concentration and mobility of the deposited ZnO:In films were 1.7 × 10 20 cm −3 and 3.7 cm 2 /Vs, respectively. The PL spectrum of the n-ZnO:In films is also presented in Figure 6. In order to realize modern devices, modulation of the bandgap of the material is required, which has been demonstrated by the development of Zn 1-x Mg x O [26,[41][42][43] and Zn 1-z Be z O [44,45] alloys for the larger bandgap material and Zn 1-y Cd y O alloy for the smaller bandgap material [46,47], showing a wide tuning range for ZnO-based materials. In our lab sol-gel method was used to perform the band gap engineering [37,38]. Zn 1−x Mg x O (x = 0.027, 0.042, and 0.060) films were prepared by the sol-gel method and spin coating technique, using Zn(CH 3 COO) 2 ·H 2 O and Mg(CH 3 COO) 2 ·H 2 O as start materials and methanol as solvent. After depositing by spin coating, the films were dried at 300 °C for 10 min on a hotplate to evaporate the solvent and remove organic residuals. The procedures from coating to drying were repeated many times. The films were finally annealed in air at 500 °C for 4 h. XRD profiles of the resultant Zn 1−x Mg x O films suggest the formation of wurtzite structure with a preferred c-axis orientation. And no evidence of MgO phase was seen, confirming the monophasic nature of these compositions. Figure 7 is the BEL (Band-edge luminescence) spectra of Zn 1−x Mg x O (x = 0.027, 0.042 and 0.060) films [38]. Obvious blue shift was observed when the content of Mg increased, which resulted from an increasing of the band gap. 
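Before turning to defect luminescence, a quick, illustrative consistency check of the Hall data quoted above for the films grown by vapor cooling condensation can be made with the elementary relation ρ = 1/(q·n·μ). The sketch below uses only the carrier concentrations and mobilities given in the text.

```python
# Consistency check (illustrative) of the Hall data quoted above,
# using rho = 1 / (q * n * mu).
Q = 1.602e-19  # elementary charge (C)

def resistivity_ohm_cm(n_cm3: float, mu_cm2_per_vs: float) -> float:
    """Resistivity in ohm*cm from carrier concentration (cm^-3) and mobility (cm^2/(V*s))."""
    return 1.0 / (Q * n_cm3 * mu_cm2_per_vs)

if __name__ == "__main__":
    # i-ZnO film:  n = 7.6e15 cm^-3, mu = 2.1 cm^2/(V*s)
    print(f"i-ZnO:  rho ~ {resistivity_ohm_cm(7.6e15, 2.1):.0f} ohm*cm")
    # ZnO:In film: n = 1.7e20 cm^-3, mu = 3.7 cm^2/(V*s)
    print(f"ZnO:In: rho ~ {resistivity_ohm_cm(1.7e20, 3.7):.4f} ohm*cm")
```

The heavily In-doped film thus comes out in the 10⁻² Ω·cm range, orders of magnitude more conductive than the nominally intrinsic film, as expected from the quoted carrier concentrations.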
Photoluminescence (PL) of Native Defects in ZnO

The origin of luminescent transitions is always one of the central investigation topics for luminescent materials. Large amounts of research effort have been devoted to the luminescence properties of ZnO for more than half a century, including both intrinsic and extrinsic luminescence. However, in comparison with the study of the extrinsic luminescence of ZnO, the exact origin of its intrinsic luminescence due to native defects is still controversial. In this section, we will review the progress obtained in recent years on the luminescence of native defects in ZnO, especially the progress in the theoretical study of the defect states. A typical low-temperature PL spectrum of nominally undoped ZnO contains sharp and intense excitonic lines in the UV region of the optical spectrum, with one or more broad bands in the visible region. In spite of numerous reports on red, orange, yellow, and green broad bands, observed mostly at room temperature, their likely geneses are still speculative at this time. These bands are attributed to a variety of native defects such as oxygen vacancies (V_O), zinc vacancies (V_Zn), and oxygen interstitials (O_i), but very little is known about the properties of the defects that cause the various bands in the luminescence of undoped ZnO. For example, the well-known green luminescence (GL) band centered at about 2.5 eV in undoped ZnO usually dominates the defect-related part of the PL spectrum; however, its nature has remained controversial for decades. Özgür et al. [1] pointed out that, while similar in position and width, these PL bands may actually be of different origins: the GL band with a characteristic fine structure is most likely related to copper impurities [48], whereas the structureless GL band with nearly the same position and width may be related to native point defects such as V_O or V_Zn. Meanwhile, some works [49-52] speculated that oxygen vacancies, and others [53,54] that zinc vacancies, cause the GL band. In particular, Leiter et al. [55,56] identified V_O as the defect responsible for the structureless GL band in ZnO and demonstrated striking similarities of this defect to the anion vacancy in other ionic host crystals: BaO, SrO, CaO, and MgO (F centers). In their model, the two-electron ground state of the neutral V_O is a diamagnetic singlet state; absorption of a photon transforms the system into a singlet excited state, followed by a nonradiative relaxation into the emissive, paramagnetic state (S = 1), which can be detected by ODMR; the optical cycle is then closed by radiative recombination back to the S = 0 ground state. In a recent work [57], Hofmann et al. concluded that PL and DLTS (deep level transient spectroscopy) experimental results suggested a correlation between the GL and a donor level 530 meV below the conduction band, which was attributed to a charge-state transition of the oxygen vacancy V_O (a short numerical illustration of how deep such a level is appears below). GL bands were often found in the PL spectra of samples prepared in our lab, and they were usually attributed to oxygen vacancy defects. For example, a green-band emission was observed both for the undoped ZnO film and for the p-type AlN codoped ZnO films [58], and the intensities were smaller for the latter, implying that fewer oxygen vacancies existed in the codoped films.
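To give a feeling for how deep the 530 meV donor level discussed above is, one can compare its Boltzmann factor at room temperature with that of a shallow (~30 meV) donor, as in the sketch below. This is only an order-of-magnitude illustration: it ignores degeneracy factors and the effective density of states, so it is not a carrier-statistics calculation.

```python
# Order-of-magnitude illustration of "shallow" vs "deep" donor levels in ZnO.
import math

K_B_EV = 8.617e-5  # Boltzmann constant (eV/K)

def boltzmann_factor(depth_ev: float, temperature_k: float = 300.0) -> float:
    """exp(-E/kT) for a level a given depth (eV) below the conduction band."""
    return math.exp(-depth_ev / (K_B_EV * temperature_k))

if __name__ == "__main__":
    for depth in (0.03, 0.53):  # ~30 meV shallow donor vs the 530 meV level
        print(f"E_C - E_D = {depth*1000:.0f} meV -> exp(-E/kT) ~ {boltzmann_factor(depth):.1e}")
```

The shallow level gives a factor of order 0.3, whereas the 530 meV level gives roughly 10⁻⁹, which is why such a deep level can act as a recombination center for the GL without contributing free electrons at room temperature.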
However, in a work on ZnO-on-GaN heterostructures grown using the vapor cooling condensation system [39], compared to undoped ZnO, a remarkable broad green emission band in ZnO:In films was observed, and it was considered to be induced by the oxygen vacancies. Moreover, in our work on ultraviolet (UV) emission of In-doped ZnO nanodisks grown by carbothermal reduction at 1000 °C [59], air-cooled ZnO nanodisks showed a strong green emission, while furnace cooling in conjunction with introducing O 2 , around 1.0%, into flowing Ar during the growth significantly enhanced the growth rate and UV emission of ZnO nanodisks (while the green emission was significantly suppressed). The causes were attributed to the reduction of oxygen vacancies and surface defects. Besides, for the Zn 1−x Mg x O films prepared by the sol-gel technique [38], a GL was observed and its intensity decreased (while the intensity of the band-edge luminescence was seen to increase) with the increase of Mg content, which was partly attributed to a decrease in the number of V O -related defects. However, deferent results were also reported. Recently Kappers et al. [60] studied ZnO crystals grown by the seeded chemical vapor transport method. The samples were irradiated at room temperature with 1.5 MeV electrons to create oxygen and zinc vacancies, and then characterized using optical absorption, PL, and electron paramagnetic resonance (EPR). It was found that no correlation existed between the green emission and the presence of oxygen and/or zinc vacancies. Similarly, Vlasenko and Watkins [61] also found that electron irradiation produced O vacancies and other defects in ZnO, leading to a reduction in GL and an increase in PL bands near 600 and 700 nm. These works questioned the assumption that GL is related to V O . In addition, we noticed that the formation energy of zinc vacancies is as high as 4 eV in n-type ZnO [62], which implies that the concentration of V Zn in ZnO would be too low to cause the observed green band in the PL spectrum. So are the other native defects, for example, zinc interstitials [63] and oxide antisite defects [64,65], as have been suggested previously. Recently, Reshchikov et al. [66] systematically investigated the PL of defects in ZnO. They analyzed carefully the PL spectra obtained in wide ranges of excitation power densities and sample temperatures, as well as the spectra at various time delays after a pulsed excitation. They resolved a number of PL bands in the visible region of the PL spectrum, as seen in Figure 8 and Table 2 (some bands could not be observed at either 10 or 300 K) in which E A is the activation energy, Tcr is the critical temperature above which the quenching begins, both are quantities describing quenching behavior of the PL band. Based on the careful analyses of the spectra they reached the following conclusion. The majority of the defects remained unidentified and transitions responsible for particular bands need to be verified. The most recognizable PL bands are the OL band peaking at 1.96 eV at 10 K (assigned to transitions from shallow donors to a deep acceptor level about 0.5-0.6 eV above the valence band [67]) and the Cu-related GL4 band with the characteristic fine structure. Theoretically, there were also some first-principles calculations on electronic structure of defects in ZnO [68,69] based mainly on the local density functional approximation (LDA) or added with various corrections (LDA+U, LDA/GGA, etc.). 
However, for several decades these works gave a fundamental band gap value E_g < 1 eV (compared with the experimental value of 3.4 eV); therefore, the defect levels within the gap could only be estimated by means of various empirical corrections. Recently, based on hybrid density functional computations, a few authors [62,70,71] have made ab initio calculations of the band gap and of the atomic and electronic structure (especially of the defects) of ZnO. Their work gave the correct band gap (E_g = 3.4 eV), and so the calculated results for defect states and levels within the band gap are much more reliable. To understand the PL-related native defect states in ZnO, Hu and Pan [71] calculated the native defects (vacancies and interstitials, the latter occupying octahedral sites) in the charge states q = 0, +1 or −1, and +2 or −2. As part of the results, their Table 1 listed all the calculated single-electron energy levels (Kohn-Sham levels) at the Γ point of the Brillouin zone within the band gap of a model defect crystal (MDC). In the calculation, they used a 72-atom supercell containing one isolated native defect of the above kinds, and the atomic configuration was fully relaxed. Their calculation is spin-polarized; that is, the same orbital generally has different energies for spin-up and spin-down electrons. This is obvious for a supercell with an odd number of electrons, in which the number of spin-up electrons must differ from the number of spin-down electrons, such as one containing a defect with q = +1 or −1, since the exchange potential in the Kohn-Sham equation is then different for spin-up and spin-down electrons. However, for the V_Zn⁰-containing MDC, the calculated result is also spin-polarized. As pointed out by C. H. Patterson [70], the neutral Zn vacancy contains two "dangling holes". It is noted that the hole orbitals (unoccupied electron orbitals) mainly consist of 2p orbitals of the four oxygen atoms around the vacancy. The total-energy minimization produces a spin-split single-electron energy level scheme in which the spin-up levels descend and are fully occupied, and only two high-energy spin-down orbitals are left empty. Usually each MDC has several defect states; the number of defect levels within the band gap and the total number of electrons occupying each level can differ from one MDC to another. Therefore, the position of the Fermi level and the level occupation at temperature T = 0 also differ between MDCs. For the reader's convenience, in Figure 9 we plot an energy level diagram based on the data in Table 1 of their paper [71]. In Figure 9, the arrows "↑", "↓", and "↑↓" next to each level indicate that the corresponding level is spin-up, spin-down, or spin-degenerate, respectively. The double line "═" represents a doubly degenerate orbital. The solid sphere "•" represents an electron that occupies the level beneath it. The figure represents the occupation scheme at temperature T = 0 for the ground state of the studied system containing the defect indicated explicitly at the bottom of the figure. It should be noted that the electron-phonon (or electron-lattice) interaction may be strong for localized electronic states of defects. That is, the relaxed atomic configurations of different charge states of a defect are, in general, different from each other. The defect V_O in ZnO gives a typical example, for which the different charge states have very different relaxed atomic configurations.
For the neutral defect V_O⁰, the four Zn neighbors move inward, with the distances between these Zn atoms contracting by about 9% (or −12% according to Ref. [72]) relative to those in perfect ZnO, while for the charged V_O²⁺ the distances expand by about 19% (or 23% [72]). The origin of these large lattice relaxations can be interpreted by the change in the electron distribution in the different charge states of V_O. The removal of an oxygen atom from the lattice breaks four bonds. The four "dangling bonds" on the surrounding Zn atoms combine to form a fully symmetric single-electron state a₁ with an energy in the band gap (and three almost degenerate states located above the CBM). In the neutral charge state, the a₁ state is occupied by two electrons. The energy of the state is lowered as the four Zn atoms approach each other; at the same time, the Zn-O bonds are stretched. Finally, the gain in electronic energy balances the cost of stretching the Zn-O bonds surrounding the vacancy at the corresponding equilibrium configuration, and the resulting a₁ state lies in the gap near the top of the valence band. In the V_O⁺ charge state, the a₁ state is occupied by one electron. In this case, the competition of the electronic energy with the strain energy results in the four Zn atoms displacing slightly outward (about 2%), and correspondingly the a₁ state moves toward the middle of the band gap. In the V_O²⁺ configuration, the a₁ state is empty, which leads the four Zn atoms to relax strongly outward, and the empty a₁ state lies near and below the conduction band edge. Recently, refined results have been reported, in which Oba et al. [62] carried out first-principles calculations of the electronic structures of both charged and uncharged supercells containing 192 atoms using a hybrid functional approach combined with careful finite-size corrections. The resultant band structures for perfect bulk ZnO and for the defect-containing (V_O⁰) supercell are shown in Figure 10. As for the relaxation of the atomic configuration around the defects, they found … However, it should be pointed out that the above calculation is based on a single-electron approximation, in which, even roughly from the point of view of the Hartree approximation, the Kohn-Sham energy states are states of a single electron moving in an averaged field induced by the interaction between the electrons in the system. It is common to regard an optical transition simply as a transition of one electron between two Kohn-Sham levels of the system, and to calculate the difference between these levels as the transition energy. Actually, the optical transition energy of a multi-electron system is the difference of the total energies of the initial and final multi-electron states. The total energy of a multi-electron state contains the interaction between the electrons that occupy different single-electron (Kohn-Sham) levels. Therefore, the transition energy is generally different from the difference between the two (transition-involved) single-electron energy levels within a single set of Kohn-Sham levels. In addition, different states of a multi-electron system generally have different single-electron (Kohn-Sham) energy levels, owing to the different averaged fields. In other words, the energies of all the other electrons, and the interaction energy between them, also change after the studied one-electron optical transition, especially for transitions between different charge states of a localized multi-electron system, as in the case of defects.
Then, for properly describing a defect-related optical transition, a so-called optical transition level (see Section 4 for details), an effective single-electron energy level pertinent to the studied defect-related optical transition, has to be introduced into the band energy diagram instead of the Kohn-Sham levels. Determination of the transition level is of great interest but, to our knowledge, has not yet been calculated systematically. Obviously, such calculations would greatly push forward the study of the PL of defects in ZnO.

Theoretical Study of Unintentional N-Type Conductivity

The electronic properties of ZnO were traditionally explained, over the past decades, by invoking intrinsic defects, that is, native and unintentionally introduced point defects. In particular, the frequently observed unintentional n-type conductivity was often attributed to oxygen vacancies [73,74]. However, previous theoretical works associated the defect V_O with rather deep levels below the CBM [72,75], indicating that further exploration of the origin of the unintentional n-type conductivity of ZnO was needed. Recently, the native defects and the hydrogen impurity in ZnO were reinvestigated by Oba et al. [62], who calculated the formation energy (the total-energy difference between the defect-containing and perfect systems) and the related transition levels for each kind of defect. The results were self-consistent and agreed well with the experimental ones. Here, we first give a brief introduction to the concept of the transition level. The transition level is related to the total energies of the initial and final defect states of a particular type of transition [76]. As an example, consider the transition between the 2+ and neutral charge states of V_O: the thermodynamic transition level ε(2+/0) is the Fermi energy at which the formation energies of the two charge states are equal, that is, ε(q/q′) = [E_f(q; E_F = 0) − E_f(q′; E_F = 0)]/(q′ − q). From this relation the transition levels can be read off once the formation energies are known. Figure 11(a) of Ref. [62] presents the formation energies as a function of the Fermi energy. The Fermi energy here is taken as the energy (chemical potential) of the reservoir from (in) which electrons are removed (placed) to form a charged defect [76]. The zinc interstitials, Zn_i(o) and Zn_i(t), have transition levels located at 0.05 and 0.1 eV below the CBM, respectively. These levels are regarded here as the transition levels among the three charge states, (2+/+/0), because the difference between the (2+/+) and (+/0) transition levels is comparable to the accuracy of the calculations. The location of the (2+/+/0) transition levels near the CBM indicates that the zinc interstitials are shallow single or double donors, which is consistent with the experimental report (donor energy: 0.03 eV [77]). However, in these cases the formation energies are as high as 4 and 5 eV even at the oxygen-poor limit. Therefore, the zinc interstitials are unlikely to form at a substantial concentration in n-type ZnO. For the hydrogen impurity, both H_i and H_O show (+/0) transition levels located nearly at the CBM, as reported in all the studies. The formation energies are 1.2 eV or lower, depending on the Fermi energy, which is close to an experimental estimate of 0.8 eV [78] and as low as that of V_O at the hydrogen-rich (oxygen-poor) limit. Therefore, their results support the proposed role of the hydrogen impurity as a shallow donor [79]. Since V_Zn has a high formation energy (between 3.5 and 7.1 eV), it is not expected to exert significant effects on the composition and carrier concentration; therefore, a strong preference for the donor-like defects over the acceptor-like V_Zn under oxygen-poor conditions is found. (A minimal numerical illustration of how charged-defect formation energies vary with the Fermi level, and of how a transition level follows from them, is given below.)
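The sketch below illustrates the formation-energy and transition-level concepts just introduced. The input energies are invented round numbers for a hypothetical double donor; they are assumptions for illustration only and are not the calculated values of Ref. [62].

```python
# Minimal illustration of E_f(q; E_F) = E_f(q; E_F = 0) + q * E_F and of the
# (q/q') transition level as the crossing point of two charge states.
# The numbers below are invented for a hypothetical double donor.
def formation_energy(e_f0: float, charge: int, fermi_ev: float) -> float:
    """Formation energy (eV) of a defect in charge state q at Fermi level E_F (eV above the VBM)."""
    return e_f0 + charge * fermi_ev

def transition_level(e_f0_q: float, q: int, e_f0_qp: float, qp: int) -> float:
    """Fermi level (eV above the VBM) at which charge states q and q' have equal formation energy."""
    return (e_f0_q - e_f0_qp) / (qp - q)

if __name__ == "__main__":
    # Hypothetical values: E_f(2+) = 0.5 eV and E_f(0) = 2.9 eV at E_F = 0.
    e2, e0 = 0.5, 2.9
    eps = transition_level(e2, +2, e0, 0)
    print(f"(2+/0) transition level ~ {eps:.2f} eV above the VBM")
    for ef in (0.0, 1.0, 2.0, 3.0):
        print(f"E_F = {ef:.1f} eV: E_f(2+) = {formation_energy(e2, 2, ef):.1f} eV, "
              f"E_f(0) = {formation_energy(e0, 0, ef):.1f} eV")
```

As the output shows, the charged state is favored at low Fermi levels and the neutral state above the crossing point, which is exactly how transition levels are read off plots such as Figure 11(a) of Ref. [62].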
This tendency is consistent with the experimental observation of the nonstoichiometric and n-type behavior of ZnO, but it had not been reproduced by previous calculations other than those applying a posteriori band gap corrections to the LDA/GGA, as suggested in Ref. [75]. In addition to the deep localized state (DLS), a perturbed host state (PHS) appears in the Coulomb potential of the defect, which is a delocalized, hydrogen-like shallow state. Consequently, the photoexcited electrons occupy the lower-energy PHS (rather than the DLS); thus a conducting configuration constituting n-type persistent photoconductivity (PPC) is established. This light-induced configuration is metastable against the depopulation of electrons from the PHS into the deep ground state with small d(Zn-Zn), across an energy barrier. Figure 12 illustrates the two kinds of transition levels, the thermodynamic and the optical ones. The former is used for describing thermodynamic equilibrium processes and so is defined as the difference of the total (or formation) energies of two charge states (thermodynamic equilibrium states) of the same kind of defect, each taken, by definition, at its respective relaxed atomic configuration; these have mostly been calculated for ZnO [62]. The latter is used for optical transitions; therefore, both the total energies of the initial and final defect states correspond to the relaxed atomic configuration of the initial state (so that the absorption and emission energies generally differ from the thermodynamic transition energies).

P-Type Doping

N-type ZnO is easily achieved even without intentional doping. However, p-type ZnO is very difficult to prepare, owing to the compensation effect originating from native defects or background impurities, as well as the limited solubility and inactivation of the acceptor dopants in the ZnO films. Owing to the lack of reliable, reproducible, and stable p-type ZnO, progress in ZnO-based devices has been limited. Intensive studies are continuously devoted to overcoming this bottleneck. It is well known that the acceptor dopants in ZnO include group-I elements such as Li [83-85], Na, and K [86,87], group-Ib elements Cu [88] and Ag [89-91], and group-V elements such as N, P, As, and Sb. Although it has been theoretically suggested that group-I elements substituting for Zn possess shallow acceptor levels, it also appears that group-I elements, especially small impurities such as Li, tend to occupy interstitial sites (e.g., Li_i), which act as donor defects [92]. Much effort has been focused on the controllable doping of group-I elements. Lu et al. [84,85] deposited Li-doped ZnO films by PLD with or without using an ionization source. They found that Li atoms had a higher probability of forming Li_Zn than Li_i, and this tendency was very strong under oxygen-rich growth conditions. The best result was obtained for ZnO:Li at a 0.6 at % Li content, which had a hole concentration of 6.04 × 10¹⁷ cm⁻³, a Hall mobility of 1.75 cm²/Vs, and a resistivity of 5.9 Ω·cm. Wu and Yang [86] prepared K-doped p-ZnO thin films using the rf magnetron sputtering technique under an oxygen-rich atmosphere. The optimal p-ZnO:K films possessed a higher hole concentration of 8.92 × 10¹⁷ cm⁻³ and a lower resistivity of 1.8 Ω·cm. Ag-doped ZnO films were fabricated by PLD [89,90] and reactive sputtering [91], using Ag₂O as the silver dopant, and p-type ZnO could only be obtained within a narrow deposition temperature window. A neutral-acceptor-bound exciton emission peak at 3.317 eV was observed at 11 K [90]. Recently, several researchers have put effort into the exploration of p-ZnO films doped with group-V elements as p-type dopants, such as N [27,35,93], P [94-96], As [97-99], and Sb [100,101].
For group-V elements, Park et al. [92] have also given a prediction, by using first-principles pseudopotential method, that P and As are amphoteric: substitutional defects P O and As O are deep acceptors, but due to the size mismatching to O, donor-like antisite defects P Zn and As Zn are more likely to form to avoid the build-up of local strains near the O site. They concluded that among these acceptors nitrogen dopant with a shallow acceptor level is a promising candidate to substitute for oxygen atoms as N O in the ZnO films because of the similar ionic radius and smallest ionization energy. To date, N-doped ZnO films have been prepared using various deposition methods such as chemical vapor deposition [27,[102][103][104], spray pyrolysis [105], pulsed-laser deposition [106], implantation [107], and sputtering technology [108][109][110] using different nitrogen sources such as N 2 , NH 3 , N 2 O, Zn 3 N 2 , and MMH y (monomethyl hydrazine). However, the reliability and reproducibility in obtaining p-type ZnO:N is still controversial. Because of the much higher chemical activity of O compared to that of N, Zn is prone to combine with O rather than N, resulting in the N atoms being difficult to be introduced into ZnO films. To solve this problem, Yamamoto and Yoshida [111] studied the "unipolarity" doping problem in ZnO crystal, based on the results of ab initio electronic band structure calculations and they found that while p-type doping using the N species leads to an increase in Madelung energy n-type doping using Al, Ga or In causes a decrease in Madelung energy. Then they proposed a codoping method, which use simultaneously nitrogen acceptors and reactive III-group donors as dopants complex such as Ga-N, In-N, and Al-N, to increase the solubility of N atoms in the ZnO films and lower the acceptor level in the band gap due to strong interaction between N acceptors and reactive donor codopants. In recent years, p-type ZnO films were comprehensively achieved by using the codoping method [35,[112][113][114][115][116][117][118][119]. Compared with Ga and In atoms, Al is more suitable as reactive donors for their superior advantages such as low cost and near containment-free material as well as the superior stability for the strong Al-N and Al-O bonds. In view of the formation of Al-N codoped p-ZnO sensitive to the deposition condition, we proposed a controllable and well-configured rf magnetron co-sputtering method to prepare Al-N codoped ZnO films by using hexagonal ZnO and AlN as targets [20]. The rf magnetron co-sputtering system used in our lab is equipped with a dual rf power supply that generate two different rf powers with synchronized phases, which has been described in Section 2. With this method the doping concentration in the deposited ZnO films could be easily controlled by the co-sputtering rf power on each target. It is found that the measured Al atomic ratio [Al/ (Zn+Al) at %] in the co-sputtered film decreases from 9.22-2.46 at. %, when the rf power supplied on the ZnO target increases from 40 to 410 W and that on AlN target is fixed at 85 W. The measured ratios are much smaller than the value theoretically evaluated from the deposition rates of the two targets, indicating the difficulty of incorporating AlN into ZnO films. The true atomic ratio of nitrogen to aluminum [N/Al in at %] in the AlN-ZnO co-sputtered films as a function of the rf co-sputtering power on the ZnO target is shown in Figure 13. 
The N atomic concentrations in these Al-N codoped ZnO films are higher than the Al atomic concentrations, especially for the films prepared at elevated co-sputtering powers on the ZnO target. In spite of the higher ratio [N/Al], as-deposited Al-N codoped ZnO films still show n-type conductive behavior, indicating the inactivity of N acceptor dopants in these Al-N codoped films deposited at room temperature. It is found that only after an adequate post-annealing treatment the N-related acceptor dopants are activated, and the resulting ZnO films exhibit p-type conductive behavior (Table 3). Annealing at 300 °C, the Al-N codoped ZnO film performed n-type conduction with a slightly higher electron concentration than the as-deposited film. This indicated that 300 °C was too low to activate the N-related acceptors and more donors were generated. As the annealing temperature reached 400 °C, the annealed Al-N codoped ZnO film with a hole carrier concentration of 5.04 × 10 18 cm −3 , mobility of 2.35 cm 2 V −1 s −1 , and resistivity of 0.527 Ωcm was obtained, whereas an undoped ZnO film annealed under the same conditions showed an electron concentration of 2.57 × 10 17 cm −3 and Hall mobility of 5.47 cm 2 V −1 s −1 . This implied that large amounts of N-related acceptors were effectively activated and predominated over the donors in the codoped film under this annealing condition. By further increasing the annealing temperature, more donor-related defects such as V O and V N were prone to be produced, resulting in a decrease in the hole concentration. In addition, the room temperature PL spectrum also showed the features related to the activated N-related acceptors: the 60 meV redshift of the shallow level transition (from 3.07 eV in undoped ZnO film to 3.01 eV) and the suppression of oxygen-related deep level emission (2.11 eV) (Figure 14). In order to improve the quality of p-type ZnO codoped with AlN and study the function of nitrogen in these films in more detail, we [120] prepared a series of AlN codoped ZnO films using the same radio frequency (rf) magnetron cosputtering system, but under various N 2 /(N 2 + Ar) flow ratios and with lower rf power. The rf powers of AlN and ZnO were fixed at 25 and 100 W, respectively, and the N 2 /(N 2 + Ar) flow ratios of 0%, 4%, 8%, and 12% were used with the total flow rate kept at 50 sccm Then the prepared samples were post-annealed at 400, 450, and 500 °C for 10 min in a N 2 ambient to activate the doping impurities. The electronic properties of samples deposited in various atmospheres and annealed at different temperatures are listed in Table 4. The highest electron concentration of samples B would be attributed to the effective substitution of Zn sites by Al atoms provided by cosputtering AlN, which also implied that at the deposition condition of B group (under pure Ar ambient) not enough N acceptors were formed. In other words, the formation of Zn-N bonds was not preferential in an insufficient N atmosphere. The sample with a lower annealing temperature of 400 °C in group C exhibited n-type conductive behavior. With the same annealing temperature, the samples in groups D and E, deposited under higher N 2 /(N 2 + Ar) flow ratios, exhibited a high resistivity and ambiguous carrier type. 
However, with an intermediate post-annealing temperature of 450 °C, the deposited films of groups C, D, and E converted from n-type to p-type conduction, indicating that the N-related acceptor dopants are properly activated by annealing at this temperature. The lower hole concentration of samples D and E, deposited under higher N2/(N2 + Ar) flow ratios, is ascribed to self-compensation induced by a higher (N2) O concentration within the ZnO films. Moreover, the change of carrier type of the samples in groups C, D, and E from p-type back to n-type after annealing at the higher temperature of 500 °C is attributed to the dissociation of Zn-N bonds and the formation of native defects, such as oxygen vacancies. All these suppositions were supported by the results of XRD, low-temperature PL (LTPL) and time-resolved PL (TRPL) measurements. Figure 15(a) shows the LTPL spectra in the UV range of the undoped and AlN codoped ZnO films post-annealed at 450 °C. The emission peaks at 3.362 and 3.315 eV of samples A and B are assigned to the neutral-donor bound exciton (D°X) and the donor-acceptor pair (DAP) transition, respectively. A new strong peak at 3.332 eV, clearly observed for samples C, D, and E, is attributed to the neutral-acceptor bound exciton (A°X), while the peak at 3.278 eV is assigned to the recombination of free electrons with holes on the acceptor level (FA) associated with nitrogen on the oxygen site, N O. The LTPL spectra in the deep-level emission range of the ZnO and AlN-ZnO [N2/(N2 + Ar) = 0%, 4%, 8%, 12%] films annealed at 450 °C are shown in Figure 15(b). Two emission bands at 2.35 and 1.89 eV, presumably corresponding to oxygen vacancies and oxygen interstitials, respectively, are weaker in sample B than in sample A, implying that fewer oxygen-related defects exist in sample B. This suggests that the higher electron concentration of sample B (listed in Table 4) should be attributed to substitutional Al atoms on Zn sites rather than to oxygen vacancies. Compared with sample C, more N2 on oxygen substitution sites, (N2) O, exists in samples D and E, which results in the larger lattice constants identified by the XRD results (not shown here); this is consistent with the fact that samples D and E were deposited under higher N2/(N2 + Ar) flow ratios. Furthermore, the higher intensity of the 2.35 eV emission band in samples D and E than in sample C implies that more O vacancies exist in samples D and E, which can tentatively be interpreted as compensation for the larger amount of (N2) O in these films. Because both O vacancies and (N2) O act as donors in ZnO, the N acceptors formed in samples D and E are partly compensated by them, resulting in the lower hole concentration seen in Table 4. Figure 15(c) shows the LTPL spectra of sample C [N2/(N2 + Ar) = 4%] annealed at 400, 450, and 500 °C. The broad blue-green emission band around 2.4-2.8 eV originates from deep levels caused by dopant-induced defects, while the other broad deep-level emission at 2.15 eV is regarded as oxygen-related. As shown in Figure 15(c), the dopant-induced emission increases and the oxygen-related emission decreases when the annealing temperature is raised from 400 to 450 °C for sample C. This implies that more nitrogen atoms occupy oxygen vacancy sites to form N O acceptors, which explains the stable p-type behavior of the group-C sample annealed at 450 °C.
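The acceptor ionization energy implied by the FA line can be estimated with the standard relation E_FA = E_g - E_A + k_B T/2; in the sketch below, the low-temperature band-gap value and the measurement temperature are assumed literature-style figures, not numbers reported in this work.

```python
# Hedged estimate of the N_O acceptor ionization energy from the
# free-electron-to-acceptor (FA) line at 3.278 eV, using
# E_FA = E_g - E_A + k_B*T/2. The band gap and temperature are assumptions.
k_B = 8.617e-5        # eV/K
T = 10.0              # K, assumed LTPL measurement temperature
E_g = 3.437           # eV, assumed low-temperature gap of ZnO (literature value)
E_FA = 3.278          # eV, FA peak reported above
E_A = E_g - E_FA + 0.5 * k_B * T
print(f"estimated N_O acceptor level: {E_A*1000:.0f} meV above the valence band")
```

The resulting value of roughly 160 meV is in the range commonly cited for N_O acceptors, which is consistent with the assignment of the 3.278 eV band to an FA transition.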
The degradation of the p-type behavior for samples annealed at higher temperatures is attributed to the dissociation of Zn-N and Zn-O bonds. The results discussed above lead us to the conclusion that high-quality p-type ZnO films can be obtained by co-sputtering ZnO and AlN under an adequate N2/(N2 + Ar) flow ratio of 4% followed by post-annealing at 450 °C. Nanostructured ZnO Materials It is well known that nanoscale semiconductors can have optical and electrical properties superior to those of bulk crystals owing to quantum confinement effects, and great effort has been devoted to nanoscience and nanotechnology worldwide. Nanostructured ZnO is likewise a hot topic in the research community, especially one-dimensional (1D) ZnO nanostructures. A large number of reports on 1D nanostructured ZnO of different shapes have been published and comprehensively reviewed [121]. Here we will not review all aspects of this attractive field but focus only on a few topics. Many methods have been used to obtain ZnO nanowires or nanorods. For example, Lee et al. [122] used MOCVD to grow well-aligned, single-crystalline ZnO nanowires on GaAs substrates. Yuan et al. [123] reported that well-aligned ZnO nanowire (NW) arrays with durable and reproducible p-type conductivity were synthesized on sapphire substrates via vapor-liquid-solid growth using N2O as the dopant source. Low-temperature growth routes for ZnO nanorods have also been reported [124,125]. 2D ZnO nanostructures have a large surface area and are thus suitable for potential applications in nanoscale optoelectronics, sensors, dye-sensitized solar cells, light emitters, etc. In our laboratory, several deposition methods have been used to grow nanostructured ZnO layers, such as a low-pressure vapor-phase transport process [126], carbothermal reduction, deposition by vapor cooling condensation through a porous Al template, and the hydrothermal method. We fabricated In-doped ZnO nanodisks (Figure 16) by carbothermal reduction [59] and found that In doping appears to increase the activation energy required for the growth units to join the (0002) surface, resulting in suppression of growth along the [0001] direction. The air-cooled ZnO nanodisks (cooled in air after growth by taking the samples out of the furnace) showed a very strong green emission, which can be attributed to the defects produced by In doping. However, furnace cooling (natural cooling to room temperature in the furnace), together with the introduction of around 1.0% O2 into the flowing Ar during growth, significantly enhanced the growth rate and UV emission of the ZnO nanodisks, while the green emission was significantly suppressed when the oxygen concentration was increased from 0.5 to 1%. The latter can be attributed to the reduction of oxygen vacancies and surface defects in the ZnO nanodisks. It is noticeable that when the oxygen concentration was increased to 10%, the intensity of the UV emission was considerably reduced; it is conceivable that at a higher oxygen partial pressure many Zn vacancies form in the ZnO nanodisks, suppressing the UV emission. ZnO nanorod arrays with different densities and sizes were also fabricated by a chemical vapor transport process in our laboratory [31]. The source material, Zn powder (purity 99.5%), was placed at the sealed end of a quartz tube in a furnace. The system was maintained at a pressure of about 30 Torr, and argon was used as the carrier gas at a constant flow rate of 200 sccm.
Oxygen gas at a flow rate of 30 sccm was introduced into the furnace at 450 °C. Samples A, B, C, and D were placed at ~13, 18, 23, and 28 cm from the center of the furnace, their temperatures being ~640, 630, 560, and 470 °C, respectively. The Zn source was maintained at 650 °C for 30 min, after which the furnace was cooled down to room temperature naturally. (Figure 16: In-doped ZnO nanodisks grown with 1 sccm O2 and subjected to furnace cooling; reprinted from Ref. [59] with permission.) Figure 17 shows SEM images of samples A, B, C, and D. Sample A consists of micropyramids on the Si substrate, with short nanorods formed only on the tips of some of the micropyramids. Sample B consists of micropyramids with thin nanorods on their tips. Sample C shows high-density ZnO nanorod arrays on the tips of small micropyramids. The morphology of sample D, however, differs from that of the other samples: the tip size of the nanorods grown on the pyramids is larger than the root size. The growth mechanism of the nanorod arrays was analyzed. In brief, the population of ZnO nanorods is controlled by the secondary growth of nanorods on the tips of the primarily grown pyramids. The hexagonal prisms and the resulting pyramids are attributed to the difference in the growth rates of different crystal planes, and the overgrowth of ZnO nanorods on the micropyramid tips is due to oriented attachment. The density and size of these nanorods are influenced by the difference in saturation pressure at different temperatures and by the transport of the source material by the carrier gas. The dense nanorods with small size exhibited excellent field-emission properties. We also fabricated vertically well-aligned ZnO nanowires by spin-on-glass technology on ZnO:Ga/glass templates [127], which have been used for constructing photodetectors. In addition to the preparation of nanostructured ZnO, its optical properties are another important issue to be explored. Generally, the optical properties of nanostructured semiconductors depend strongly on their size and shape [128], which is a crucial issue in nanophysics because of their role in carrier relaxation [129] and their importance for many applications such as single-photon sources [130] and high-frequency acoustic emitters [131]. For example, Fonoberov et al. studied theoretically the optical properties of zero-dimensional ZnO nanostructures, or quantum dots (QDs). They found that, depending on the fabrication technique and the ZnO QD surface quality, the UV photoluminescence of ZnO QDs with diameters of 2-6 nm originates from the recombination of either confined excitons or surface-bound ionized acceptor-exciton complexes; in the latter case, a Stokes shift of about 100-200 meV is observed in the photoluminescence spectrum. They also found that the radiative lifetime of the confined exciton and that of the ionized donor-exciton complex are almost the same. The lifetimes in both cases decrease with QD size and are about an order of magnitude shorter (for a QD of radius R ~ 2 nm) than the bulk exciton lifetime; for a QD with a diameter of 5 nm a lifetime of 38 ps was obtained, in agreement with the conclusion of Bahnemann et al. [132]. This is beneficial for optoelectronic device applications. On the other hand, the radiative lifetime of the ionized acceptor-exciton complex increases rapidly with QD size; in particular, for a QD of R ~ 2 nm it is about two orders of magnitude longer than the bulk exciton lifetime. They therefore proposed using the exciton radiative lifetime as a probe of exciton localization.
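To see why confinement dominates in this size range, one can compare the quoted QD diameters with the bulk exciton Bohr radius of ZnO; the hydrogenic estimate below uses assumed textbook values for the effective masses and dielectric constant, so it is indicative only.

```python
# Hedged back-of-the-envelope: exciton Bohr radius of ZnO from hydrogenic
# scaling a_X = eps_r * (m_0 / mu) * a_0. Effective masses and the dielectric
# constant are assumed textbook-style values, not taken from this review.
a_0 = 0.0529            # nm, hydrogen Bohr radius
m_e, m_h = 0.24, 0.59   # assumed effective masses in units of m_0
eps_r = 8.0             # assumed static dielectric constant
mu = m_e * m_h / (m_e + m_h)
a_X = eps_r / mu * a_0
print(f"reduced mass ~ {mu:.2f} m_0, exciton Bohr radius ~ {a_X:.1f} nm")
# QDs with 2-6 nm diameter (radius 1-3 nm) are therefore comparable to a_X,
# which is why confined-exciton effects dominate in this size range.
```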
Laser action from nanostructured ZnO is another interesting topic that has been widely studied. As an example, X. H. Han et al. [133] demonstrated lasing from high-density, well-aligned ZnO nanorod arrays synthesized by an improved vapor transport method on ZnO thin films. Short-wavelength luminescence of nanostructured ZnO is of great interest because of its potential optoelectronic applications, and because of its fundamental importance we studied exciton-related interactions in ZnO nanostructures. We fabricated high-quality ZnO nanowires about 70 nm in diameter and 1000 nm in length on Si (100) substrates by a low-pressure vapor-phase transport process [127] and then studied the free excitonic transition in individual ZnO nanowires (Figure 18; reprinted from Ref. [126] with permission). ZnO Based Light Emitting Diodes Owing to the progress in the preparation of ZnO materials, especially p-ZnO, increasing research effort has been devoted to the exploration of various ZnO-based short-wavelength light-emitting diodes. Progress in this area continues, and new improvements have been demonstrated in recent years, as summarized in a recent publication [134]. Firstly, the exploration of ZnO p-n homojunction LEDs is still an attractive research topic. The progress achieved in this direction is closely related to the improvement of p-type ZnO materials. It is not difficult to list a few recently reported examples of ZnO p-n junction LEDs in which p-ZnO was obtained with different dopants, such as NO [135], N2O [136], N2-O2 [137], N-Al [138], N-In [139], and Sb [101]. Various deposition methods, including plasma-assisted molecular beam epitaxy, dc reactive magnetron sputtering, MOCVD, ultrasonic spray pyrolysis and so on, were used to deposit the ZnO layers. In most cases, the resulting ZnO p-n junction LEDs emitted blue-violet electroluminescence under forward bias at low temperature, attributed to donor-acceptor pair recombination in the p-ZnO; some works also reported emission from deep defect levels. Device performance has improved, but the EL is still substantially quenched at room temperature, which can be ascribed to the imperfection of the p-ZnO. Since high-quality p-ZnO is still not available, many efforts have continued to focus on an alternative type of device structure, the n-ZnO-based heterojunction (e.g., [140]), in which the active n-ZnO layer is deposited on another p-type material to form an n-ZnO-based p-n junction LED. Rogers et al. [141] fabricated an n-ZnO/p-GaN:Mg heterojunction LED on c-Al2O3 substrates using pulsed laser deposition for the ZnO and metal-organic chemical vapor deposition for the GaN:Mg. Room-temperature electroluminescence of the LED showed an emission peaked at 375 nm, the same as the photoluminescence of the ZnO layer, indicative of near-band-edge excitonic emission. Over the injection-current range 500-875 mA, the light-current (L-I) characteristic appears fairly linear. Long et al. [142] reported ultraviolet light-emitting diodes based on ZnO/NiO heterojunctions fabricated on commercially available n+-GaN/sapphire substrates using a radio-frequency magnetron sputtering system. Near-band-edge emission of ZnO peaking at 370 nm was achieved at room temperature when the devices were under sufficient forward bias.
It was also demonstrated that the device performance could be improved by inserting an electron-blocking i-Mg1−xZnxO (0 < x < 1) layer. Titkov, Delimova et al. [143] demonstrated white electroluminescence (EL) from ZnO/GaN structures fabricated by pulsed laser deposition of ZnO:In onto MOCVD-grown GaN:Mg/GaN structures on Al2O3 substrates. The white EL was produced by the superposition of the two strongest emission lines, a narrow blue line at 440 nm and a broad yellow band around 550 nm, which was attributed to the high density of structural defects at the n-ZnO/p-GaN interface. Bayram et al. [144] reported a hybrid green light-emitting diode comprising n-ZnO/(InGaN/GaN) multi-quantum wells/p-GaN, grown on semi-insulating AlN/sapphire using pulsed laser deposition for the n-ZnO and metal-organic chemical vapor deposition for the other layers. The LEDs showed a turn-on voltage of 2.5 V and room-temperature EL centered at 510 nm. Besides the development of various device structures, surface passivation has also been used to improve device performance; for example, Yu-Lin Wang et al. [145] investigated the passivation effects of PECVD-deposited SiO2 and SiNx on ZnO-based heterojunction p-i-n LEDs. In recent years, p-ZnO has also been used to construct heterostructure LEDs in combination with other materials; however, many of these devices yield deep-level-related emissions with weak or no UV emission. Mandalapu et al. [146] reported an Sb-doped p-ZnO/n-Si heterojunction diode fabricated by MBE. Near-band-edge and deep-level emissions were observed from the LED devices at both low temperature and room temperature, arising from band-to-band and band-to-deep-level radiative recombination in ZnO, respectively. Devices of more complex structure have also been reported. Kim et al. [147] fabricated ZnO-based LEDs using ZnO:P/Zn0.9Mg0.1O/ZnO/Zn0.9Mg0.1O/ZnO:Ga p-i-n heterostructures, which showed EL from deep-level emission at low forward bias and near-band-edge ultraviolet emission at high bias. Ryu et al. [45] fabricated ZnO-based ultraviolet LEDs comprising a ZnO/BeZnO MQW active layer between n-type and p-type ZnO and Be0.3Zn0.7O layers; arsenic and gallium were used as dopants for the p-type and n-type layers, and the EL emission was attributed to the active layer. This kind of device structure was also used for a laser diode [148], whose lasing mechanism is inelastic exciton-exciton collision. Chu et al. [149] also reported ZnO-based quantum-well diode lasers, in which Sb-doped p-type ZnO/Ga-doped n-type ZnO with an MgZnO/ZnO/MgZnO quantum well embedded in the junction was grown on Si by molecular beam epitaxy. The diodes lase at room temperature with a very low threshold injection current density of 10 A/cm2; the lasing mechanism is exciton-related recombination. In addition, ZnO-based LEDs and laser diodes of MIS or MOS structure have also been demonstrated, for example an MIS-structure LED [150] and electrically pumped UV random lasing in a c-axis-oriented ZnO-based MOS (Au/SiOx (x < 2)/ZnO) device [151]. Finally, it is of interest to mention the progress in nanostructured ZnO LEDs. Yang et al. [152] demonstrated efficient UV and red EL at room temperature from ZnO nanorod-array light-emitting diodes, in which the p-type ZnO was formed by As+ ion implantation into the as-grown ZnO nanorods. Jeong et al.
[153] reported ZnO nanowire-array-embedded n-ZnO/p-GaN heterojunction light-emitting diodes, in which electroluminescence at a wavelength of 386 nm was emitted from the ZnO nanowires. Bao et al. [154] constructed a single-nanowire light-emitting diode, and a hybrid light-emitting p-n junction diode has been produced using ZnO nanorods as the n-type material [155]. Zimmler et al. [156] recently fabricated ZnO nanowire LEDs on a heavily doped p-type silicon substrate (Figure 19). The electroluminescence of the single-wire LED was attributed to bound- and free-exciton-related recombination, together with their LO-phonon replicas. Figure 19. Schematics of the n-ZnO/p-Si nanowire LEDs: (a) top view, (b) cross-sectional view. Reprinted from Ref. [156] with permission. Random lasing action was also demonstrated by Leong and Yu [157] in p-SiC(4H)/i-ZnO-SiO2 nanocomposite/n-ZnO:Al heterojunction (p-i-n) laser diodes. The intrinsic layer, inserted between the n- and p-injection layers, consisted of ZnO powder embedded in a SiO2 matrix prepared by the sol-gel technique; the lasing was due to the ZnO clusters. It is noted from the above discussion that defects in ZnO, including in undoped material, frequently manifest themselves in the luminescence. In order to fabricate ZnO-based UV LEDs, ZnO with very few deep-level defects has to be used for device fabrication. We have discussed above that the vapor cooling condensation method can deposit ZnO layers with far fewer oxygen vacancies. Using this method to deposit both doped and undoped ZnO layers, we have developed two kinds of n-ZnO-based hybrid LEDs: p-GaN/n-ZnO:In (p-n) and p-GaN/i-ZnO/n-ZnO:In (p-i-n) heterojunction light-emitting diodes [39]. The structures of both types of diode are shown schematically in Figure 20. The room-temperature current-voltage characteristics clearly demonstrated rectifying diode-like behavior for both devices. The EL spectra of the fabricated p-n and p-i-n devices under forward bias are shown in Figure 21. The EL spectrum of the p-GaN/n-ZnO:In heterojunction LED consists of a broad emission band at 432 nm, attributed to the transition from the conduction band to the acceptor level in the Mg-doped p-GaN when electrons are injected from the n-ZnO:In into the Mg-doped p-GaN. Interestingly, the EL spectrum of the p-i-n device showed a pure UV band at 385 nm, consistent with the PL spectrum of the ZnO layer deposited by the vapor cooling condensation method. This indicates that the EL emission of the p-i-n LEDs comes from the ZnO layer of the heterostructure, namely the undoped i-ZnO layer inserted between n-ZnO:In and p-GaN: the holes from p-GaN and the electrons from n-ZnO:In are injected into the i-ZnO layer and recombine radiatively there. It is clear that the UV EL of the p-i-n heterostructure developed in this way is closely related to the fact that the inserted i-ZnO layer, deposited by the vapor cooling condensation method, contains far fewer luminescent defects. It is also worthwhile to point out that the developed p-i-n LED has potential applications in UV LEDs for high-temperature operation [39]. Figure 20. Schematic of the hybrid (a) p-n and (b) p-i-n heterojunction LEDs. Reprinted from Ref. [39] with permission. Figure 21. Room-temperature EL spectra of the p-n and p-i-n devices. Reprinted from Ref. [39] with permission.
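A quick way to relate the EL peak wavelengths quoted above to photon energies is E[eV] = 1239.84/λ[nm]; the snippet below applies it to the emission bands of the devices just described (the ~3.3 eV reference for the room-temperature ZnO near-band-edge emission is an assumed literature figure).

```python
# Converting quoted EL peak wavelengths to photon energies.
for label, lam in [("p-GaN/n-ZnO:In p-n EL", 432.0),
                   ("p-GaN/i-ZnO/n-ZnO:In p-i-n EL", 385.0),
                   ("ZnO nanowire heterojunction EL", 386.0)]:
    print(f"{label:32s}: {lam:5.1f} nm -> {1239.84/lam:.2f} eV")
```

The 385-386 nm bands convert to about 3.2 eV, i.e. close to the ZnO near-band-edge energy, whereas the 432 nm band (about 2.9 eV) is consistent with the conduction-band-to-Mg-acceptor transition in p-GaN.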
In a recent investigation, we succeeded in fabricating purely ZnO-based p-i-n UV LEDs [158]. Figure 22 shows the schematic diagram of the fabricated p-ZnO:AlN/i-ZnO/n-ZnO:In LED. The AlN codoped p-ZnO layer was deposited on sapphire substrates using the radio-frequency (RF) magnetron co-sputtering system with a dual RF power supply, while the i-ZnO and n-type ZnO:In films were fabricated by the vapor cooling condensation method. The resulting LEDs showed UV EL peaked at ~3.2 eV when operated under different injection currents at room temperature, as shown in Figure 23. The observed EL band is again attributed to radiative recombination in the i-ZnO layer, in which, as demonstrated previously, deep-level defects are very few. We have also succeeded in fabricating ZnO nanorod-based heterostructured ultraviolet LEDs [159]. Figure 24 shows the schematic of the p-GaN layer/i-ZnO nanorod array/n-ZnO:In nanorod array heterostructured LEDs. Both the i-ZnO and the n-ZnO:In nanorod arrays in this structure were grown on a p-GaN layer through an anodic alumina membrane (AAM) template using the vapor cooling condensation method. Figure 25 shows the room-temperature EL spectra of the ZnO nanorod-based p-i-n heterostructured LEDs operated at 15 A and 35 A. In particular, the p-i-n devices exhibit a UV EL emission centered at 386 nm, which becomes the dominant emission at the higher injection current of 35 A. This UV band is attributed to the near-band-edge emission of the i-ZnO nanorod array of the p-i-n structure (Figure 25: p-i-n nanorod heterostructured LEDs). Nanostructures can also be used to improve the light extraction efficiency. In a recent work of our laboratory [160], Al-doped ZnO (AZO) films with embedded Al nanoclusters were deposited by a magnetron co-sputtering system to serve as the transparent conductive layer (replacing the ITO film) of GaN-based light-emitting diodes, as shown schematically in Figure 26. The fabricated LEDs with the AZO film prepared at a higher DC Al co-sputtering power (10 W) show an increase of 20% in light output power compared with conventional LEDs, which is 10% more than that of LEDs with AZO prepared at an Al co-sputtering power of 7 W. The extra increase in light output power is attributed to the formation of Al nanoclusters in the AZO under the higher Al sputtering power, which scatter the emitted light outward and hence increase the light extraction (Figure 27). Summary ZnO-related materials have been attracting intensive investigation because of their inherent advantages and potential applications. In this paper, recent progress in some aspects of this active research field related to light-emitting-diode applications has been reviewed, in combination with the work carried out in our laboratory. Great progress continues to be made, but there is still a long way to go before extensive practical applications are realized, and further efforts in the various directions of investigation are needed to reach that goal.
A Second-Order Sufficient Optimality Condition for Risk-Neutral Bi-level Stochastic Linear Programs The expectation functionals, which arise in risk-neutral bi-level stochastic linear models with random lower-level right-hand side, are known to be continuously differentiable if the underlying probability measure has a Lebesgue density. We show that the gradient may fail to be locally Lipschitz continuous under this assumption. Our main result provides sufficient conditions for Lipschitz continuity of the gradient of the expectation functional and paves the way for a second-order optimality condition in terms of generalized Hessians. Moreover, we study geometric properties of regions of strong stability and derive representation results, which may facilitate the computation of gradients. Introduction We study bi-level stochastic linear programs with random right-hand side in the lower-level constraint system. The sequential nature of bi-level programming motivates a setting where the leader decides nonanticipatorily, while the follower can observe the realization of the randomness. A discussion of the related literature is provided in the recent work [1]. A central result of [1] states that evaluating the leader's random outcome by taking the expectation leads to a continuously differentiable functional if the underlying probability measure is absolutely continuous w.r.t. the Lebesgue measure. This allows first-order necessary optimality conditions to be formulated for the risk-neutral model. The main result of the present work provides sufficient conditions, namely boundedness of the support and uniform boundedness of the Lebesgue density of the underlying probability measure, that ensure Lipschitz continuity of the gradient of the expectation functional. Moreover, we show that the assumptions of [1] are too weak to even guarantee local Lipschitz continuity of the gradient. By the main result, second-order necessary and sufficient optimality conditions can be formulated in terms of generalized Hessians. As part of the preparatory work for the proof of the main result, we show in particular that any region of strong stability in the sense of [1, Definition 4.1] is a finite union of polyhedral cones. This representation is of independent interest, as it may facilitate the calculation or estimation of gradients of the expectation functional and thus enhance gradient descent-based approaches. The paper is organized as follows: the model and related results of [1] are discussed in Sect. 2, while the main result and a variation with weaker assumptions are formulated in Sect. 3. Sections 4 and 5 are dedicated to geometric properties of regions of strong stability and of related projections that appear in the representation of the gradient. Results of these sections play an important role in the proof of the main result, which is given in Sect. 6. A second-order sufficient optimality condition is formulated in Sect. 7. The paper concludes with a brief discussion of the results and an outlook in Sect. 8. Model and Notation Consider the optimistic formulation of a parametric bi-level linear program (1), where z ∈ R^s is a parameter and the data comprise a nonempty polyhedron X ⊆ R^n, vectors c ∈ R^n, q ∈ R^m and the lower-level optimal solution set mapping Ψ; the resulting objective function f is real valued and Lipschitz continuous on the polyhedron dom f whenever dom f is nonempty.
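Since the displayed formulations did not survive extraction, the following LaTeX block sketches a generic optimistic bi-level stochastic linear model of the type described; the constraint data W and T are placeholders, and the notation may differ in detail from that of [1].

```latex
% Hedged sketch of the optimistic bi-level stochastic linear model;
% W, T are placeholder constraint data, notation may differ from [1].
\begin{align*}
  \Psi(x,z) &:= \operatorname*{argmin}_{y \in \mathbb{R}^m}
      \bigl\{ d^\top y \;:\; W y \le T x + z \bigr\}
      && \text{(lower level, right-hand side perturbed by } z\text{)}\\
  f(x,z) &:= \min_{y} \bigl\{ c^\top x + q^\top y \;:\; y \in \Psi(x,z) \bigr\}
      && \text{(optimistic upper-level outcome)}\\
  Q_{\mathbb{E}}(x) &:= \mathbb{E}\bigl[ f(x, Z) \bigr],
  \qquad \min_{x \in X} \; Q_{\mathbb{E}}(x)
      && \text{(risk-neutral model)}
\end{align*}
```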
Let Z : Ω → R^s be a random vector on some probability space (Ω, F, P) and denote the induced Borel probability measure by μ_Z = P ∘ Z^{-1} ∈ P(R^s). Furthermore, we introduce the set F_Z. If dom f is nonempty and we impose a suitable moment condition, the expectation functional is well defined and Lipschitz continuous by [1, Lemma 2.4]. In a situation where the parameter z in (1) is given by a realization of the random vector Z that the follower can observe while the leader has to decide x nonanticipatorily, the upper-level outcome can be modeled by F(x). If we assume X ⊆ F_Z and the leader's decision is based on the expectation, we obtain the risk-neutral stochastic program (2). The following is shown in [1]: Q_E is well defined, Lipschitz continuous and continuously differentiable at any x_0 ∈ int F_Z. We shall discuss some key ideas of the proof and introduce the relevant notation. With suitably defined data q̂, d̂ and Â, f admits the representation (3). Remark 2.1 The subsequent analysis does not depend on the specific structure of q̂, d̂ and Â and applies whenever (3) holds with some matrix Â satisfying rank Â = s. As the rows of Â are linearly independent, the set A := { Â_B ∈ R^{s×s} : Â_B is a regular submatrix of Â } of lower-level base matrices is nonempty. A base matrix Â_B ∈ A is optimal for the lower-level problem for a given (x, z) if it is feasible, i.e., the associated basic solution is nonnegative. Furthermore, for any optimal base matrix Â_B ∈ A, there exists a feasible base matrix Â_B' ∈ A satisfying the corresponding optimality relations. Assume dom f ≠ ∅; then, a key concept is the region of strong stability associated with a base matrix Â_B ∈ A*, given by the set on which f coincides with an affine linear mapping. Under the assumptions of Theorem 2.1, the gradient of Q_E admits the representation (4), where D := { q̂_B Â_B^{-1} T : Â_B ∈ A* }, the set-valued aggregation mappings W, W̄ : R^n × D ⇒ R^s are given by (5), and these representations imply continuity of the weight functional M_Δ : R^n → R for any Δ ∈ D. Main Result We shall first show that the assumptions of Theorem 2.1 are too weak to guarantee Lipschitz continuity of ∇Q_E. Example 3.1 Consider a case in which the feasible set of the lower-level problem is compact for all parameters in the polyhedral cone F = {(x, z) ∈ R × R^2 : z_1 ≥ 0, x + z_2 ≥ 0}, which implies that dom f coincides with F for any q̂ ∈ R^4. As the lower-level objective function is constant, any feasible base matrix is optimal for the lower-level problem. Denote the elements of A = A* by Â_1, ..., Â_6, and, for each i, consider the set of parameters for which Â_i is feasible for the lower-level problem. A straightforward calculation identifies these sets; letting q̂_i denote the part of the upper-level objective function associated with Â_i, a further straightforward calculation yields the corresponding expressions. Let the density δ_Z : R^2 → R of Z be given as indicated; it is then easy to see that the required relations hold whenever x ∈ ]0, 1] (Fig. 1). In Fig. 1, the darker square depicts the intersection of supp μ_Z and W(x, Δ_1); this holds for any x ∈ [0, 1], and a simple calculation shows that ∇Q_E is not locally Lipschitz continuous at x = 0. Our main result provides the following sufficient conditions for Lipschitz continuity of ∇Q_E; note that the density in Example 3.1 is not bounded. The proof of Theorem 3.1 requires some preliminary work and will be given in Sect. 6. If the support of μ_Z is unbounded, we still obtain a weaker estimate for the gradients. On the Geometry of Regions of Strong Stability In view of (4) and (5), the gradient ∇Q_E(x) is given by a weighted sum of the probabilities of the sets W(x, Δ) or W̄(x, Δ) for Δ ∈ D.
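The displayed gradient representation (4) is likewise not reproduced above; schematically, and up to the precise definition of the weight functionals M_Δ given in [1], it is a weighted sum over the finite set D, as sketched below.

```latex
% Hedged schematic of the gradient representation described in the text
% (the exact display (4) and the weight functionals M_Delta are in [1];
% signs and transpositions may differ in detail):
\nabla Q_{\mathbb{E}}(x) \;\approx\;
  \sum_{\Delta \in D} M_\Delta(x)\, \Delta^\top,
\qquad
  M_\Delta(x) \;=\; \mu_Z\bigl[\mathcal{W}(x,\Delta)\bigr].
```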
As these sets are defined using regions of strong stability, we shall first study properties of the sets S( B ) witĥ A B ∈ A * . Proof Let (x, z) be an arbitrary point of some region of strong stability S( B ). The n-dimensional kernel of (T , I s ) contains some nonzero element (x 0 , z 0 ), and we have Our main result on the structure of S( B ) is the following: Theorem 4.1 Assume dom f = ∅, then any region of strong stability is a union of at most (s + 1) |A * | polyhedral cones and at most (s + 1) |A * |−1 of these cones have a nonempty interior. Moreover, the multifunction S : A * ⇒ R n × R s is polyhedral, i.e., gph S is a finite union of polyhedra. Before we get to proof of Theorem 4.1, we will establish the following auxiliary result: Proof The inclusion cl W ⊆ W is trivial. Moreover, for any ξ 0 ∈ W = int W and ξ ∈ W the line segment principle (cf. [6, Lemma 2.1.6]) implies [ξ 0 , ξ) ⊆ W and thus ξ ∈ cl W. We are now ready to prove Theorem 4.1. Corollary 4.2 Assume dom f = ∅, then any region of strong stability is star shaped and contains the n-dimensional kernel of (T , I s ). Proof Radial convexity is an immediate consequence of Theorem 4.1, as any region of strong stability contains the line segments from the origin to any feasible point. The second statement directly follows from Proposition 4.1. Two-stage stochastic programming can be understood as the special case of bi-level stochastic programming where the objectives of leader and follower coincide. In this case, any region of strong stability is a polyhedral cone and thus convex: Proposition 4.2 Assume dom f = ∅ andq = αd for some α > 0. Then, any region of strong stability is a polyhedral cone. Proof We shall use the notation of the proof of Theorem 4.1 and denote the part ofd associated with i byd i . Fix any (x, z) ∈ F and consider any base matriceŝ A i , j ∈ A * that are feasible and thus optimal for the lower-level problem. Aŝ both base matrices are also optimal with respect to the upper-level objective function. Thus, S( i ) coincides with the polyhedral cone Θ i . = (0, 0, 0, 0) holds in Example 3.1, we see the assumptionq = αd for some α ∈ R in Proposition 4.2 cannot be replaced with the weaker condition that {q,d} is linearly dependent. Properties of the Aggregation Mappings We shall now study the aggregation mappings W and W defined in Sect. 2. The following result is the counterpart of Theorem 4.1: Δ) is a finite union of polyhedra for any (x, Δ) ∈ R n × D. The proof of Theorem 5.1 will be based on the following auxiliary result: C 1 , . . . , C l ⊆ R k be closed and convex. Then, Proof As the sets C 1 , . . . , C l are closed and the interior of a union is contained in the union of the interiors, we have i=1,...,l: where the first equality is due to the fact that the closure of a union equals the union of the closures and the second equation is a direct consequence of the line segment principle. Thus, For the reverse inclusion, suppose that there is some By definition, there are sequences {x n } n∈N ⊂ R k and { n } n∈N ⊂ R >0 satisfying x n → x and B n (x n ) ⊆ i=1,...,l C i for all n ∈ N. As i=1,...,l: int C i =∅ C i is closed, there exists some N ∈ N such that x n / ∈ i=1,...,l: int C i =∅ C i for all n ≥ N . Together with the previous considerations, the strong separation theorem (cf. [9, Theorem 11.4]) yields the existence of some δ N ∈ (0, N ] such that As any C i with int C i = ∅ is contained in an affine subspace of dimension strictly smaller than k (cf. 
[2, Section 2.5.2]), we obtain the contradiction which completes the proof. Corollary 5.1 Let C ⊆ R k be a finite union of polyhedra (polyhedral cones). Then, cl int C is a finite union of polyhedra (polyhedral cones). Proof The above statement is an immediate consequence of Lemma 5.1. Proof (Proof of Theorem 5.1) As D is finite, it is sufficient to consider the multifunctions W(·, Δ) : R n ⇒ R s for fixed Δ ∈ D. We have which is a finite union of polyhedra by Corollary 5.1. Similarly, W(x, Δ) admits the representation By Theorem 4.1 and Corollary 5.1, the set is the intersection of a finite union of polyhedral cones and the affine subspace {(x , z ) ∈ R n × R s : x = x} and thus a finite union of polyhedral cones for any x ∈ R n and any B ∈ A * . The following result on W is a simple consequence of the fact that the constraint system describing a region of strong stability only imposes conditions on (T x + z). Proof Fix any x, x ∈ R n , z ∈ R s and set z = z + T (x − x ), then T x + z = T x + z and thus Similarly, for any B ∈ A * , (x, z) ∈ S( B ) holds if and only if 1. there exists some y ∈ R m such that holds for any Δ ∈ D, which completes the proof. Proof of the Main Result We are finally ready to prove Theorem 3.1 based on the results of Sects. 4 and 5 as well as the two following auxiliary results: Lemma 6.1 Assume dom f = ∅, and let μ Z ∈ P(R s ) be absolutely continuous w.r.t. the Lebesgue measure, then holds for any x ∈ R n , Δ ∈ D and t ∈ R s . Proof By the arguments used in the proof of [1, Lemma 4.2], we have where N x ⊂ R s is contained in a finite union of hyperplanes. Consequently, holds for any fixed Δ ∈ D. As both and there exists a finite upper bound α ∈ R for the Lebesgue density of μ Z , we have where λ s denotes the s-dimensional Lebesgue measure. By Theorem 5. by Cavalieri's principle, which completes the proof. Proof (Proof of Theorem 3.1) Continuous differentiability on int F Z is a direct consequence of [1,Corollary 4.7]. Fix any x, x ∈ int F Z ; then, (4) and Lemma 6.2 yield and thus the desired Lipschitz continuity. Proof (Proof of Theorem 3.2) Fix any κ > 0. As μ Z is tight by [3, Theorem 1.3], there exists a compact set C(κ) ⊂ R s such that μ Z [R s \ C(κ)] < κ. Combining this with the estimate from the first part of the proof of Lemma 6.2 and using the same notation established therein, we see that holds for any Δ ∈ D. Thus, We therefore have and choosing κ = 2|D| yields the desired estimate. denote the set of points at which ∇Q E is differentiable, then generalized Clarke's Hessian of Q E at some x ∈ int F Z is the nonempty, convex and compact set Let the feasible set of (2) be given by X = {x ∈ R n : Bx ≤ b} with some B ∈ R k×n and b ∈ R k . The following second-order sufficient condition is based on [7]: Theorem 7.1 Assume dom f = ∅, X ⊆ int F Z and let μ Z be absolutely continuous w.r.t. the Lebesgue measure and have a bounded support as well as a uniformly bounded density. Moreover, let (x,ū) be a KKT point of (2), i.e., and assume that any H ∈ ∂ 2 Q E (x) is positive definite on h ∈ R n : e i Bh = 0 ∀i :ū i > 0 e j Bh ≤ 0 ∀ j :ū j = e j Bx = 0 . Then,x is a strict local minimizer with order 2 of (2), i.e., there exist a neighborhood U ofx and a constant L > 0 such that holds for any x ∈ X ∩ U . Proof This is a straightforward conclusion from [7, Theorem 1]. Remark 7.1 There are various other approaches for optimization problems with data in the class C 1,1 , which consists of differentiable functions with locally Lipschitzian gradients. 
For instance, second-order optimality conditions can also be formulated based on Dini (cf. [5,Section 4.4]) or Riemann (cf. [8]) derivatives. Conclusions We have derived sufficient conditions for Lipschitz continuity of the gradient of the expectation functional arising from a bi-level stochastic linear program with random right-hand side in the lower-level constraint system. Invoking the structure of the upper level constraints, we used this result to formulate a second-order sufficient optimality condition for the risk-neutral bi-level stochastic program in terms of the generalized Hessian of Q E . Moreover, the main result on the geometry of regions of strong stability and its counterpart for the aggregation mapping W may facilitate the computation or sample-based estimation of gradients of the expectation functional, which enhances gradient descent-based methods. As any region of strong stability is a finite union of polyhedral cones, a promising approach is to employ spherical radial decomposition techniques to calculate ∇Q E (cf. [4,Chapter 4]). The details are beyond the scope of this paper but shall be addressed in future research.
Comparison of Galdieria growth and photosynthetic activity in different culture systems In recent years, the acidothermophilic red microalga Galdieria sulphuraria has been increasingly studied for industrial applications such as wastewater treatment, recovery of rare earth elements, and production of phycobilins. However, industrial cultivation of this organism is still not possible, because biotechnological research on G. sulphuraria and allied species is relatively recent and fragmented. With a possible scale-up for commercial applications in mind, we compared the growth and photosynthetic performance of G. sulphuraria in four suspended systems (inclined bubble column, decanter laboratory flask, tubular bioreactor, ultra-flat plate bioreactor) and one immobilized system (Twin Layer System). The results showed that G. sulphuraria had the highest growth, productivity and photosynthetic performance when grown on the immobilized system, which also offers some economic advantages. Introduction Cyanidiophyceae are a class of red microalgae living in extreme environments (Albertano et al. 2000; Pinto et al. 2003; Yoon et al. 2004). They thrive predominantly in geothermal volcanic areas at temperatures around 40 °C and at high sulfuric acid concentrations, with ambient pH values between 1 and 3 (Albertano et al. 2000; Pinto et al. 2007; Toplin et al. 2008; Castenholz and McDermott 2010; Ciniglia et al. 2014; Ciniglia et al. 2017). These extreme environmental conditions strongly limit the contaminations that are prevalent in open microalgal mass cultivation systems. In consequence, these organisms are of considerable interest for commercial applications (Carfagna et al. 2018; Carbone et al. 2019). Indeed, Galdieria has been the subject of several studies in algal biotechnology. It has been used for wastewater treatment (Ju et al. 2016; Henkanette-Gedera et al. 2016; da Silva et al. 2016; Carbone et al. 2018; Galasso et al. 2019; Alalwan et al. 2019; Sosa-Hernández et al. 2019) and for recovery of rare earth elements (Minoda et al. 2015). Moreover, this organism produces high levels of phycobiliproteins, which are used in diverse medical and cosmetic products (Schmidt et al. 2005; Graverholt and Eriksen 2007; Sørensen et al. 2013; Eriksen 2018), as well as different compounds with antioxidant properties (Carfagna et al. 2016). However, biotechnological research on Galdieria is relatively recent; the data on the growth of this microalga are still fragmentary, and industrial cultivation of this organism is not yet possible. Therefore, with a possible scale-up and commercial applications of G. sulphuraria in mind, in this paper the growth and photosynthetic performance of this microalga were systematically compared in five different types of cultivation system (one immobilized and four suspended) under the same conditions of temperature and irradiance. Algal strain and stock cultures Galdieria sulphuraria strain 064 from the ACUF collection (D'Elia et al. 2018; http://www.acuf.net) was chosen. The stock culture was cultivated in Galdieria medium (Gross and Schnarrenberger 1995) acidified with sulfuric acid to pH 1.5. Stock cultures were grown in 1 l Erlenmeyer flasks and exposed to an adaptive light intensity of 30 μmol photons m−2 s−1 with a light/dark cycle of 14/10 h. The temperature was 35 °C. Analysis of growth We consider several parameters to analyse growth. These parameters are variables depending on time, the only independent variable.
Some dependent variables, denoted by the term "specific", are normalized by dividing by the initial values in order to take the different inocula into account (conversely, the non-normalized variables can be obtained by multiplying the normalized ones by the initial values). We explicitly observe that normalization is necessary because the Twin Layer System needs inoculum concentrations very different from those used for the suspended systems. The variables considered are: coefficient of determination, specific weight increase, specific light yield, and growth rate. Coefficient of determination The coefficient of determination (r2) is a measure of how close the data are to the regression line. It was used to compare the different bioreactor systems. Specific weight increase (SWI) The specific weight increase (SWI) was used to analyse the trend of growth in the different bioreactors. In the formula defining SWI, w(t) is the dry weight at day t (more exactly, t is the number of the day when the sample is taken and measured) and w(0) is the dry weight at day 0 (g). Specific light yield (SLY) To take into account the light energy necessary for growth, we used the standard light yield and normalized it. In the formula for the specific light yield (SLY) (photons mol−1), SWI(t) is the specific weight increase, A the area of the bioreactor surface exposed to the light (m2), t the number of days, s the number of seconds of illumination per day (in our case 50,400, obtained by multiplying the number of illumination hours, 14, by the number of seconds in an hour, 3600), and pm the number of moles of photosynthetically active photons supplied per second and per square meter (mol photons s−1 m−2) (in our case, pm is the applied PAR, 100, multiplied by 10−6). Growth rate (GR) The growth rate over a time period is expressed as GR (day−1), where Ln is the natural logarithm, w(t+h) is the dry weight at day t+h, w(t) is the dry weight at day t, and h is the number of days between two consecutive measurements (in our case, h is equal to 3). Determination of biomass In the liquid cultivation systems, 2 ml of culture was harvested every three days in triplicate with a sterile syringe for dry-mass determination and then filtered onto a polycarbonate disc using a vacuum pump. In the Twin Layer System, the polycarbonate discs were taken off the bioreactor and the biomass in the inoculated area was considered, while the rest was scraped off. All samples were lyophilized in a freeze dryer for two hours and weighed on an analytical balance (Sartorius, Bovenden, Germany). Analysis of the photosynthetic state of microalgae Pigment concentration: microalgae were harvested and lyophilised, then mixed with quartz sand to obtain a homogeneous powder. Photosynthetic pigments were extracted overnight with acetone (Costache et al. 2012). Chlorophyll a and carotenoids were analysed by spectrophotometry (Shimadzu UV-2450) (Tomitani et al. 1999). Pigment concentration Different formulae were considered to compare the photosynthetic state of each culture. The chlorophyll equation used was Chl a = 11.75 × A662 − 2.350 × A645, together with the corresponding equation for total carotenoids, where Chl a is the concentration of chlorophyll a (mg l−1), Carotenoids is the concentration of total carotenoids (mg l−1), and A is the absorbance at the different wavelengths (662, 645, 470 nm) (Costache et al. 2012).
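A minimal Python sketch of the growth and pigment descriptors defined above is given below; since the displayed equations are only described verbally here, the exact normalizations are a reconstruction of those descriptions rather than the authors' original formulae, and the dry weights in the example call are made up.

```python
import numpy as np

def swi(w, w0):
    """Specific weight increase: dry weight at day t over dry weight at day 0."""
    return np.asarray(w) / w0

def sly(swi_t, area_m2, t_days, hours_light=14, par_umol=100):
    """Specific light yield: SWI divided by the photon dose A*t*s*pm, with
    s = seconds of illumination per day, pm = PAR in mol photons m^-2 s^-1."""
    s = hours_light * 3600
    pm = par_umol * 1e-6
    return swi_t / (area_m2 * t_days * s * pm)

def growth_rate(w_t, w_th, h=3):
    """Growth rate GR = ln(w(t+h)/w(t)) / h (day^-1)."""
    return np.log(w_th / w_t) / h

def chl_a(a662, a645):
    """Chlorophyll a (mg/l): Chl a = 11.75*A662 - 2.350*A645 (Costache et al. 2012)."""
    return 11.75 * a662 - 2.350 * a645

# Illustrative use with made-up dry weights (g) and absorbances:
w = [0.40, 0.55, 0.80, 1.20]
print(swi(w, w[0]))
print(f"GR between days 3 and 6: {growth_rate(w[1], w[2]):.3f} day^-1")
print(f"Chl a: {chl_a(0.35, 0.10):.2f} mg/l")
```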
Specific pigment increase The trend of pigment concentration during the growth tests was calculated according to the specific pigment increase, where p(t) is the concentration of the pigment (chlorophyll or carotenoids) at time t (mg ml−1 for the liquid systems and g m−2 for the Twin Layer System) and p(0) is the concentration of the pigment at time 0 (mg ml−1 for the liquid systems and g m−2 for the Twin Layer System), obtained with the previous formulae. Normalized photosynthesis efficiency (NPE) NPE is the efficiency with which solar light energy is captured and stored in biomass and is therefore used to estimate productivity. We normalized the standard formula for photosynthetic efficiency (De Vree et al. 2015) by the dry weight at time 0. In the formula for NPE (g−1), H0C is the standard enthalpy of combustion (22.5 kJ g−1), w(t+h) the biomass dry weight at day t+h (g), w(t) the biomass dry weight at day t (g), w(0) the biomass dry weight at time 0 (g), h the number of days between two consecutive measurements (in our case, h is equal to 3), A the area of the bioreactor surface exposed to the light (m2), s the number of seconds of illumination per day (in our case 50,400, obtained by multiplying the number of illumination hours, 14, by the number of seconds in an hour, 3600), pm the number of moles of photosynthetically active photons supplied per second and per square meter (mol photons s−1 m−2) (in our case, pm is the applied PAR, 100, multiplied by 10−6), N the Avogadro number, and e the approximate energy of a photon of 400 nm wavelength (kJ) (this value is around 4 × 10−22). In this formula, we normalized by w(0) to highlight the relevant differences between the Twin Layer-S and the suspended systems. We acknowledge that it could also be meaningful to normalize by dividing by w(t) to evidence possible differences between consecutive measurements. This variable is linked to the productivity of the bioreactors and represents the efficiency with which solar energy is captured and stored in biomass (De Vree et al. 2015).
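The NPE described above can be reconstructed in the same hedged spirit: the sketch below interprets the (undisplayed) equation as energy fixed in new biomass divided by the light energy supplied over the interval, normalized by w(0); the photon energy at 400 nm and the dry weights in the example call are assumptions.

```python
# Hedged reconstruction of NPE from the verbal description above.
N_A = 6.022e23            # Avogadro number
E_PHOTON_400NM = 4.97e-22 # kJ per 400 nm photon (~ the "4e-22 kJ" quoted)
H0C = 22.5                # kJ/g, standard enthalpy of combustion of biomass

def npe(w_t, w_th, w0, area_m2, h=3, hours_light=14, par_umol=100):
    s = hours_light * 3600            # seconds of illumination per day
    pm = par_umol * 1e-6              # mol photons m^-2 s^-1
    light_energy = area_m2 * h * s * pm * N_A * E_PHOTON_400NM   # kJ supplied
    return H0C * (w_th - w_t) / (w0 * light_energy)              # g^-1

# Illustrative call with made-up dry weights (g) and a 275 cm^2 lit surface:
print(f"NPE ~ {npe(0.08, 0.12, 0.08, 0.0275):.3f} g^-1")
```

With these illustrative numbers the result comes out in the 0.1 g−1 range, i.e. of the same order as the values reported for the bioreactors below.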
Photobioreactors and bottle design set-up The experiments were set up at the same light intensity of 100 μmol photons m−2 s−1 with a light/dark cycle of 14/10 h, in the presence of atmospheric CO2 and at a constant temperature of 35 °C. The systems used for the experiments were four suspension systems and one system in which the cells are immobilized on the photobioreactor (Fig. 1). In the suspension systems the volume of the culture is invariable, because after each 2 ml sampling the water loss was replaced, and growth was influenced only very weakly. At the beginning of the experiment, the culture had an optical density of 0.4 and a dry weight of 0.4 g/l, while in the Twin Layer System the culture had a dry weight of 20 g m−2. • The Twin Layer System (Twin Layer-S) consisted of an immobilized photobioreactor in which the microalgae are inoculated on a polycarbonate disk attached to a hydrophilic substrate by self-adhesion, separating the algal biomass from the bulk of the medium (Nowack et al. 2005; Melkonian and Podola 2010; Li et al. 2017). The algae were placed on the Twin Layer-S only when the liquid culture had reached a sufficient density in suspension (optical density around 0.4). The algae were then harvested by centrifugation for 30 minutes at 2000 rpm (Sorvall RC5C), filtered onto polycarbonate membranes (PC40, 0.4 μm pore size, 25 mm diameter, Whatman, Dassel, Germany) and subsequently attached to the hydrophilic substrate (Fig. 1a). This system was chosen because it reproduces the natural habitat of this species, which generally grows on substrates like soil and rocks (Gross et al. 1998; Ciniglia et al. 2004; Pinto et al. 2007). • The Decanter laboratory flask (Decanter-LF) had a lighted surface area of 10,201 cm2 and was placed on a platform shaker at a speed of 50 rpm. The total volume of the Decanter-LF was 1000 ml and the working volume was 250 ml (Fig. 1b). The Decanter-LF is not a bioreactor and has no air flux, but it was selected because it is the most common system used in Galdieria growth tests (e.g., Iovinella et al. 2020). • The Ultra-flat plate bioreactor (Flat-UPB) had a lighted surface area of 715 cm2 and was composed of three plexiglass panels spaced by two silicone sheets. Four 1 mm orifices at the bottom of the photobioreactor aerated the system with a gas stream. The total volume was 700 ml and the working volume was 400 ml (Fig. 1c). This reactor was chosen because it has a high surface-area-to-volume ratio (Gifuni et al. 2018; Zuccaro et al. 2020). • The Tubular bioreactor (Tubular-B) was a glass column photobioreactor with a lighted surface area of 275 cm2; a glass pipe with a membrane pump equipped with a sterile filter at the bottom of the column aerated the system. The total volume was 350 ml and the working volume was 200 ml (Fig. 1d). • The Inclined bubble column (Inclined Bubble-C): at the bottom of the bioreactor, the gas stream was sparged through multiple orifices of a Teflon tube; the working volume was 400 ml (Fig. 1e). This system was chosen because it has a good ratio between the photic and the dark zone, so that the microalgae are not exposed to an excess of light or darkness (Olivieri et al. 2013). Algal growth in different cultivation systems During the experiment, G. sulphuraria showed differences in growth among the cultivation systems. The slowest growth was observed in the Decanter-LF, where, at the end of the experiment, G. sulphuraria was only at the beginning of the exponential growth phase (Fig. 2; Table 1). In the Inclined Bubble-C, the microalga reached the stationary growth phase on day 27, but the growth performance was lower than that observed in the other bioreactors (Fig. 2; Table 1). The Tubular-B and the Flat-UPB also reached the stationary growth phase on day 27, but showed different behaviour (Fig. 2; Table 1). Indeed, the Flat-UPB showed the highest SWI values among the suspension-based bioreactors (around 6.5), while the SLY values were lower than those in the Tubular-B, in which the maximum value was around 0.752 mol−1 on day 24 (Fig. 2; Table 1). The maximum GR value was similar in the two bioreactors (approximately 140 day−1; Table 1). In the Twin Layer-S, the maximum SWI values were similar to those of the Flat-UPB, while the SLY values were significantly higher during the first 21 days of cultivation than those obtained in the other bioreactors (maximally 1.7 mol−1 on day 6; Fig. 2; Table 1). The values declined only towards the end of the tests, when SLY fell below 1.0 (Fig. 2b). The maximum GR was also higher in the Twin Layer-S than in the other bioreactors (0.222 day−1). By contrast, r2 was the lowest of all the photobioreactors (Fig. 3; Table 1).
Photosynthetic activity Characterization of photosynthetic pigments The photosynthetic pigments were analysed at the same time points as biomass growth; specifically, chlorophyll a and carotenoids were considered. As for biomass growth, the Decanter-LF showed the lowest chlorophyll SP levels (Fig. 4a; Table 2); indeed, SP(t) reached a maximum value of 2 only on the last day of the experiment. In the Inclined Bubble-C, chlorophyll a reached its maximum SP value on day 30 (around 5.4), whereas in the Tubular-B the maximum SP value was observed on day 27 (around 7.5; Fig. 4a; Table 2). Compared with the other suspension-based cultivation systems, the Flat-UPB showed the highest SP level of chlorophyll a (around 10) on day 24, decreasing to around 9 on day 30 (Fig. 4a; Table 2). In the Twin Layer-S, the maximum chlorophyll a SP value was reached on day 21, when it attained a value of 15 (Fig. 4a; Table 2). In all systems, the chlorophyll a percentage was around 0.6% of the total weight (Fig. 5a). The carotenoids showed a trend different from that of chlorophyll a, except in the Decanter-LF, in which the SP values were similar to those of chlorophyll a (around 1) but the maximum percentage value was 0.3% of the total weight (Figs. 4b, 5b; Table 2). The SP values for carotenoids were higher in the Inclined Bubble-C (14) than in the Tubular-B (9), and consequently the maximum percentage value was also higher in the Inclined Bubble-C (0.3% and 0.15%, respectively) (Figs. 4b, 5b; Table 2). In the Flat-UPB, the maximum percentage value was around 0.3% (day 27) and the SP values were higher than in the other suspension-based photobioreactors, reaching a maximum of around 28 on day 27. In the Twin Layer-S, the SP values for carotenoids were lower than in the Flat-UPB (around 20 on day 30) and the maximum percentage value was only 0.1% (Figs. 4b, 5b; Table 2). Normalized photosynthesis efficiency (NPE) When the normalized photosynthetic efficiency was calculated, the Decanter-LF showed the lowest NPE, which never exceeded 0.096 g−1. The Flat-UPB and the Inclined Bubble-C showed similar maximum levels of NPE (0.109 g−1 and 0.094 g−1, respectively). In the Tubular-B, NPE was lower than 0.1 g−1 until day 15; it then increased, with a maximum value of 0.188 g−1 (Fig. 6). In contrast, the Twin Layer-S showed higher NPE values during the first nine days of the experiment, with a maximum of 0.208 g−1 on day 6; the value then decreased to about 0.1 g−1 (Fig. 6). Discussion For the experiment, a light intensity of 100 μmol photons m−2 s−1 was chosen because G. sulphuraria generally grows at low light intensities in its natural environment (Pinto et al. 2007; Eren et al. 2018) and has shown promising results at this intensity in both liquid and immobilized cultivation systems, with respect both to its physiology and to applications in biotechnology (e.g. Sano et al. 2001; Oesterhelt et al. 2007; Carbone et al. 2020). Regarding the latter, exposure to this light intensity leads to an increase in phycobiliprotein production: Carbone et al. (2020), for example, showed in an experiment with a Twin Layer-S at different light intensities that 100 μmol photons m−2 s−1 was the optimal light intensity for phycobiliprotein production, and Hirooka and Miyagishima (2016) obtained good phycocyanin production at this light intensity in a suspended cultivation system using hot spring water supplemented with NH4+ as culture medium.
Comparing growth, productivity and photosynthetic performance, the Decanter-LF showed the lowest levels of biomass growth and photosynthetic activity, despite being the most common system used for G. sulphuraria growth (Carfagna et al. 2018). Indeed, it was placed on a plate shaker, and the absence of bubbling did not allow good mixing of the culture for gas exchange, although the Decanter ensures good mixing of nutrients around each cell surface (Rodriguez-Maroto et al. 2005; Mata et al. 2010). In the literature, better performances are commonly reported for microalgae in the Inclined Bubble-C: Olivieri et al. (2011) showed that the green alga Stichococcus bacillaris grows better in the Inclined Bubble-C than in the Tubular-B, and De Vree et al. (2015) reported that Nannochloropsis sp. achieved higher biomass concentrations and enhanced photosynthetic performance in a flat-panel cultivation system very similar to the Flat-UPB compared with other cultivation systems, including a Tubular-B. Moreover, a number of studies found that very high biomass levels were obtained with different microalgal genera, such as Nannochloropsis, Chlorococcum, Scenedesmus and Arthrospira, in Flat-UPB systems (Zhang et al. 2002; Koller et al. 2018; Hu et al. 1998; De Vree et al. 2015; Safafar et al. 2016; Tredici and Zitelli 1997). In our experiments, G. sulphuraria performed better in the Tubular-B than in the other suspended cultivation systems tested; these differences are probably linked to the particular physiology of this microalga. Indeed, G. sulphuraria is an extremophile that can survive in the dark for up to five months (Gross et al. 1998), achieving very high biomass densities under heterotrophic conditions (Gross and Schnarrenberger 1995; Graverholt and Eriksen 2007; Eriksen 2018; Sloth et al. 2006). Generally, heterotrophy is not typical for red algae, and presumably this is a strategy of G. sulphuraria for surviving in extreme environments (Gross et al. 1998; Gaignard et al. 2019). Therefore, the high illumination area of the Flat-UPB and the high radial macroscopic circulation of the Inclined Bubble-C represent a drawback for an organism that lives in cryptoendolithic conditions, in which light is scarce or absent for days (Thangaraj et al. 2011; Gross et al. 1998; Janssen et al. 2003). The Tubular-B has a low radial macroscopic circulation that causes a shading effect: the outer microalgal biomass captures most of the incident light, creating a low-light environment for the inner cells of the suspension (González-Camejo et al. 2019; Hu et al. 1998; Kiperstok et al. 2017; Zuccaro et al. 2020; Carbone et al. 2019). In this way, a condition similar to the endolithic state is generated. Whereas the Inclined Bubble-C displayed lower growth and photosynthetic performance than the Tubular-B, the Flat-UPB had growth performance similar to, but photosynthetic activity lower than, the Tubular-B. The Tubular-B and the Flat-UPB had high chlorophyll contents, and, as reported in the literature, this is directly connected to the photochemical performance of PSII and, as a consequence, to photosynthetic activity, and indirectly also to growth performance (Schreiber et al. 1998; Zuccaro et al. 2019). However, algae grown in the Flat-UPB revealed higher percentage levels of carotenoids compared with those grown in the Tubular-B, indicating a stressful condition for the alga.
Indeed, carotenoids perform an essential photoprotective role by quenching triplet-state chlorophyll molecules, scavenging toxic oxygen species formed during light stress, and dissipating harmful excess excitation energy under light stress (Pisal and Lele 2005; Galasso et al. 2017; Takaichi 2011; González-Fernández et al. 2012; Sosa-Hernández et al. 2019; Sun et al. 2016). Moreover, despite the good growth performance of the Flat-UPB, its productivity is lower than that in the Tubular-B. Indeed, normalized photosynthetic efficiency is lower in the Flat-UPB. Although the Tubular-B seems to be the best of the different suspended cultivation systems tested, the results obtained in this system are not comparable with the Twin Layer-S, in which G. sulphuraria exhibited the best growth, photosynthetic performance and productivity. This result is not surprising: in natural environments, these microalgae generally live attached to substrates like soil or rocks, and the Twin Layer-S partly reproduces conditions similar to the natural habitat of this species (Melkonian and Podola 2010; Moreno Osorio 2018). Moreover, in the Twin Layer-S the lower cell layers of the biofilm are permanently shaded by the upper cell layers owing to the immobilization of the cells, thus minimizing photoinhibition (Gross et al. 1998; Schultze et al. 2015; Piltz and Melkonian 2018; Langenbach and Melkonian 2019; Kim et al. 2019). Finally, the Twin Layer-S also offers some economic advantages for mass cultures of G. sulphuraria (Carbone et al. 2017a; Podola et al. 2017; Pierobon et al. 2018; Zhuang et al. 2018). Many of the high costs linked to suspended cultivation systems are eliminated: for example, the biomass is harvested directly by scraping, without a preconcentration step, and water consumption and space requirements are lower. Furthermore, the system is modular and thus easily scalable. However, in comparison with submerged photobioreactors, which have been extensively tested and analysed also at pilot and industrial scale, the Twin Layer-S has still to be fully validated at a relevant, demonstrative scale. Thus, while techno-economic analyses of closed photobioreactors are already available in the literature, a representative and meaningful economic analysis of the Twin Layer-S has still to be performed.
5,175.8
2020-09-08T00:00:00.000
[ "Engineering" ]
Two-scale approach to the homogenization of membrane photonic crystals . Wave propagation and diffraction in a membrane photonic crystal with finite height were studied in the case where the free-space wavelength is large with respect to the period of the structure. The photonic crystals studied are made of materials with anisotropic permittivity and permeability. Use of the concept of two-scale convergence allowed the photonic crystals to be homogenized. INTRODUCTION Photonic crystals, i.e. dielectric or metallic artificial periodic structures, are generally thought of as strongly scattering devices, displaying photonic band gaps. However, their actual electromagnetic behavior when the free-space wavelength is large with respect to the period is also interesting, because it can produce strongly anisotropic behaviors, plasmon frequencies, or even negative index materials [1]. The study of the properties of these structures in this asymptotic regime comes under the theory of homogenization [2][3][4]. A lot of results are by now well established both for 2D and 3D structures (even for non periodic structures [5][6][7]). In this paper, we consider a photonic crystal made of a collection of parallel finite-size fibers embedded in a matrix. This covers the case of structures made out of a layer of bulk materials in which holes are made periodically (membrane photonic crystal) but also the case of structures made out of nanopillars (pillar photonic crystal [8,9]). The fibers are not supposed to be invariant in the direction of their axis (for instance they can be cone-shaped, see fig. 1). Our point is to derive the effective permittivity and permeability tensors of this structure when the ratio between the period of the structure and the free-space wavelength of the incident field is very small. We had already derived rigorous results for infinitely long cylindrical fibers [3], for which explicit formulas can be derived in some cases [2,[10][11][12]. Here we shall get homogenized permeability and permittivity tensors with a dependence along the axis of the fibers. Let us note that our results hold for dispersive and lossy materials (it applies to imperfect metals as well as to good insulators). Description of the structure and methodology We consider a 2D photonic crystal such as that in fig. 1. It is constructed from a basic cell Y pictured in fig. 2 . A contraction ratio η is applied to obtain a contracted cell in the horizontal directions ( Y η = η 2 Y × (−L, L) ). In the units of the free-space wavelength, the period of the lattice is thus η. The cells are contained in a cylindrical domain Ω ( Ω=O×(−L, L)) (cf. fig.1). The domain Ω is thus periodically filled with contracted cells. The space coordinates are denoted by x =(x 1 ,x 2 ,x 3 ) and we also The coordinates in Y are denoted by y =( y 1 ,y 2 ). We consider time harmonic fields, the time dependence is chosen to be exp (−iωt). For a given monochromatic incident field E i , H i , we denote by (E η , H η ) the total electromagnetic field. Our aim is to pass to the limit η → 0 and determine the limit of the couple (E η , H η ). In our methodology, we get at the limit a true electromagnetic scattering problem, for a given free-space wavelength λ and a bounded obstacle Ω characterized by some permittivity and permeability tensors. 
This situation is quite different from other homogenization techniques, making use of periodization conditions, in which the frequency tends to zero, thus not leading to a diffraction problem but rather to an electrostatic one. Such an approach can sometimes give useful explicit formulas but generally leads to complicated formulations [10-12, 20, 21]. Moreover, it does not handle the boundary effects which in some cases may lead to some miscomprehensions [13]. The relative permittivity tensor ε η (x) and relative permeability tensor where at fixed x 3 , the applications y → ε 0 (y,x 3 )= ε 0 ij (y,x 3 ) and y → µ 0 (y, The total electromagnetic field (E η , H η ) satisfies and where Z 0 is the impedance of vacuum. A short account of the two-scale expansion In order to describe this problem, we will rely on a two-scale expansion of the fields. That way, the physical problem is described by two variables: a macroscopic one x and a microscopic one y representing the rapid variations of the material at the scale of one basic cell, measured by η. By noticing that there are no rapid variations in the vertical direction x 3 , the microscopic variable is set to be: y = x ⊥ /η. Differential operators with respect to variable y are denoted with a subscript y. The fields are periodic with respect to that microscopic variable (this corresponds to the neighborhood of the center of the first Brillouin zone). The limit problem obtained by letting η tend to 0, will then depend on the macroscopic, physical, variable x but also on the microscopic, hidden, variable y. The total limit fields will read E 0 (x, y) and H 0 (x, y) and the observable, physical, fields will be given by the mean value over the hidden variable y: where |Y | is the measure area of Y . In order to lighten the notations, we denote by brackets the averaging over Y , hence H(x)= H 0 and E(x)= E 0 . The main mathematical tool that we use is a mathematically rigorous version of the multiscale expansion, widely used in various areas of physics. More precisely, for a vector field F η in L 2 (Ω) 3 , we say, by definition, that F η two-scale converges towards F 0 if for every sufficiently regular function φ (x, y), Y -periodic with respect to y,wehave: as η tends to 0. The limit field F 0 is square integrable with respect to both variables x and y and is Yperiodic in the y variable (this is the definition of the space analysis of this new mathematical tool can be found in [14]. We make the physically reasonable assumption that the electromagnetic energy remains bounded when η tends to 0, which is equivalent to assume that (E η , H η ) are both locally square integrable. Then it is known [14] that (E η , H η ) two-scale converges towards limit fields E 0 , H 0 . This physical assumption could be justified mathematically, however it seems quite obvious, from the point of view of physics, that the limit fields exist. The point is then to give the system of equations that is satisfied by these fields and to derive the effective permittivity and permeability tensors. The equations at the microscopic scale First of all, we have to determine the set of equations that are microscopically satisfied, that is, satisfied with respect to the hidden variable y, for that will give the constitutive relations of the homogenized medium. 
Multiplying Maxwell-Faraday equation by a regular test function φ x, x ⊥ η , and integrating over Ω, we obtain: Multiplying by η and letting η tend to 0, we get using (4): This is equivalent to: which is nothing else but the variational form for: curl y E 0 =0. In a very similar way, but using now Maxwell-Ampere equation, we get curl y H 0 =0 . On the other hand, since ε η E η is divergence free, we have, for every test function φ(x, y), Multiplying by η and letting η tend to 0, we get: which can be written as (notice that the div y operator acts only on the transverse components): Similarly, since the magnetic field is divergence free, we derive: Summing up, we have the following microscopic equations, holding inside the basic cell Y : Derivation of the homogenized parameters The systems in (11) are respectively of electrostatic and magnetostatic types. This means that, with respect to the hidden variable y, the electric field and magnetic field satisfy the static Maxwell system. This property is true only at that scale and not at the macroscopic scale. However, it is these static equations that determine the effective permittivity and permeability. Indeed let us analyze this system starting with the electric field. From the curl relation, we get ∇ y E 0 3 =0 , and so E 0 3 (x, y) ≡ E 3 (x). Besides, the basic cell having the geometry of a flat torus, we get the existence of a regular periodic function w E (y) such that: The function w E is the electrostatic potential associated with the microscopic electrostatic problem. Inserting (9) in equation (12) and projecting on both horizontal axis, we obtain: where ε 0 ⊥ denotes the 2 × 2 matrix extracted from ε 0 by removing the last line and last column. By linearity, denoting E ⊥ =( E 1 ,E 2 ), we derive that the potential w E is given by where w E,i are the periodic solutions of (13). We stress that these potentials depend upon y but also on x 3 . In fact, we get a family of homogenization problems parametrized by the vertical coordinate. By (12) we obtain: where: The magnetic field H 0 can be handled in the same way since it satisfies exactly the same kind of equations as H 0 (see (11)). In particular, we may represent its tranversal component in the form: where w H is a periodic magnetic potential (the possibility of this representation is due to the curl-free condition which means that no microscopic current is present). Analogously as in (14,15), we find: where where: div y µ 0 and µ 0 ⊥ denotes the 2×2 matrix extracted from µ 0 by removing the last line and last column. Of course, the same remark as in the case of electric potentials holds: the functions w H,i depend on the vertical coordinate x 3 . The above results show that, at the microscopic scale, the limit fields (E 0 , H 0 ) are completely determined by the physical fields (E, H). Now that the microscopic behavior is precised, we are able to determine the macroscopic system satisfied by (E, H).T o that aim, let us choose a regular test function φ (x) independent of variable y. From Maxwell equations we get passing to the limit η → 0, we get: Recalling that E 0 = E and that H 0 = H, we get: which, taking into account (14,16), brings to the limit system: The special case of a 1D grating Let us specialize the above results to the case of a one dimensional grating ( fig. 3) that is, the pillars are invariant in the x 2 and x 3 directions (the basic cell Y is depicted in fig. 3). 
We also assume that the pillars are made of a non-magnetic material and that the relative permittivity tensor is given by: The invariance of ε 0 with respect to y 2 and the periodicity condition suggest that we look for solutions that are y 2 -independent. Let us first consider w E,2 , which satisfies ∂ y1 (ε 1 ∂ y1 w E,2 ) = 0. This implies that w E,2 = const. Next, we turn to w E,1 . It satisfies ∂ y1 [ε 1 (1 + ∂ y1 w E,1 )] = 0, therefore ε 1 (1 + ∂ y1 w E,1 ) = const. = C. Averaging this relation over the cell, we get C ⟨1/ε 1 ⟩ = ⟨1 + ∂ y1 w E,1 ⟩. Due to the periodicity of w E,1 , we have ⟨∂ y1 w E,1 ⟩ = 0 and finally C = ⟨1/ε 1 ⟩ −1 . The homogenized relative permittivity tensor is given by: We retrieve a well-established result concerning the homogenization of 1D photonic crystals [2]. DISCUSSION The homogenized permeability and permittivity tensors are respectively µ 0 M and ε 0 E . Both are matrix functions of the coordinate x 3 . In the case of cylindrical fibers (i.e. invariant along their axis), we would find that the effective tensors coincide with those obtained in the polarized cases [2,3,15]. In fact, because the permittivity and permeability tensors are not renormalized with respect to η, the homogenization process is purely local, that is, the effective constitutive relations are local ones. However, we emphasize that the locality is lost if we change the scale of the permittivity or permeability coefficients in the obstacles. In particular, the results obtained in the case of infinite conductivities in the polarized case [3] cannot be transposed to the case of fibers with finite length, due to the emergence of surprising non-local effects which are studied in [16][17][18]. In this context, another problem is interesting: the study of the situation where the height L of the pillar is small with respect to the free-space wavelength. An asymptotic study can be performed in that case, leading to a simplified formulation of the diffraction problem [19]. We also note that an approach relying on explicit calculations, for instance using Bloch-wave theory or Fourier-Bessel expansions, cannot work here, due to the lack of an explicit representation of the fields in the case of finite-size fibers. The method used here proves to be particularly efficient: the results are obtained at the cost of very few calculations. Besides, it gives an interesting physical picture of the homogenization process in terms of hidden variables: the homogenized parameters are obtained through clearly defined electrostatic and magnetostatic problems with respect to these variables. It should also be noted that the case of dielectric materials with losses is handled by our result. This result can be straightforwardly applied to the study of membrane photonic crystals in the low frequency range where phenomena of birefringence and dichroism are obtained [20,21]. In our homogenization result, it is clear that the main numerical problem is the solution of the annex problems (13, 18), for they give the effective matrices E and M. In certain simple cases, for instance that of circular isotropic non-magnetic rods and a permittivity that is constant in each connected region, it is possible to find an explicit expression for the effective permittivity (it is in fact a very old problem). However, for more complicated geometries, there is a general numerical procedure based on fictitious sources that allows both annex problems to be solved at a low numerical cost.
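The explicit 1D (lamellar) case recovered above is also a convenient numerical check. The short sketch below (Python) evaluates the homogenized permittivity tensor of a two-material grating, with the component across the layers given by the harmonic mean ⟨1/ε⟩−1 derived above and the components along the invariant directions given by the arithmetic mean ⟨ε⟩; the material values and filling fraction are arbitrary illustrative inputs, not taken from this paper.

import numpy as np

def homogenized_1d_grating(eps1, eps2, f):
    """Effective permittivity tensor of a lamellar (1D) grating.
    eps1, eps2: (possibly complex) permittivities of the two slabs;
    f: filling fraction of material 1; x1 is the direction of periodicity."""
    eps_mean = f * eps1 + (1.0 - f) * eps2           # arithmetic mean <eps>
    eps_harm = 1.0 / (f / eps1 + (1.0 - f) / eps2)   # harmonic mean <1/eps>^-1
    return np.array([eps_harm, eps_mean, eps_mean], dtype=complex)

# Illustration only: a lossless dielectric grating, half filled
eps_xx, eps_yy, eps_zz = homogenized_1d_grating(12.0, 1.0, 0.5)
print("across the layers:", eps_xx.real)     # <1/eps>^-1, about 1.85
print("along the layers :", eps_yy.real)     # <eps> = 6.5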
It should also be stressed that if the wavelength is not very large with respect to the period, the device may still be homogenizable, but then the evanescent fields play an important role [22], which can make the effective parameters depend on the environment of the device.
3,434.2
2008-01-01T00:00:00.000
[ "Physics" ]
TL-Moments for Type-I Censored Data with an Application to the Weibull Distribution This paper aims to provide an adaptation of the trimmed L (TL)-moments method to censored data. The present study concentrates on Type-I censored data. The idea of using TL-moments with censored data may seem conflicting. However, our perspective is that we can use data censored from one side and trimmed from the other side. This study is applied to estimate the two unknown parameters of the Weibull distribution. The suggested point is compared with direct L-moments and maximum likelihood (ML) methods. A Monte Carlo simulation study is carried out to compare these methods in terms of estimate average, root of mean square error (RMSE), and relative absolute biases (RABs). Introduction In the past fifty years, there has been great attention given to the use of unconventional estimation methods in the theory of estimation in addition to the classical methods.Classical estimation methods (e.g., method of moments, method of least squares, and maximum likelihood method) work well in cases where the distribution is exponential.However, in some applications, the data may contain some extreme observations, which can greatly influence the values of the estimator.Therefore, if there is a concern about outliers, one should use a robust method of estimation that has been developed to reduce the influence of outliers on the final estimates.Using a robust estimation techniques for estimating unknown parameters has great importance for investigators in many fields, such as in industrial, medical, and occasionally in business applications.In recent decades, much of the work on dealing with outliers has been focused on robust estimation methods (e.g., [1]). The L-moments method has been noticed as an appealing alternative to the conventional moments method [2].To avoid the effect of outliers, Elamir and Seheult [3] introduced an alternative robust approach of L-moments which they called trimmed L-moments (TL-moments).TL-moments have some advantages over L-moments and the method of moments.TL-moments exist whether or not the mean exists (e.g., the Cauchy distribution), and they are more robust to the presence of outliers. The idea of TL-moments is that the expected value E(X r−k:r ) is replaced with the expected value E(X r+t 1 −k:r+t 1 +t 2 ).Thus, for each r, we increase the sample size of a random sample from the original r to r + t 1 + t 2 , working only with the expected values of these r modified order statistics X t 1 +1:r+t 1 +t 2 , X t 1 +2:r+t 1 +t 2 , ..., X t 1 +r:r+t 1 +t 2 by trimming the smallest t 1 and largest t 2 from the conceptual random sample.This modification is called the rth trimmed L-moment (TL-moment) and marked as λ (t 1 ,t 2 ) r . The TL-moment of the rth order of the random variable X is defined as: The expectation of the order statistics are given by: Its basic idea for the method of expectation is to take the expected values of some functions of the random variable of interest and extend them to a sample and equate the corresponding results and solve for the unknown parameters. 
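The definitions just referred to can be made concrete numerically. In the standard Elamir–Seheult formulation, the rth TL-moment with trims (t1, t2) is a signed combination of expectations of order statistics from a conceptual sample of size r + t1 + t2, and each expectation E(X j:m) is an integral of the quantile function q(u) against a beta-type weight. The sketch below (Python) evaluates population TL-moments this way by numerical quadrature; the Weibull quantile q(u) = b(−ln(1 − u))^(1/a), with shape a and scale b, is an assumed parameterization used only for illustration.

import numpy as np
from math import comb, factorial, gamma
from scipy.integrate import quad

def order_stat_mean(q, j, m):
    """E(X j:m) for a distribution with quantile function q(u)."""
    c = factorial(m) / (factorial(j - 1) * factorial(m - j))
    return c * quad(lambda u: q(u) * u**(j - 1) * (1.0 - u)**(m - j), 0.0, 1.0)[0]

def tl_moment(q, r, t1=0, t2=0):
    """Population TL-moment lambda_r^(t1,t2), Elamir-Seheult form:
    (1/r) * sum_k (-1)^k C(r-1,k) E(X_{r+t1-k : r+t1+t2})."""
    m = r + t1 + t2
    s = sum((-1)**k * comb(r - 1, k) * order_stat_mean(q, r + t1 - k, m)
            for k in range(r))
    return s / r

# Assumed Weibull quantile: shape a, scale b (illustration values only)
a, b = 2.0, 4.0
q_weibull = lambda u: b * (-np.log(1.0 - u))**(1.0 / a)

print(tl_moment(q_weibull, 1))          # untrimmed lambda_1 = mean
print(b * gamma(1.0 + 1.0 / a))         # analytic Weibull mean, for comparison
print(tl_moment(q_weibull, 1, 1, 1))    # first TL(1,1)-moment
print(tl_moment(q_weibull, 2, 1, 1))    # second TL(1,1)-moment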
This paper is concerned with comparing the performance of three estimating methods-namely, TL-moments, direct L-moments, and maximum likelihood (ML)-with Type-I censored data.It is straightforward to adapt the methods for Type-II censored data.This study is applied to the estimation of the two unknown parameters of the Weibull distribution by a quantile function that takes the form: This article is organized as follows: TL-moments for censored data, in the general case, are introduced in Section 2. TL-moments for the Weibull distribution are presented in Section 3. A simulation study and real data analysis are presented in Sections 4 and 5, respectively.Concluding remarks are presented in Section 6. TL-Moments for Censored Data For the analysis of censored samples, Wang [4][5][6] introduced the concept of partial probability-weighted moments (PPWMs).Hosking [7] defined two variants of L-moments, which he used with right-censored data.Zafirakou-Koulouris et al. [8] extended the applicability of L-moments to left-censored data.Mahmoud et al. [9] introduced two variants of what they termed the method of direct L-moments, and used them of right-and left-censored data from the Kumaraswamy distribution. The aim of this section is to introduce an adaptation of the TL-moments method to censored data.In fact, the idea of using TL-moments with censored data may seem to be conflicting, but the idea is that we may use data censored from one side and trimmed from the other side. Right Censoring for Left Trim Let x 1 , x 2 , ..., x n be a Type-I censored random sample of size n from a population with distribution function F(x) and quantile function q(u).From the formula of TL-moments (1), we know that TL-moments are defined as: We suppose a left trim t 1 (i.e., t 2 = 0).From Formula (3), we get In this case, let the censoring time T satisfy F(T) = c and c be the fraction of observed data.The random sample takes the form x t 1 +1 , x t 1 +2 , ..., x n . TL-Moments for Right Censoring (Type-AT) The quantile function of Type-AT TL-moments is ( Substitution into Equation ( 4) leads to the Type-AT TL-moments where: When we suppose that the value of the smallest trim is equal to one (i.e., t 1 = 1), from (6), we get: In this case, the first four Type-AT TL-moments are given by the following: When we suppose that the value of the smallest trim is equal to two (i.e., t 1 = 2), from (6), we get: Substituting r = 1, 2, 3, 4 in Equation ( 9), we get the first four Type-AT TL-moments: Using the method of expectations, Type-AT TL-moments estimators are given by: 2.1.2.TL-Moments for Right Censoring (Type-BT) The quantile function of Type-BT TL-moments is Substitution into the formula of left trimming in (4), the Type-BT TL-moments are given by µ Using the results in (A9), the second integration can be written as: where β c (a, b) is the upper incomplete beta function. 
When we suppose the value of smallest trim is equal to one (i.e., t 1 = 1), from (12), we get: In this case, the first four TL-moments for Type-BT right censoring are calculated as follows: When we suppose that the value of the smallest trim is equal to two (i.e., t 1 = 2), from (12), we get In this case, the first four TL-moments for Type-BT right censoring are calculated as follows: Using the method of expectations, Type-BT TL-moments estimators are given by: Left Censoring for Right Trim Let x 1 , x 2 , ..., x n be a random sample of size n.We suppose right trim t 2 (i.e., t 1 = 0).From Formula (3) we get: In this case, the random sample becomes of the form x 1 , x 2 , ..., x n−t 2 .Type-I left censoring occurs when the observations below censoring time T are censored: Let censoring time T satisfy F(T) = h, where h is the fraction of censored data. TL-Moments for Left Censoring (Type-A T) The quantile function of Type-A T TL-moments is: Substitution into (18) leads to the Type-A T TL-moments where: When we suppose that the value of the largest trim is equal to one (i.e., t 2 = 1), from (19), we get: In this case, the first four TL-moments for Type-A T left censoring are calculated as follows: When we suppose that the value of the largest trim is equal to two (i.e., t 2 = 2), from (19), we get: In this case, the first four TL-moments for Type-A T left censoring are calculated as follows: Using the method of expectations, Type-A T TL-moments estimators are given by: TL-Moments for Left Censoring (Type-B T) The quantile function of Type-B T TL-moments is Substitution into Equation (18) leads to the Type-B T TL-moments where: Using the results in (A8), the first integration can be written as where β z (a, b) is the lower incomplete beta function. When we suppose the value of largest trim is equal to one (i.e., t 2 = 1), from (25), we get: The first four TL-moments for Type-B T left censoring are calculated as follows: When we suppose the value of the largest trim is equal to two (i.e., t 2 = 2), from (25), we get The first four TL-moments for Type-B T left censoring are calculated as follows: Using the method of expectations, Type-B T TL-moments estimators are given by: Application to the Weibull Distribution In this section, the rth population TL-moments for the Weibull distribution are introduced. Right Censoring with Left Trim • Type-AT; t 1 = 1 From Equation ( 6), the rth population Type-AT TL-moments for Type-I right censoring for the Weibull distribution are: By taking that the value of smallest trim is equal to one (t 1 = 1), from (7) we get: Substituting r = 1, 2 in Equation (8a), the first two Type-AT TL-moments for Type-I right censoring with left trim for the Weibull distribution will be: Putting z = − log(1 − u), this equation becomes: Using the results in (A3), this equation can be written as: where γ(c, b) is the lower incomplete gamma function. 
Similarly, from Equation (8b), we can also obtain the second Type-AT TL-moments, when t 1 = 1, for Type-I right censoring for the Weibull distribution as follows: When we suppose that the value of the smallest trim is equal to two (i.e., t 1 = 2), from (9), we get: Substituting r = 1, 2 in Equation ( 35), the first two Type-AT TL-moments for Type-I right censoring with left trim for Weibull distribution will be: and, • Type-BT; t 1 = 1 From Equation (12), the rth population Type-BT TL-moments for Type-I right censoring for the Weibull distribution are: By taking that the value of the smallest trim is equal to one (t 1 = 1), from (13) we get: Substituting r = 1, 2 in Equation ( 39), the first two Type-BT TL-moments for Type-I right censoring with left trim for Weibull distribution will be: and, When we suppose that the value of the smallest trim is equal to two (i.e., t 1 = 2), from (15), we get: Substituting r = 1, 2 in Equation ( 42), the first two Type-BT TL-moments for Type-I right censoring with left trim for Weibull distribution will be: and, (44) Left Censoring with Right Trim • Type-A'T; t 2 = 1 From Equation (19), the rth population Type-A T TL-moments for Type-I left censoring for the Weibull distribution are: When we suppose that the value of the largest trim is equal to one (i.e., t 2 = 1), from (20), we get: Substituting r = 1, 2 in Equation ( 46), the first two Type-A T TL-moments for Type-I left censoring with right trim for Weibull distribution will be: and, When we suppose that the value of the largest trim is equal to two (i.e., t 2 = 2), from ( 22), we get: Substituting r = 1, 2 in Equation ( 49), the first two Type-A T TL-moments for Type-I left censoring with right trim for Weibull distribution will be: • Type-B T; From Equation (25), the rth population Type-B T TL-moments for Type-I left censoring for the Weibull distribution are: When we suppose the value of largest trim is equal to one (i.e., t 2 = 1), from (26), we get: Substituting r = 1, 2 in Equation ( 53), the first two Type-B T TL-moments for Type-I left censoring with right trim for Weibull distribution will be: Table 1.The estimates, root of mean square error (RMSE) and relative absolute biases (RABs) for two parameters of the Weibull distribution using TL-moments, direct L-moments, and ML method based on left censoring (a = 0.5 and b = 5) in the presence of outliers. Table 2 . The estimates, root of mean square error (RMSE) and relative absolute biases (RABs) for two parameters of the Weibull distribution using TL-moments, direct L-moments, and ML method based on left censoring (a = 2 and b = 4) in the presence of outliers. Table 4 . The estimates, root of mean square error (RMSE) and relative absolute biases (RABs) for two parameters of the Weibull distribution using TL-moments, direct L-moments, and ML method based on right censoring (a = 0.5 and b = 5) in the presence of outliers. Table 5 . The estimates, root of mean square error (RMSE) and relative absolute biases (RABs) for two parameters of the Weibull distribution using TL-moments, direct L-moments, and ML method based on right censoring (a = 2 and b = 4) in the presence of outliers.
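As a worked illustration of the method of expectations used throughout this section, the sketch below (Python) equates the first two sample TL-moments (the standard Elamir–Seheult estimator) to their population counterparts, computed by numerical quadrature from the quantile function, and solves for the two Weibull parameters. It covers only the simple uncensored TL(1,1) case; the Type-AT/BT censored quantile functions derived above are not reproduced here, and the Weibull parameterization (shape a, scale b) is the same assumption as before.

import numpy as np
from math import comb, factorial
from scipy.integrate import quad
from scipy.optimize import fsolve
from scipy.stats import weibull_min

def pop_tl_moment(q, r, t1=1, t2=1):
    """Population TL-moment of order r for quantile function q (numerical quadrature)."""
    m = r + t1 + t2
    def eos(j):  # E(X j:m)
        c = factorial(m) / (factorial(j - 1) * factorial(m - j))
        return c * quad(lambda u: q(u) * u**(j - 1) * (1 - u)**(m - j), 0, 1)[0]
    return sum((-1)**k * comb(r - 1, k) * eos(r + t1 - k) for k in range(r)) / r

def sample_tl_moment(x, r, t1=1, t2=1):
    """Sample TL-moment (standard Elamir-Seheult estimator) from data x."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = 0.0
    for i in range(t1 + 1, n - t2 + 1):                       # 1-based rank i
        w = sum((-1)**k * comb(r - 1, k) * comb(i - 1, r + t1 - 1 - k)
                * comb(n - i, t2 + k) for k in range(r))
        total += w * x[i - 1]
    return total / (r * comb(n, r + t1 + t2))

# Simulated complete (uncensored) Weibull sample; a = shape, b = scale (assumed notation)
a_true, b_true = 2.0, 4.0
data = weibull_min.rvs(a_true, scale=b_true, size=500, random_state=1)
l1, l2 = sample_tl_moment(data, 1), sample_tl_moment(data, 2)

def tl_equations(theta):
    a, b = theta
    q = lambda u: b * (-np.log(1.0 - u))**(1.0 / a)
    return (pop_tl_moment(q, 1) - l1, pop_tl_moment(q, 2) - l2)

# A sensible starting point matters for the root finder
a_hat, b_hat = fsolve(tl_equations, x0=(1.0, float(np.mean(data))))
print("TL-moment estimates: a = %.3f, b = %.3f" % (a_hat, b_hat))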
2,976.4
2018-08-02T00:00:00.000
[ "Mathematics" ]
A New Analytic Method to Tune a Fractional Order PID Controller T his paper proposes a new method to tune a fractional order PID controller. This method utilizes both the analytic and numeric approach to determine the controller parameters. The control design specifications that must be achieved by the control system are gain crossover frequency, phase margin, and peak magnitude at the resonant frequency, where the latter is a new design specification suggested by this paper. These specifications results in three equations in five unknown variables. Assuming that certain relations exist between two variables and discretizing one of them, a performance index can be evaluated and the optimal controller parameters that minimize this performance index are selected. As a case study, a third order linear time invariant system is taken as a process to be controlled and the proposed method is applied to design the controller. The resultant control system exactly fulfills the control design specification, a feature that is laked in numerical design methods. Through matlab simulation, the step response of the closed loop system with the proposed controller and a conventional PID controller demonstrate the performance of the system in terms of time domain transient response specifications (rise time, overshoot, and settling time) INTRODUCTION The fractional order PID controller (also called ) was proposed by Podlubny, 1994 and Podliubny, 1999 and has a transfer function where , , and and are the parameters of the controller that must be tuned.Parameters and increase the degree of freedom in tuning the controller, which makes the design of the control system more flexible.Fractional order PID controllers have less sensitivity to parameter variation due to these two additional parameters, Zhao, et al The remaining of this paper is organized as follows: In section 2, the proposed tuning method is presented, in section 3, a design and simulation example is presented to demonstrate the application of this method, and in section 4, the conclusions are drawn from the simulation results. THE PROPOSE TUNING METHOD Consider the unity feedback control system shown in Fig. 1.The transfer functions of the controller and the plant are ( ) and ( ), respectively.The sinusoidal transfer function of the controller is Substituting Eq. ( 2) in Eq. ( 3) and equating the real part and imaginary part of both sides yields The steady state error should be zero; therefore, (Final value theorem).The two relations and are assumed to exist between and ; the reason for choosing these relations is to get the sine and cosine functions for the same angle, namely, .i) First, assume that (6) There is one degree of freedom in choosing and (choose one and evaluate the other) as shown in Fig. 2. Substituting Eq. (6) in Eq. ( 4) and Eq. ( 5) yields ( ) Now, take discrete values of , from 0 to 1, say 0.01, 0.02, …, 1 (100 values).For each value of , solve Eq. ( 7) and Eq. ( 8) for and in terms of and . At the resonant frequency of the plant , the magnitude of the open loop transfer function is Squaring both sides of Eq. ( 11) yields Substituting Eq. ( 9) and Eq.(10) in Eq. ( 12), the value of is obtained as follows where , ( ( ) ), ( ( ) ) The performance index that will be used to select the optimal value of ( ) ∫ ( ) ii) Second, assume that (15) Substituting Eq. ( 15) in Eq. ( 4) and Eq. ( 5) yields For the same discrete values of , solve Eq. ( 16) and Eq. ( 17) for and in terms of and . 
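The three specifications can also be checked numerically for any candidate parameter set. The sketch below (Python) evaluates the loop response L(jω) = C(jω)G(jω) for the standard fractional-order PID form C(s) = Kp + Ki s^(−λ) + Kd s^(μ) and returns the residuals of the gain-crossover, phase-margin, and resonant-peak conditions. The plant, parameter values, and target specifications are placeholders rather than the paper's example, since its symbols and numerical values are not reproduced here.

import numpy as np

def fopid(w, Kp, Ki, Kd, lam, mu):
    """C(jw) for the standard fractional-order PID C(s) = Kp + Ki*s**(-lam) + Kd*s**mu."""
    s = 1j * w
    return Kp + Ki * s**(-lam) + Kd * s**mu

def plant(w):
    """Placeholder third-order plant G(s) = 1/((s+1)(s+2)(s+3)); not the paper's example."""
    s = 1j * w
    return 1.0 / ((s + 1.0) * (s + 2.0) * (s + 3.0))

def spec_residuals(params, w_gc, pm_deg, w_r, Mr):
    """Residuals of the three design specifications:
    |L(j w_gc)| = 1, arg L(j w_gc) = -180 deg + phase margin, |L(j w_r)| = Mr."""
    Kp, Ki, Kd, lam, mu = params
    L_gc = fopid(w_gc, Kp, Ki, Kd, lam, mu) * plant(w_gc)
    L_r = fopid(w_r, Kp, Ki, Kd, lam, mu) * plant(w_r)
    return np.array([abs(L_gc) - 1.0,
                     np.degrees(np.angle(L_gc)) + 180.0 - pm_deg,
                     abs(L_r) - Mr])

# Placeholder controller parameters and specifications, for illustration only
print(spec_residuals((2.0, 1.5, 0.8, 0.6, 0.4), w_gc=1.0, pm_deg=60.0, w_r=2.0, Mr=0.5))

In the procedure above, one fractional order is discretized, the other is tied to it by the assumed relation, and the remaining gains follow analytically; a residual function of this kind is the numerical counterpart of those specification equations and could equally be handed to a root finder.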
DESIGN AND SIMULATION Consider applying the proposed design procedure to the plant given by the transfer function The resultant controller is The control design specifications (gain crossover frequency, phase margin, and peak magnitude at the resonant frequency) are all achieved by this controller. The Bode plot of the open loop transfer function is shown in Fig. 4. The step response of the closed loop system with this controller is shown in Fig. 5. For the purpose of comparison, a conventional PID controller is designed using the MATLAB pidtune command, which designs a PID controller for a given transfer function. The transfer function of this controller is The step response of the closed loop system with the PID controller is shown in Fig. 6. Table 1 shows the transient response specifications of the two systems. The system with the fractional order controller has better percentage overshoot, delay time, and rise time than that with the PID controller, while the settling time is much greater. This is because the specifications that are fulfilled by the controller are the gain crossover frequency, which enhances the rise time and delay time, and the phase margin, which enhances the percentage overshoot. CONCLUSIONS A conclusion can be drawn from the design procedure of this paper that if the domain of some of the design variables is restricted by a certain mathematical relation (restriction) between these variables (the two fractional orders in this case), an analytic, rather than a numerical, solution can be obtained. Unlike optimization problems that may be subject to certain constraints (such as requiring the parameters to be positive), the analytic solution gives an exact solution of the design specification equations; this solution may be positive, negative, or even a complex number. This is evident in the value of one of the resulting parameters, which is negative. While a controller that is tuned by optimization techniques can fulfill the design specifications with some sufficiently small error, the analytically tuned controller fulfills the design specifications exactly, since it is the solution of a set of simultaneous equations; this is evident in the enhanced percentage overshoot and rise time of the closed loop system. = set of positive real numbers. = fractional order of integration of the controller, dimensionless. = optimal value of the integration order, dimensionless. = fractional order of differentiation of the controller, dimensionless. = optimal value of the differentiation order, dimensionless. = gain crossover frequency of the open loop transfer function, radian/s. = resonance frequency of the plant, radian/s. = real part of a complex number. = imaginary part of a complex number. Badri and Tavazoei, 2013, solved the equations graphically by finding the intersection point of two curves. Sadati, 2007, used a performance index in the time domain to determine the controller parameters, while Tepljakov et al., 2015, used a performance index in the frequency domain to determine these parameters. Lazarevic, 2013, tuned the controller parameters using a genetic algorithm to minimize a performance index. The objective functions in such problems have complex surfaces such that the analytic methods of optimization often fail (Dorcak et al., 2006). Another approach to tune a fractional order PID controller was given in Xue et al., 2006, Khalil et al., 2009, and Joshi and Talange, 2013, by taking certain values for the fractional orders of integration and differentiation and finding the optimal values for the remaining gain parameters.
REFERENCES Xue, D., Zhao, C., and Chen, Y., 2006, Fractional Order PID Control of a DC-Motor with Elastic Shaft: A Case Study, Proceedings of the 2006 American Control Conference, Minneapolis, Minnesota, USA, June 14-16, pp. 3182-3187. Zhao, C., Xue, D., and Chen, Y., 2005, A FOPID Tuning Algorithm for a Class of Fractional Order Plants, IEEE International Conference on Mechatronics and Automation, vol. 1, pp. 216-221. NOMENCLATURE e = error between the desired and actual output. j = imaginary unit (= √−1). = derivative gain of the PID or controller, dimensionless. = integral gain of the PID or controller, dimensionless. = proportional gain of the PID or controller, dimensionless. = magnitude of the open loop transfer function at the resonant frequency, dimensionless. ( ), ( ) = performance indices. R = set of real numbers. , and Costa, 2010. In Zhao et al., 2005, and Caponetto et al., 2004, the controller parameters were derived analytically to achieve gain (phase) margin and phase (gain) crossover frequency specifications. In Monje et al., 2008, one of the equations was taken as an objective function and the rest of the specifications were taken as constraints.
1,818.6
2017-11-24T00:00:00.000
[ "Mathematics" ]
Multilayer metamaterial absorbers inspired by perfectly matched layers We derive periodic multilayer absorbers with effective uniaxial properties similar to perfectly matched layers (PML). This approximate representation of PML is based on the effective medium theory and we call it an effective medium PML (EM-PML). We compare the spatial reflection spectrum of the layered absorbers to that of a PML material and demonstrate that after neglecting gain and magnetic properties, the absorber remains functional. This opens a route to create electromagnetic absorbers for real and not only numerical applications and as an example we introduce a layered absorber for the wavelength of $8$~$\mu$m made of SiO$_2$ and NaCl. We also show that similar cylindrical core-shell nanostructures derived from flat multilayers also exhibit very good absorptive and reflective properties despite the different geometry. Introduction Perfectly matched layer (PML) (Berenger, 2007) absorbers are now widely used to terminate electromagnetic simulations with an open domain. PMLs suppress reflection and ensure absorption of incident electromagnetic radiation at any angle and any polarization. A variety of PML formulations exist, starting from the early split-field PML (Berenger, 1994), and the coordinate stretching approach (Chew and Weedon, 1994) up to the convolutional PML (CPML) (Roden and Gedney, 2000), and the near a ) b ) Fig. 1 a) A uniaxial PML characterized by anisotropic permittivity and permeability tensors is attached to a simulation area (ε 1 , µ 1 ). b) An effective-medium PML, which has the same uniaxial properties, can be composed of two isotropic materials (ε wi , µ wi , where i = 1, 2) arranged in a multilayered fashion. In both cases a perfect electric conductor (PEC) terminates the PML PML (NPML) (Cummer, 2003). In this paper we refer to the Maxwellian formulation of PML, represented by an artificial material with uniaxial permittivity and permeability tensors, usually termed as uniaxial PML (UPML) (Sacks et al, 1995;Gedney, 1996). A PML can be used with both time-domain and frequency domain methods, as well as with finite difference or finite element discretization schemes. It can assume various dispersion models, see e.g. the time-derivative Lorentz material that is capable of absorbing oblique, pulsed electromagnetic radiation having narrow and broad waists (Ziolkowski, 1997). A PML can not be applied in some rare cases and for instance it fails to absorb a backward propagating wave for which an adiabatic absorber should be used instead (Zhang et al, 2008;Loh et al, 2009). Electromagnetic absorbers have a much longer history than any kind of numerical modeling. Their possible applications range from modification of radar echo, through applications related to electromagnetic compatibility, up to photovoltaics. Early realworld absorbers were based on resistive sheets separated from a ground plate by quarter wave distances. With several sheets and multiple resonances it was possible to achieve broadband operation. The idea evolved into the theory of frequency selective surfaces (Munk, 2000). Furthermore, it is possible to obtain a tailored impedance at a surface transition region using homogenized periodic one-dimensional or two-dimensional corrugated surfaces (Kristensson, 2005). A static periodic magnetization obtained with ferromagnetic or ferrimagnetic materials is another route to obtaining broadband absorbers (Ramprecht and Norgren, 2008). 
A recent overview paper (Watts et al, 2012) can serve as a tutorial on absorbers with the focus on novel metamaterial absorbers based on split-ring and electric-ring resonators. In this paper we introduce the effective medium PML absorbers (EM-PML), which are metamaterial absorbers with a layered structure that exhibit effective permittivity and permeability tensors similar to a PML material. We calculate the reflection coefficient achieved with these layered absorbers. We look towards their possible physical realizations. Approximate representation of UPML A schematic of a uniaxial perfectly matched layer attached to a simulation area is shown in Fig. 1. In the outer area of the simulation domain (which neighbors the PML), the permittivity and permeability are equal to ε 1 , and µ 1 . The UPML is defined as a material whose permittivity ε 2 and permeability µ 2 take the following tensor forms, where and The parameter s is a non-zero freely chosen complex number, whose imaginary part determines the strength of absorption within the UPML. The conditions stated in Eq. (2) alone are sufficient to remove reflections for the TM polarization, for any plane-wave propagating in-plane, independently of its angle of incidence and frequency. Equation (3) assures the same for the TE polarization and when both conditions are satisfied, the UPML is reflection-free for any polarization and any angle of incidence in a three-dimensional case. Subsequently, s may be varied with the coordinate z to give a continuously graded permittivity and permeability, which works better with most discretization schemes. Then, the same parameter s may also be linked to a coordinate mapping from real to complex coordinates. Our approximate representation of a UPML consists of a one-dimensional stack of uniform layers. According to the effective medium theory (EMT) a stack consisting of thin layers may be homogenized and replaced by a uniform uniaxial medium with effective permittivity and permeability tensors. For the TM polarization, the effective permittivity and permeabilty tensors of a stack consisting of two materials with permittivity and permeability pairs (ε w1 , µ w1 ), and (ε w2 , µ w2 ) match that of a UMPL, expressed by Eq. (1), when where f is the filling factor, i.e. the volume fraction of material w1 in the stack. A similar condition applies for the TE polarization. Solving equations (4) and (5) for ε w1 , and ε w2 yields, where In Fig. 2 we illustrate the permittivities calculated from eq. (7) as a function of the fill factor f and s, for s = 1 + αi. We use the branch of Eq. (8) with |ρ| > 1. Permittivities of the two constituent isotropic media (ε w1 and ε w2 in left and right columns, respectively) that form the uniaxial EM-PML as functions of the filling factor f and the imaginary part of s (for re(s) = 1). Top row: real parts (a) re(ε w1 ) and (b) re(ε w2 ). Bottom row: imaginary parts (c) im(ε w1 ) and (d) −im(ε w2 ). Negative imaginary part of permittivity refers to materials with optical gain. Qualitatively, one of the materials is a high-loss material with a large refractive index (greater than one), while the other is a low-gain medium with a refractive index between 0 and 1 2.0 Fig. 3 Dependence of spatial reflection spectrum R(kx/k 0 ) on a/λ for a multilayer consisting of N = 5 periods obtained with f = 0.6 and a) s = 1 + 0.5i, b) s = 1 + 5i. The result is polarization invariant Expressions similar to Eqs. 
(4) and (5) may be written and solved for µ w1 , and µ w2 , and when ε 1 = µ 1 then ε w1 = µ w1 and ε w2 = µ w2 . An absorber designed for the TM polarization may have one of the permeabilities freely assigned, e.g. µ w2 = 1, while the other is then given by Eq. (6). If additionally s = 1+αi then re(µ w2 ) = 1 and the imaginary part of µ w2 for this case is still positive, as shown in Fig. 4. The imaginary part of µ w2 is small when f is large and α is small. Here, we have assumed that the premeability of the second material is unity µ w2 = 1, as can be done for TM polarized light, and re(µ w1 ) = 1 For either large f or small α the negative conductivity of the second material is also negligible (see Fig. 2d). The performance of a multilayer absorber obtained for f = 0.6 and s = 1 + 0.5i, and s = 1 + 5i is illustrated in Fig. 3. Either of the two values of s enables to construct an efficient broadband absorber which is at the same time subwavelength in size. Layers have both magnetic and electric properties, including gain, and a complex permeability. In the limit of a/λ → 0 the multilayer approaches the properties of a true UPML (but at the same time, its thickness approaches N · a → 0). When s = 1 + 5i, an absorber consisting of N = 5 periods, with a total thickness of L = 5a ≈ λ/20 reflects −30dB for a broad range of incidence angles, and the reflection decreases rapidly with total thickness L/λ. However, the evanescent waves are amplified in this situation. If the absorbing power is smaller, e.g. s = 1 + 0.5i, the thickness L/λ has to be larger, but the reflection is less sensitive to the magnetic permeability and gain. Let us asses how the reflection spectrum is changed after the magnetic permeabilities and gain have been neglected. The result is depicted in Fig. 5 for s = 1 + 0.5i and f = 0.6. The absorber consists of layers made of a lossy dielectric and of another material with permittivity lower than one. A probable route to implement it physically is to use a metamaterial, e.g. a fishnet structure, to make use of electromagnetic mixing rules, or to use some material near its resonance frequency. Now the reflection coefficient depends on polarization. Before neglecting gain and im(µ w1 ), the reflection has been smaller for the TM than for the TE polarization, both for the propagating and for evanescent waves. Reflection remained large only at grazing incident angles, like for ordinary UPML. However, the variant of the stack with no gain and with no magnetic properties performs better for the TE polarization (See Fig. 5bd). Finally, the multilayer considered here has an elliptical effective dispersion relation, while similar absorbers made of hyperbolic metamaterials have been also recently proposed (Guclu et al, 2012), although with no relation to PML. 3 Layered slab and core-shell metamaterial absorbers Based on the theoretical considerations and material parameters used to calculate the spatial reflection spectrum in Fig. 5, a simple rule of thumb for the range of required permittivities can be drawn up. This simple rule requires one permittivity to have its real part between 0 and 1, while the other premittivity would have its real part larger than one. The calculations show, that losses should be provided by the second material (with Re(ǫ) > 1), while in the first material we merely neglect gain. Materials in general have Re(ǫ) > 1, with the exception of localized transitions and broader frequency ranges in metals up to the plasma frequency. 
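The layer permittivities behind these observations can be obtained numerically from the two effective-medium conditions alone. For the TM case, the in-plane component of the stack (the arithmetic mean f ε w1 + (1 − f) ε w2) must equal ε 1 s and the normal component (the harmonic mean) must equal ε 1 /s; eliminating one unknown gives a quadratic for the other. The sketch below (Python) solves that quadratic and verifies the match. It does not reproduce the closed forms of Eqs. (6)–(8) or the branch selection |ρ| > 1; both roots are returned, and ε 1 = 1 is assumed for the host.

import numpy as np

def empml_layer_permittivities(eps_host, s, f):
    """Layer permittivities (eps_w1, eps_w2) of a two-layer EM-PML period.
    The stack must reproduce the UPML values: arithmetic mean (in-plane)
    eps_host*s and harmonic mean (normal) eps_host/s, with f the filling
    factor of material 1. Both roots of the quadratic are returned."""
    A = eps_host * s                    # target of f*e1 + (1-f)*e2
    B = eps_host / s                    # target of (f/e1 + (1-f)/e2)**-1
    # eliminating e2 = (A - f*e1)/(1-f) gives f*e1**2 + (B*(1-2f) - A)*e1 + A*B*f = 0
    roots_e1 = np.roots([f, B * (1.0 - 2.0 * f) - A, A * B * f])
    return [(e1, (A - f * e1) / (1.0 - f)) for e1 in roots_e1]

eps_host, s, f = 1.0, 1.0 + 0.5j, 0.6   # s and f as used for Fig. 3a; eps_host = 1 assumed
for e1, e2 in empml_layer_permittivities(eps_host, s, f):
    arith = f * e1 + (1.0 - f) * e2
    harm = 1.0 / (f / e1 + (1.0 - f) / e2)
    ok = np.allclose([arith, harm], [eps_host * s, eps_host / s])
    print("eps_w1 =", np.round(e1, 4), " eps_w2 =", np.round(e2, 4), " matches UPML:", ok)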
We will now demonstrate the operation of the proposed layered absorber consisting of nonmagnetic layers with no gain. In Fig. 6 we present the results of a finite difference time domain (FDTD) simulation of a layered absorber rolled into a cylindrical core-shell multilayer. The images show the time-averaged energy density distribution obtained for the TE polarization (in Fig. 6a) and TM polarization (in Fig. 6b). Light is incident from the bottom side. The composition of the metamaterial is the same as in Figs. 5cd, but the multilayer is deposited over a cylindrical metallic (perfectly conducting) material. The absorber consists of N = 5 periods of non-magnetic concentric layers, with the radial pitch equal to a = 0.18λ, the radius of the internal PEC material equal to r int = 2a, and the external radius equal to r ext = 7a. The permittivities of the layers are equal to ε w1 = 1.474 + 0.017i and ε w2 = 0.289, and the filling factor equals f = 0.6. Reflections, understood as the part of the energy backscattered to the bottom side of the simulation area, are as small as R T E = 0.05% and R T M = 0.3%. Thanks to the cylindrical geometry, it is possible to see the operation of the absorber at all possible angles of incidence at the same time. Notably, the absorber performs well for both the TE and TM polarizations, and in a spherical 3D core-shell geometry, where there is no decoupling into these two polarizations, we expect similar operation. Finally, we demonstrate the operation of a layered absorber consisting of real materials. Here, we make use of materials with localized electronic transitions. In the mid-infrared SiO 2 is such a material: it features a strong transition at approximately 9 µm, and in the range between 7.2 and 8 µm the real part of its permittivity is between 0 and 1. The complementary material of choice is NaCl, which in this range has Re(ε) ≈ 1.5; it is also weakly dispersive in this range and, due to the Kramers-Kronig relations between the real and imaginary parts of the permittivity, it has a very small imaginary part. Thus, SiO 2 needs to provide the dissipation to extinguish the incident beam. Deposition of alternating SiO 2 and NaCl or LiF layers of the required thickness may be accomplished using such techniques as chemical vapor deposition or thermal evaporation (Fornarini et al, 1999; Kim and King, 2007), although with the required total thickness of the layers approaching a few wavelengths, a fast method is preferable, at least for absorbers intended for the infrared. Fabrication of multilayered shell structures is more challenging, although synthesis of core/shell nanoparticles incorporating SiO 2 or Al 2 O 3 and noble metals is commonplace nowadays (Liu et al, 2014; Wang et al, 2014; Mai et al, 2011). In the case considered here, it would be required to synthesize a layered spherical particle consisting of only dielectric materials. As mentioned previously, SiO 2 and Al 2 O 3 have been used before, although only in relatively simple syntheses of core-shell structures; magnesium fluoride has also been synthesized in the form of nanoparticles (Lellouche et al, 2012). However, undoubtedly some effort would be required to develop a procedure for the synthesis of layered nanoparticles. In Fig. 7 we present the results obtained for a plane wave incident normally onto a SiO 2 /NaCl multilayer designed for the 7-10 µm wavelength range. The refractive index of the materials is taken from the literature (Palik, 1985) and the structure is optimized for λ = 8 µm.
The pitch is equal to a = 200 nm, and the filling factor is f = 0.56. Due to small losses in glass, the absorption distance is relatively long and reflection smaller than 1% is observed already for 10 layers. In order to achieve both transmission and reflection smaller than 0.1% N = 400 is needed. The thickness can be reduced using materials with larger values of the imaginary part of permittivity. In general, a desired refractive index of one of the layers (in the range 0 < ℜ(n) < 1) can be manufactured using the electromagnetic mixing rules. Recently, absorbers made of hyperbolic metamaterials have been proposed (Guclu et al, 2012) and similar to that material, our multilayer has an elliptical dispersion with a large eccentricity. Conclusions We have introduced an approximate representation of the uniaxial perfectly matched layer reflection-free absorber. The representation consists of a one-dimensional stack of uniform and isotropic metamaterial layers. A further simplification to non-magnetic materials with no gain can be assumed for some combinations of filling fraction and absorbing power. We have also shown that similar cylindrical core-shell nanostructures derived from flat multilayers also exhibit very good absorptive and reflective properties. A probable route to implement the absorber experimentally is by using a lossy dielectric as one material and for the other to take a metamaterial, or to make use of electromagnetic mixing rules, or to use some material near its resonance frequency. As an example we have demonstrated a layered absorber for the wavelength of 8 µm made of SiO 2 and NaCl.
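A calculation of the kind shown in Fig. 7 can be sketched with the standard characteristic-matrix (transfer-matrix) method at normal incidence, as below (Python). The mid-infrared refractive indices used here are rough placeholders rather than the Palik data of the paper, and the assignment of the filling factor to the SiO2 layer is an assumption, so the numbers will not reproduce Fig. 7; the sketch only shows how R and T of the N-period SiO2/NaCl stack are obtained.

import numpy as np

def stack_RT(n_layers, d_layers, lam, n_in=1.0, n_out=1.0):
    """Reflectance/transmittance of a multilayer at normal incidence
    (standard characteristic-matrix method; incident and exit media lossless)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / lam                     # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out], dtype=complex)
    denom = n_in * B + C
    R = abs((n_in * B - C) / denom) ** 2
    T = 4.0 * n_in * n_out / abs(denom) ** 2
    return R, T

lam = 8.0e-6                    # design wavelength, 8 um
a, f, N = 200e-9, 0.56, 400     # pitch, filling factor, number of periods from the text
n_SiO2 = 0.55 + 0.30j           # placeholder mid-IR values, NOT the Palik data
n_NaCl = 1.52 + 0.00j
n = [n_SiO2, n_NaCl] * N        # f assumed to be the SiO2 fraction of each period
d = [f * a, (1.0 - f) * a] * N
print("R = %.4f   T = %.2e" % stack_RT(n, d, lam))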
3,645.6
2014-06-07T00:00:00.000
[ "Physics" ]
Attentional Biases Toward Spiders Do Not Modulate Retrieval Abstract. When responding to stimuli, response and stimulus’ features are thought to be integrated into a short episodic memory trace, an event file. Repeating any of its components causes retrieval of the whole event file leading to benefits for full repetitions and changes but interference for partial repetitions. These binding effects are especially pronounced if attention is allocated to certain features. We used attentional biases caused by spider stimuli, aiming to modulate the impact of attention on retrieval. Participants discriminated the orientation of bars repeating or changing their location in prime-probe sequences. Crucially, shortly before probe target onset, an image of a spider and that of a cub appeared at one position each – one of which was spatially congruent with the following probe target. Participants were faster when responding to targets spatially congruent with a preceding spider, suggesting an attentional bias toward aversive information. Yet, neither overall binding effects differed between content of preceding spatially congruent images nor did this effect emerge when taking individual fear of spiders into account. We conclude that attentional biases toward spiders modulate overall behavior, but that this has no impact on retrieval. Imagine you want to select some books from an old wooden shelf.While doing so, you notice a spider appearing in a corner at the end of the shelf, whichstartled by your activitiesdirectly hides between some of the books.Will attention shift toward that location or divert from that area?And will this affect your immediate actions, for example, when you touch a book in that area?Our study is concerned with this question, namely how phobic stimuli might affect allocation of attention which in turn has a direct consequence for action control processes. Action control theories assume that when responding to a stimulus, the stimulus features as well as the response are coupled into a short episodic memory trace, referred to as an event file (Frings et al., 2020;Hommel, 2004).If any information repeats, the previous event file is retrieved and affects performance.Fully repeating all information is beneficial because the retrieved event file fully matches.However, if information only partially repeats, interference occurs, resulting in partial repetition costs (Hommel, 1998(Hommel, , 2004;;Hommel et al., 2001).Lastly, if no information repeats, there are no benefits nor interference caused by retrievalbecause nothing is retrieved.This data pattern can be investigated in primeprobe sequences: In these, participants first respond to a prime target, followed by the response to a probe target. From prime to probe, response repetitions and changes are orthogonally varied with repetitions and changes of response-irrelevant information.The effect resulting from the separate processes of integration into an event file and the retrieval of it (Frings et al., 2020;Laub et al., 2018) is referred to as stimulus-response (S-R) binding. 
Impact of Attention on S-R Binding Effects in Action Control The potential modulating role of attention on integration and retrieval has so far produced mixed results (for a discussion, see Singh et al., 2018).As long as there is some task relevance in feature dimensions, there is only little requirement for attention to features to spur on the integration of such into an event file (Hommel, 2005;Hommel & Colzato, 2004).Yet, attention can have a modulating role on the occurrence of binding effects (see Frings et al., 2020).For example, response-irrelevant stimuli only retrieve the response, if they are attended to (Moeller & Frings, 2014), and binding effects are stronger if the response-irrelevant feature is attended to (Singh et al., 2018).This fits well with the intentional weighting mechanism proposed by Memelink and Hommel (2013; see also Hommel et al., 2014): Here, the cognitive system is assumed to assign more weight to a certain feature dimension due to task demands (certain instructions or goals, etc.)like color, location, and so onand by that, said feature dimension has a stronger impact on processing and performance.Consequently, if attention is directed to certain dimensions, the resulting binding effects increase (Hommel et al., 2001(Hommel et al., , 2014)).To summarize, although the exact impact is debated (see Singh et al., 2018), there have been multiple observations of increased binding effects for attended features, especially if attention is allocated to such features during retrieval (e.g., Moeller & Frings, 2014).In other words, increased attention to a certain feature dimension should increase the strength of observed S-R binding effects. Attentional Biases Towardor Away From -Certain Stimuli Allocating attention to certain features can be done, for example, by task instructions (e.g., Hommel et al., 2014;Singh et al., 2018).It is also possible to allocate attention toward specific locations, that is, spatial attention, in multiple ways.For example, exogenous attention can be elicited by sudden visual onsets in the periphery and endogenous attention can be elicited by central stimuli that are predictive where the upcoming target will be presented (Posner, 1980; see also Chica et al., 2013Chica et al., , 2014)).Attentional effects have been found in response to arrows pointing (e.g., Pratt et al., 2010;Pratt & Hommel, 2003) or eyes looking (e.g., Ristic et al., 2007) somewhere.Moreover, some stimulus categories like highly arousing images (e.g., Vogt et al., 2008) or faces (e.g., Palermo & Rhodes, 2007;Ro et al., 2001) attract attention, for example, when the latter are presented in parallel with nonface objects (e.g., Theeuwes & Van der Stigchel, 2006). 
There is a vast amount of research on how specific stimulus categories drive attention in individuals with certain phobias. For example, individuals with high trait anxiety shift their attention to threatening faces (e.g., Bradley et al., 1998) and threatening scenes (Mogg et al., 2004) more than low trait individuals. Mogg and Bradley (2006) found that individuals with high levels of spider fear have an attentional bias toward spider images presented with short exposure (200 ms), which is no longer observed with longer image exposure (500 ms and 2,000 ms). Spider-phobic individuals are also slower to disengage from such stimuli (Gerdes et al., 2008). However, it is argued that processing in individuals with high fear of spiders follows a pattern of initial vigilance followed by subsequent behavioral avoidance (Pflugshaupt et al., 2005; see also Rinck et al., 2010); this can also result in attentional avoidance of the spider stimulus (e.g., Pflugshaupt et al., 2005; Rinck & Becker, 2006). For example, phobic individuals spend less time viewing phobia-relevant images, suggesting avoidance behavior (Tolin et al., 1999). This spider avoidance in fearful individuals can occur rather automatically, without initial attention toward threat (Huijding et al., 2011). Moreover, Thorpe and Salkovskis (1998) found that spider-phobic individuals do not only attend to spiders but also to regions of safety. In their study, participants had to detect target lights at two positions, a wall and a door. Crucially, they placed a living tarantula at one of these two target locations. Spider-phobic individuals responded faster to an appearing target stimulus if the location of the live spider coincided with the location of the door. Evidence for such parallel scanning for safety in the presence of a real spider has also been supported with eye movement data of spider-phobic individuals (Lange et al., 2004).

Current Study

To our knowledge, no previous studies have looked at how the attentional biases generated by spiders might have an impact on binding and retrieval between response and location, especially because retrieval is subject to a potential modulation by attentional resources. In the current study, participants discriminated the orientation of a shape which repeated or changed its position. We expected S-R binding between response and location, that is, interference by partial repetitions compared to full repetitions and full changes, evidenced in a crossed data pattern (e.g., Hommel, 2004; discrimination tasks in Schöpper, Hilchey, et al., 2020). Crucially, an image of a cub and an image of a spider appeared between prime target and probe target, one of which was spatially congruent with the subsequent probe target position. If attention to certain features increases S-R binding (e.g., Singh et al., 2018), especially if attention is allocated to such features during retrieval (e.g., Moeller & Frings, 2014), we expect stronger S-R binding for trials in which the probe target is spatially congruent with a previously presented spider image, due to attentional allocation toward this aversive stimulus category (e.g., Mogg & Bradley, 2006). In contrast, if attentional avoidance of spiders (e.g., Rinck & Becker, 2006) drives the data pattern, that is, attention is shifted away from the aversive content, we expect weaker S-R binding at said position. In any case, an attentional bias or attentional avoidance modulating S-R binding might be especially pronounced for spider-fearful individuals (Rinck & Becker, 2006).
Participants

Binding effects are reliably observed (e.g., Frings et al., 2007; Singh et al., 2016), and those arising from response × location binding can come with very high effect sizes (e.g., d = 2.97 and d = 2.77 for the color discrimination tasks in Experiment 1 and Experiment 2, respectively, in Schöpper, Hilchey, et al., 2020). One hundred and five students of the University of Trier participated for partial course credit. One participant was excluded due to not following task instructions (i.e., pressing only one key for most of the experiment). Three participants were excluded because they were far outliers when compared with the sample (i.e., more than 3 × the interquartile range above the third quartile of the number of excluded trials in reaction times and error rates, and more than 3 × the interquartile range above the third quartile in overall error rates); these criteria, that is, being a far outlier on several variables, were decided a priori. This led to a final sample size of 101 participants (84 female, 17 male; M_age = 21.47, SD_age = 2.68, age range: 18-32 years). With an assumed α = .05 (one-tailed) and an expected effect size of at least d = 0.8 for finding a binding effect, this sample size leads to a power of 1 − β = 1.00 (G*Power, version 3.1.9.2; Faul et al., 2007). This sample size is sufficient to find a within-subject modulation of binding due to valence mapping with an expected effect size of d = 0.5 (assumed α = .05, two-tailed) with a power of 1 − β = 1.00; moreover, with a power of 1 − β = 0.95, it is sensitive to an effect size of at least d = 0.36 (assumed α = .05, two-tailed). Prior to the experimental start, participants were informed that the experiment involved the presentation of spider images. All participants gave their informed consent to a linked consent form by responding in a text field popping up at the start of the experiment. The experiment complied with ethical standards for conducting behavioral studies at the University of Trier. One participant reported an uncorrected visual impairment, but said participant's data did not stand out when compared with the sample; consequently, we included the data. All other participants reported normal or corrected-to-normal vision.

Apparatus and Materials

The experiment was an online study which was programmed in PsychoPy (Peirce et al., 2019) and then uploaded to Pavlovia (https://pavlovia.org/). Participants were asked to work through the experiment on a computer or laptop, but not on a smartphone or touchpad. Due to the online setup, the exact sizes of stimuli could vary; however, all stimuli, distances, and positions were programmed in pixels so that the relative sizes were identical, irrespective of the device used. At the center of the screen was a white fixation cross (30 × 30 px). Two white frames (length × height: 246 × 186 px) with a line thickness of 3 px appeared 150 px above and 150 px below the fixation cross (center-to-center), that is, frames were 300 px apart (center-to-center). These frames were empty (i.e., black) throughout most of a trial sequence. However, in these frames, an image (240 × 180 px) of a cub (fox cub, polar bear cub, kitten; picture IDs: P081, P095, P096) or a spider (three different species; picture IDs: SP002, SP012, SP100) could appear, all taken from the Geneva affective picture database (GAPED; Dan-Glauser & Scherer, 2011). Targets were gray bars (RGB: 127, 127, 127), which appeared with either a horizontal (80 × 20 px) or vertical (20 × 80 px) orientation.
Targets could appear at the center of the upper or lower frame, that is, 150 px above or below the fixation cross (center-to-center). To measure fear of spiders, we used a short screening developed by Rinck et al. (2002; "Spinnenangst-Screening," SAS). This short 4-item questionnaire is based on the diagnostic criteria for spider phobia in the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric Association, 1994). It involves four statements in German (e.g., "Ich habe Angst vor Spinnen"; I'm afraid of spiders), to which participants have to indicate their agreement on a 7-point Likert scale from 0 (= trifft gar nicht zu/strongly disagree) to 6 (= trifft genau zu/applies exactly).

Design

The experiment used a 2 (valence mapping: cub vs. spider) × 2 (response relation: repetition vs. change) × 2 (location relation: repetition vs. change) within-subject design. The binding effect is computed as the interaction of response relation and location relation. Its modulation by valence mapping is derived from the three-way interaction.

Procedure

The experiment took place online; thus, individual settings may have varied. The trial structure was a prime-probe design, that is, participants first gave a response to a prime target followed by a response to a probe target (Figure 1). A trial sequence started with the first fixation display (500 ms), showing the fixation cross at the center and an empty white frame above and below it. The prime display was identical except that now a vertical or horizontal bar appeared in one of the two frames until response. Participants were asked to press the F key for horizontal bars and the J key for vertical bars. After a response was given, the prime target disappeared, leaving the fixation cross and white frames in isolation again for 500 ms. Now, an image of a cub and an image of a spider were depicted in the two frames for 200 ms. There was always one cub paired with one spider, and the possible pairings (i.e., fox cub with spider 1, fox cub with spider 2, and so on) were pseudorandomly balanced across the whole experiment. Participants were instructed that the images were task-irrelevant. Directly following the image presentation, the probe target appeared in one of the two frames. Probe responses were given as described for the prime display. After the probe response was given, the probe target and fixation cross both disappeared, leaving the empty frames in isolation (700 ms). If participants responded incorrectly during the prime or probe display, an error message appeared for 1,000 ms directly after the incorrect response.
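To make the dependent measure of the design described above concrete, the following minimal sketch shows one way the binding effect (the response relation × location relation interaction) could be computed from trial-level data. It is only an illustration: the data frame, column names, and reaction times are invented placeholders, and the sign convention (a positive value meaning stronger binding) is ours, not necessarily the one used by the authors.

```python
import pandas as pd

# Hypothetical trial-level data; column names are placeholders, not the authors' variables.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1],
    "response_relation": ["repetition", "repetition", "change", "change"],
    "location_relation": ["repetition", "change", "repetition", "change"],
    "valence_mapping": ["cub", "cub", "cub", "cub"],
    "rt": [512.0, 548.0, 553.0, 509.0],
})

def binding_effect(df):
    """Binding effect = interaction of response relation x location relation:
    (benefit of a location repetition when the response repeats) minus
    (benefit of a location repetition when the response changes)."""
    m = df.groupby(["response_relation", "location_relation"])["rt"].mean()
    rep_benefit = m.loc[("repetition", "change")] - m.loc[("repetition", "repetition")]
    chg_benefit = m.loc[("change", "change")] - m.loc[("change", "repetition")]
    return rep_benefit - chg_benefit

# One binding score per participant and valence mapping (cub vs. spider).
per_mapping = (
    trials.groupby(["participant", "valence_mapping"])
    .apply(binding_effect)
    .rename("binding")
)
print(per_mapping)
```

The cub-minus-spider difference used later in the Results would then simply be the difference between each participant's two per-mapping binding scores.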
From prime to probe, the response could repeat with the location of the target also repeating (response repetition, location repetition; RRLR) or changing (response repetition, location change; RRLC), or the response could change with the location of the target repeating (response change, location repetition; RCLR) or also changing (response change, location change; RCLC). Crucially, the positions of the cub image and the spider image were orthogonally varied with responses and locations. If the position of the probe target was previously occupied by a cub image, this condition was labeled as cub mapping. In contrast, if the position of the probe target was previously occupied by a spider image, this condition was labeled as spider mapping. All combinations of targets, positions, and cub/spider images were pseudorandomly balanced across all conditions and participants. Participants first completed 16 practice trials which were drawn randomly from all combinations. Here participants received feedback for both correct ("Richtig!", i.e., right) and incorrect ("Falsch!", i.e., wrong) responses. For the experimental trials, participants completed 36 prime-probe sequences for every condition, leading to a total of 288 trials. During the experimental trials, participants only received feedback for incorrect responses. Participants could take self-paced breaks after the 72nd, 144th, and 216th trial.

After the experimental trials, participants completed the 4-item screening for spider fear (Rinck et al., 2002) without time pressure. Responses were given with the number keys.

Results

Trials in which the prime response was incorrect were excluded from the analysis of probe reaction times and error rates (5.72%). Further, for probe reaction times, we excluded trials if the probe response was incorrect (additional 4.59%), if the probe response was below 200 ms (additional 0.01%), or if it was more than 1.5 interquartile ranges above the third quartile of each individual participant's distribution (Tukey, 1977; additional 3.62%). Due to these constraints, 13.94% of trials were discarded from the reaction time analysis. Mean reaction times and error rates are depicted in Table 1.

Impact of Spider Fear on Binding Effects

We found strong S-R binding effects both in reaction times and error rates. However, an image of a cub or spider spatially congruent with the succeeding probe position did not modulate overall binding effects. We thus looked at the impact of individual spider fear on the strength of binding. From the 4-item questionnaire (Cronbach's α = .90), we calculated the average spider fear value for each participant, which could range between 0 and 6. A higher value indicates higher self-reported fear of spiders. We then calculated a differential value resembling the three-way interaction by subtracting the binding effect with spider mapping from the binding effect with cub mapping. Consequently, a positive value indicates a larger binding effect for the cub mapping. To test the hypotheses, we computed Pearson correlation coefficients between spider fear and the differential values for reaction times and error rates. Spider fear correlated neither with the difference of binding effects in reaction times, r = −.09, p = .351, BF01 = 5.24 (stretched beta prior width = 1.0 in JASP; JASP Team, 2023; Figure 3a), nor with the difference of binding effects in error rates, r = .07, p = .495, BF01 = 6.39 (Figure 3b).
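For readers who want to retrace the two analysis steps just described, the sketch below shows (a) the per-participant Tukey criterion used for trimming probe reaction times and (b) the frequentist correlation between spider fear and the cub-minus-spider difference in binding effects. All numbers are invented placeholders; the Bayes factors reported above were obtained in JASP and are not replicated here.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-trial data; column names are placeholders.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(5), 40),
    "rt": np.random.default_rng(2).normal(550, 80, 200),
})

def trim(group):
    """Tukey (1977) criterion: drop probe RTs more than 1.5 IQR above the third
    quartile of each participant's own distribution, and RTs below 200 ms."""
    q1, q3 = group["rt"].quantile([0.25, 0.75])
    upper = q3 + 1.5 * (q3 - q1)
    return group[(group["rt"] >= 200) & (group["rt"] <= upper)]

trimmed = df.groupby("participant", group_keys=False).apply(trim)

# Correlating spider fear with the cub-minus-spider difference in binding effects
# (both arrays below are invented placeholders, one value per participant).
spider_fear = np.array([0.5, 1.25, 3.0, 4.5, 2.0])
binding_diff = np.array([4.0, -2.5, 1.0, 3.5, -1.0])
r, p = pearsonr(spider_fear, binding_diff)
print(f"r = {r:.2f}, p = {p:.3f}")
```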
Discussion

In the current study, participants discriminated the orientation of gray bars, which could repeat or change their position. We found strong S-R binding effects between response and location, both in reaction times and error rates. However, presenting an image of a cub and an image of a spider, one of which was spatially congruent with the probe target position, prior to the probe display did not modulate the strength of binding effects. This effect also did not emerge when taking individual spider fear into account.

When participants respond to stimuli in a sequence, action control theories (e.g., Frings et al., 2020; Hommel, 2004) assume the occurrence of S-R binding. Such binding effects can be pronounced if attention is allocated to certain features (e.g., Hommel et al., 2014; Moeller & Frings, 2014; Singh et al., 2018). In the current study, we aimed to use attentional biases toward (Mogg & Bradley, 2006) or (subsequently) away from (e.g., Pflugshaupt et al., 2005; Rinck & Becker, 2006) spiders to shift spatial attention to certain locations. In fact, we found participants responding faster to a location at which a spider image had previously been presented. This suggests that attention had been shifted toward the aversive content, easing overall responding to a following target if spatially compatible. Yet, this attentional bias toward spider images (e.g., Mogg & Bradley, 2006) did not modulate S-R binding effects. Although higher attentional biases in high fear individuals when confronted with spider stimuli have been observed (e.g., Mogg & Bradley, 2006; Rinck & Becker, 2006), we did not find the effect of interest being modulated by individual fear of spiders.

If spider-phobic individuals are confronted with a spider stimulus, they have been found to attend to the threatening stimulus (e.g., Berdica et al., 2018; Mogg & Bradley, 2006) and subsequently avoid it (e.g., Koster et al., 2006) or even avoid it initially (Huijding et al., 2011). However, other studies have found that highly fearful individuals are slower to disengage from threatening images (e.g., Georgiou et al., 2005; Gerdes et al., 2008, 2009). Thus, we could have observed a larger S-R binding effect in the spider-congruent position, due to an attentional bias toward and slower disengagement from said area, or a larger S-R binding effect in the spider-incongruent position, due to attentional avoidance of said area. We did not find any modulation. However, the affective images in our study were presented for 200 ms, directly followed, and masked, by the probe target at one of two positions. Thus, probe target onset coincided with the disappearance of the affective stimuli, potentially easing attentional disengagement (cf. Machado-Pinheiro et al., 2013). Moreover, attentional biases toward threat have been found to decrease with time (e.g., Mogg & Bradley, 2006), and attentional avoidance can occur very fast (Koster et al., 2006; Mackintosh & Mathews, 2003); thus, because the probe target remained on screen until a response was given, on average more than 500 ms after affective image offset, attention may have disengaged and shifted to the other position, reducing the impact of an initial attentional bias toward the spider image. However, future studies might investigate whether a longer or even overlapping presentation of affective images with the probe target affects retrieval differently.
In the current study, spider fear was assessed via a short spider fear screening (SAS; Rinck et al., 2002), based on the DSM-IV criteria for spider phobia, which participants responded to after finishing the experiment. In other words, we did not build experimental groups of low and high fear based on diagnosed spider phobia. Additionally, prior to the experimental start, participants were informed that the study involved spider images; thus, individuals with extremely high fear of spiders or even diagnosed spider phobia might have avoided study participation in the first place. Although the short screening developed by Rinck et al. (2002) showed high internal reliability, we did not find a modulation by spider fear. However, future research might compare participants with diagnosed spider phobia with an undiagnosed or low-fear sample.

Related to the current study, Colzato et al. (2007) hypothesized that S-R binding is modulated by the dopaminergic system, as the latter is thought to play a role in the integration of action features (Schnitzler & Gross, 2005). Based on research that found positive and negative pictures activating and modulating the dopaminergic system (e.g., Dreisbach & Goschke, 2004), the authors hypothesized that this might modulate the strength of binding as well. The authors used a design in which participants gave two responses, one based on a previous cue and the second based on the discrimination of a specific stimulus dimension (see Hommel, 1998, 2004), which was the shape (Experiment 1), location (Experiment 2), or color (Experiment 3) of the stimulus. Crucially, 200 ms prior to the to-be-discriminated stimulus, a positive or negative image appeared at screen center. Congruent with their hypothesis, the authors found increased binding effects for positive image trials compared to negative image trials; however, this was only found when shape had to be discriminated. In the current study, we did not find larger binding effects for trials with positive mapping; however, note that in contrast to Colzato et al. (2007), who presented only one affective picture in a trial, we presented two pictures of different valence simultaneously (cf. Berdica et al., 2018; Schöpper, Jerusalem, et al., subm.); thus, effects of the dopaminergic system might have affected the pattern, but in all trials. Further, attention away from threat to safety (cf. Lange et al., 2004; Thorpe & Salkovskis, 1998) or a general attentional bias toward positive images (for a meta-analysis, see Pool et al., 2016) might have balanced out or at least drastically reduced an attentional bias toward spider images. Therefore, this could be a possible explanation for an absent modulation of attentional biases on retrieval. Future research could modulate attentional biases to positive and negative images differently (for example, by introducing trials with neutral images) to address this limitation of the current experimental design.

That spider images have diverging (interindividual) consequences concerning attentional biases has been shown multiple times (e.g., Berdica et al., 2018; Rinck & Becker, 2006). Yet, what we demonstrated here is that this kind of attentional weighting of locations due to individual fear of spiders does not have an impact on actions in terms of event file retrieval.
Figure 1. A potential trial sequence (not drawn to scale). This example depicts a trial in which the response changes while the location repeats (RCLR) with cub mapping, that is, the probe target appeared at the position that was previously occupied by an image of a cub. Photographs of animals (by the first author) are only used for illustration; the experiment used images from Dan-Glauser and Scherer (2011).

Figure 2. Calculated binding effects separately for cub and spider mappings in (a) reaction times and (b) error rates. Error bars represent the standard error of the mean. ***p < .001, n.s. = not significant.

Figure 3. Scatterplots depicting the value on the fear of spiders questionnaire (x-axis) and the differential value of the binding effect with cub mapping minus the binding effect with spider mapping in percent (y-axis) in (a) reaction times and (b) error rates.

Table 1. Mean reaction times in milliseconds and error rates in percent (in brackets) of probe responses as a function of response relation, location relation, and valence mapping
5,668.6
2023-05-01T00:00:00.000
[ "Psychology", "Biology" ]
Clustering of frequency spectra from different bearing faults using principal component analysis

In studies associated with defects in rolling element bearings, signal clustering is one of the popular approaches taken in an attempt to identify the type of defect. However, noise interference is one of the major issues affecting the effectiveness of the applied clustering method. In this paper, the application of principal component analysis (PCA) as a pre-processing method for hierarchical clustering analysis of the frequency spectra of vibration signals is proposed. To achieve this aim, vibration signals were acquired from operating bearings with different conditions and speeds. In the next stage, principal component analysis was applied to the frequency spectra of the acquired signals for pattern recognition purposes, while the Mahalanobis distance model was used to cluster the results from PCA. According to the results, it was found that the change in amplitude at the respective fundamental frequencies can be detected as a result of the application of PCA. Meanwhile, the application of the Mahalanobis distance was found to be suitable for clustering the results from principal component analysis. Notably, it was discovered that the spectra from the healthy and inner race defect bearings can be clearly distinguished from each other even though the change in the amplitude pattern of the inner race defect frequency spectrum was very small compared with the healthy one. In this work, it was demonstrated that the use of principal component analysis can sensitively detect changes in the pattern of the frequency spectra. Likewise, the implementation of the Mahalanobis distance model for clustering purposes was found to be effective for bearing defect identification.

Introduction

In any rotating machinery, the rolling element bearing is important in that it functions as both a thrust and a radial load bearer. Without bearings, the rotating shaft would be exposed to excessive vibration, which later leads to fatigue damage. An abrupt bearing failure has a massive impact on maintenance and operational costs. Therefore, it is essential to make sure that the bearing is consistently in pristine condition while it is operating. Besides, early bearing fault detection is vital in order to prevent failure as well as to reduce losses.
In industry, various techniques can be applied for the purpose of bearing condition monitoring, and one of the most common is vibration analysis [1,2]. In simple cases, the vibration behavior of a rolling element bearing can be predicted analytically. However, in more complex systems, the vibration produced by a rolling element bearing can be complicated as a result of geometrical imperfections from the manufacturing process, component instability, as well as the defect itself [3]. Apart from that, excitation frequencies from other components or other unidentified sources might also affect the vibration behavior. Consequently, the vibration signal produced is random, and it is difficult to detect the damage-related components. Over the past several decades, feature extraction and classification techniques have been widely used as an approach to address this issue. In general, the major aim of feature extraction analysis is to extract the hidden signal features within the complex signal that could lead to the detection of damage occurring in the monitored system. On one hand, features were extracted directly from the acquired signal by determining its statistical parameters [4][5][6][7] and fundamental frequency [3,8] parameters. On the other hand, decomposition methods such as wavelet analysis [9][10][11][12] and empirical mode decomposition [13][14][15] were applied before the features were determined. The idea of decomposing the signal is to filter out all the unrelated signal components initiated from both unidentified and unrelated sources during bearing operation.

In industrial applications, it is important to identify the damage in order to assess the bearing's fitness for service. This is vital for maintenance planning, and it is the main reason why an online monitoring system is needed in the first place. For damage identification purposes, several classification techniques have been applied to the extracted features. These include Discriminative Subspace Learning [16], Hierarchical Diagnosis Network [17], Support Vector Machine [18,19], Extreme Learning Machine [20], and artificial neural networks [21]. Despite the wide exploration of feature extraction and classification techniques, unavoidable problems such as a low signal-to-noise ratio due to the nonlinearity of the signal still remain a major challenge, even though a wide variety of feature extraction approaches have been taken to overcome them.

In this paper, the application of principal component analysis (PCA) as a pre-processing method for hierarchical clustering analysis of the frequency spectrum of the vibration signal is proposed. In the common approach, PCA is applied to reduce the dimension of the complex signal before the features associated with the presence of a defect are detected [22]. However, in this work, PCA was applied to identify the change in the frequency spectrum pattern, on the basis that the amplitude at the fundamental frequencies will change with the existence of a defect. In this study, the vibration signal was acquired from operating bearings with different conditions and speeds. In the first part, the response of the vibration amplitude at the respective fundamental frequencies to the occurrence of damage will be discussed. In the next stage, the application of principal component analysis as a feature extraction method and hierarchical clustering as a damage identification analysis will be demonstrated.
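As a point of reference for the "statistical parameters" mentioned above, the sketch below computes a few time-domain features that are commonly extracted from vibration records (RMS, peak, crest factor, kurtosis, skewness). It is only an illustration with a synthetic signal and does not claim to reproduce the exact feature sets used in [4]-[7].

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x):
    """A few statistical features commonly extracted from vibration signals."""
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,
        "kurtosis": kurtosis(x),   # impulsive defects typically raise kurtosis
        "skewness": skew(x),
    }

# Example with a synthetic record standing in for a measured acceleration signal.
x = np.random.default_rng(0).normal(size=20_000)
print(time_domain_features(x))
```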
Experimental setup

The test rig for this experiment was designed to investigate the failure and vibration characteristics of ball bearings. As illustrated in Figure 1, the shaft was driven by a variable-speed 0.37 kW, 50 Hz electric motor equipped with a controller to adjust the motor speed. A flywheel was installed at the middle of the spindle in order to apply load to the shaft and, at the same time, minimize the speed oscillations of the shaft. A spring coupling was used to connect the motor and shaft to minimize shaft alignment error. The front side of the shaft (near the motor) was fitted with the tested bearing, and the vibration response was measured here, while a good bearing was fitted on the other side. In this study, a set of good bearings and another three bearings with different types of defect, namely corroded, point defect, and outer race defect, were tested. The angular speed was set to 10%, 50% and 90% of the maximum motor speed, and shortly after the test commenced, the vibration time response was acquired using a Bruel & Kjaer (B&K) 4506B accelerometer. The time series of the vibration (acceleration) response was acquired with a sampling frequency of 20 kHz (Δt = 0.039 ms).

Frequency spectrum clustering

In this work, 16 signals each from the healthy, corroded and outer race defect bearings were selected for clustering analysis, together with 12 signals from the point defect bearing. Before starting the clustering process, the acquired time domain responses from the different types of bearings were converted into frequency spectra through Fast Fourier Transform (FFT) analysis. Direct observation of the peak pattern at the respective fundamental frequencies was made to identify the damage. The fundamental frequencies were calculated based on the equations from [3]. Even though the fundamental frequencies could be observed, it is important to reveal the differences in structure of all the collected frequency spectra from the different bearing conditions because, in some cases, the change in pattern is too small to be observed. To achieve this aim, principal component analysis was applied. Principal component analysis implies an eigenvalue decomposition of the covariance matrix of the multiple datasets. In this work, one frequency spectrum was considered as one dataset containing n samples. Moreover, every dataset was normalized by its mean value before the covariance matrix was obtained. In general, the PCA process can be represented in matrix operations as shown by [23] in equations (1) to (3), where in equation (3), γ and the vector v are the eigenvalue and eigenvector (principal component), respectively. The result from PCA is represented in a scatter plot to show how the numerous datasets scatter based on their patterns. To classify these numerous datasets, a hierarchical clustering approach was taken, in which the distance between datasets is measured prior to the clustering process. In this work, the Mahalanobis distance model as shown in equation (4) [24] was selected for the distance measurement due to the nature of the principal components, which scatter in oval shapes when the datasets are strongly related [25].
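The fundamental defect frequencies referred to above (BPFO, BPFI, BSF, and the cage frequency) follow the classical kinematic formulas for a rolling element bearing. The sketch below is a minimal helper for computing them; the geometry values in the example call are hypothetical and are not the parameters of the bearing used in this study (those appear in its Table 1).

```python
import numpy as np

def bearing_frequencies(rpm, n_balls, ball_d, pitch_d, contact_angle_deg=0.0):
    """Classical defect frequencies of a rolling element bearing, in Hz."""
    fr = rpm / 60.0                                   # shaft rotation frequency
    ratio = (ball_d / pitch_d) * np.cos(np.radians(contact_angle_deg))
    return {
        "BPFO": 0.5 * n_balls * fr * (1 - ratio),              # ball pass freq., outer race
        "BPFI": 0.5 * n_balls * fr * (1 + ratio),              # ball pass freq., inner race
        "BSF": (pitch_d / (2 * ball_d)) * fr * (1 - ratio**2), # ball spin frequency
        "FTF": 0.5 * fr * (1 - ratio),                         # fundamental train (cage) freq.
    }

# Hypothetical geometry (mm) at the 287 rpm operating point discussed below.
print(bearing_frequencies(rpm=287, n_balls=9, ball_d=7.9, pitch_d=34.5))
```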
Results and discussion

Frequency spectra

In this paper, because the pattern of the results is the same for all rotational speeds, only the results from the test with a rotational speed of 287 rpm are presented. As explained earlier, the time series of the vibration signals acquired from the rolling element bearings with different conditions were converted into frequency domain signals, and these signals are illustrated in Figure 2, while the theoretical fundamental frequencies are shown in Table 1. Based on Figure 2, it is clear that the ball spin frequency, BSF, and the ball passing frequency outer race, BPFO, appear distinctly in the frequency spectra of the healthy as well as the defective bearings. In contrast, the ball pass frequency inner race, BPFI, is barely visible in the frequency spectra of the corroded bearings, whereas the opposite trend is shown in the frequency spectra of the other types of bearing.

From deeper observation, it was found that a high acceleration amplitude occurs at BPFO for the outer race defect bearing. In addition, among all the frequency spectra from the healthy and defective bearings, the acceleration amplitude was highest at BPFI for the point defect bearing. Based on these results, it was confirmed that the presence of a specific defect increases the vibration amplitude at the corresponding fundamental frequencies. Previous findings [8] have also demonstrated this phenomenon. In contrast, the spectra of the corroded bearings show high amplitude values at all fundamental frequencies. This is probably due to the uniform nature of the defect itself.

Fig. 2. Frequency spectrum for bearings rotating at 287 rpm.

Frequency signal classification

As discussed in the previous section, it is clear that the frequency spectrum from each bearing condition shows a significant difference in its structure. To cluster all of these spectra, principal component analysis was applied to a set of frequency spectra consisting of the healthy, point defect, outer race defect and corroded bearings, and the results are shown in Fig. 3. Fig. 3(a) illustrates the overall scatter plot of principal component 1 and principal component 2, while Fig. 3(b) shows the zoomed part. According to the results in both sub-figures, the principal components (PC) of the frequency spectra scatter into four different groups. The scattered data of each group form a population of ellipsoidal shape. This scatter trend occurs due to the similarities in the patterns of the tested datasets [26]. In other words, it is strongly believed that each ellipsoidal-shaped group of scattered datasets belongs to the same type of bearing. To confirm this claim, a clustering process is needed; for this process, the Mahalanobis distance was calculated in order to cluster the PCs from the different bearing types, and the clustering result is shown in the dendrogram plot in Fig. 4. According to the figures, based on the Mahalanobis distance, it is clear that the data have been regrouped into four major clusters. Yet, datasets 49 and 54 were identified as outliers. Table 2 is provided to simplify the representation of the dendrogram. As shown in the table, the datasets belonging to the corroded, outer race defect, healthy and point defect bearings were registered in clusters 1, 2, 3, and 4, respectively. Meanwhile, two datasets belonging to the corroded bearing were found to be outliers.
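A compact sketch of the processing chain described in the two sections above (FFT of each record, mean normalization, projection onto the first two principal components, and hierarchical clustering of the scores with Mahalanobis distances) is given below. It uses synthetic stand-in signals and standard library routines (scikit-learn, SciPy) rather than the authors' own implementation, so it should be read as an illustration of the method, not as a reproduction of their results.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
fs = 20_000                                    # sampling frequency used in the paper
signals = rng.normal(size=(60, fs))            # synthetic placeholders, 1 s records

# One single-sided amplitude spectrum per record; each spectrum is one "dataset".
spectra = np.abs(np.fft.rfft(signals, axis=1)) * 2 / signals.shape[1]

# Mean-normalize each spectrum, then project onto the first two principal
# components (PCA performs the eigendecomposition of the covariance matrix).
spectra -= spectra.mean(axis=1, keepdims=True)
scores = PCA(n_components=2).fit_transform(spectra)

# Hierarchical clustering of the PC scores with Mahalanobis distances.
dist = pdist(scores, metric="mahalanobis")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")   # expect four bearing conditions
print(labels)
```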
Conclusions

According to the results, it was found that the vibration amplitude at the Ball Passing Frequency Outer Race and the Ball Passing Frequency Inner Race increases in line with the presence of an outer race defect and an inner race defect, respectively. Moreover, the overall amplitude of the vibration spectrum was found to increase uniformly in the case of the corroded bearing, due to the widespread uniform corrosion of the entire bearing. By applying principal component analysis, the change in amplitude at any of these fundamental frequencies can be detected. Meanwhile, the application of the Mahalanobis distance was found to be suitable for clustering the results from principal component analysis. Notably, it was discovered that the spectra from the healthy and inner race defect bearings can be clearly distinguished from each other even though the change in the amplitude pattern of the inner race defect frequency spectrum was very small compared with the healthy one. To conclude, it was demonstrated that the use of principal component analysis can sensitively detect changes in the pattern of the frequency spectra. This is believed to give more options for detecting damage from changes in the signal pattern, apart from decomposing the signal. Likewise, the implementation of the Mahalanobis distance model for clustering purposes was found to be effective for bearing defect identification.

Fig. 3. Classification of frequency spectra from different bearing conditions using principal component analysis: (a) overall; (b) zoomed part.

Table 1. Fundamental frequencies of the rolling element bearing rotating at 287 rpm.
2,998.8
2017-01-01T00:00:00.000
[ "Materials Science" ]
DEVELOPMENT OF A MODEL FOR CHOOSING STRATEGIES FOR INVESTING IN INFORMATION SECURITY

This paper has proposed a model of the computational core for a decision support system (DSS) used when investing in projects of information security (IS) of objects of informatization (OBI), including those OBI that can be categorized as critically important. Unlike existing solutions, the proposed model deals with decision-making issues in the ongoing process of investing in projects to ensure OBI IS by a group of investors. The calculations were based on bilinear differential quality games with several terminal surfaces. Finding a solution to these games is a big challenge, because the Cauchy formula cannot be applied to bilinear systems with arbitrary strategies of the players, including immeasurable functions. This gives grounds to continue research on finding solutions in the event of a conflict of multidimensional objects. The result was an analytical solution based on a new class of bilinear differential games. The solution describes the interaction of objects investing in OBI IS in multidimensional spaces. The modular software product "Cybersecurity Invest decision support system" (Ukraine) for the Windows platform is described. Applied aspects of visualization of the results of calculations obtained with the help of the DSS have also been considered. The Plotly library for the Python algorithmic language was used to visualize the results. It has been shown that the model reported in this work can be transferred to other tasks related to the development of DSS in the process of investing in high-risk projects, such as information technology, cybersecurity, banking, etc.

Keywords: Smart City, optimal funding strategies, decision support, Python, Plotly library

Introduction

Providing information security (IS) is a complex and costly task. In addition to costly investments, there are some contradictions to be resolved. First, there is a contradiction between the availability of information resources (IR) and the required degree of protection. This is especially true for distributed computing systems (DCS). Second, the over-expansion of information protection tools leads to a decrease in the ease of IR use. Third, there is a contradiction between the interests of the party operating the IS tools, focused on predictable parameters of the efficiency of IS systems, and the companies that develop hardware and software solutions for IS. It is no secret that a number of manufacturers in the field of IS actively advertise the innovation of their solutions. As a result, the user a priori overpays for excess functionality or is forced to constantly increase the performance of the systems, adapting them to the requirements of developers.

The increase in the scale and number of successful cyberattacks [1] and the growing rate of computer crime have become a global trend. The objective need to address the multi-criteria optimization task of managing resources allocated to information security is so acute that decision-makers (DMs) are forced to act in dynamically complex situations. Such situations are caused by the ever-changing landscape of cyber threats, the increasing complexity of cyberattacks, the variability of scenarios used by the attacker to carry out attacks, etc. In a dynamically changing situation, the side protecting various objects of informatization (OBI) has to make difficult decisions, which, in general, can be characterized by the following features. First, in order to achieve the goals of IS, the defense side has to take many decisions (for example, technical, organizational, financial, etc.), and each of these decisions must be seen in the context of the rest. Second, decisions made to provide OBI IS are almost always dependent on each other; such solutions are interconnected (for example, the connection can be direct, stochastic, indirect, etc.). Third, the external environment of the OBI can change both under the influence of external factors, for example, with the general decline in protection due to targeted attacks, and as a result of the decisions made.

Under such conditions, the complexity of the multicriteria optimization task of resource management by the side that ensures OBI IS is determined by the multidimensional composition of the information protection tools (IPT) and the complexity of the distributed OBI computing structures. It is obvious that the potential of intelligent decision-making support systems (hereafter DSS) needs to be harnessed in the process of solving such a problem. Such modular [2,3] or clustered [4] DSS in OBI IS management tasks can be used as a set of interconnected systems. Such DSSs are usually based on synergistic ensembles of methods and models. One such ensemble of methods and models is extremely important in a sub-task of OBI IS management, namely the task of finding a rational strategy for investing in information protection tools for a distributed computing system (DCS) of an OBI. Indeed, the DM needs to prioritize the investment of financial resources (FR) in such areas of development of the DCS IS as [5,6]: 1) ensuring the cyber-resilience of OBI; 2) innovative technologies in the tasks of monitoring the risk indicators of the implementation of information threats and ensuring the required level of OBI IS; 3) IS culture; 4) IS of the DCS infrastructure or the OBI in general; 5) safety of applied software (SW); 6) security of data processing technologies; 7) other.

Note that, as shown in [6,7], innovations are not always beneficial in the specialized segment of the IS products and services market. Advances in the field of IS are most often the result of investments in the development and acquisition of new knowledge and the development of ideas to update the composition of IS systems. The innovation process in the field of IS is based on a complex system of mutually agreed and interconnected activities. In addition, the resources available to investors are important: financial, organizational, scientific, technological, and manufacturing. Thus, innovative projects in the field of IS can be categorized as a set of mutually agreed goals and programs aimed at improving the effectiveness of the IS system of a particular OBI. It is noted in [8] that the probability of losses arising from a wrong strategy of investing the company's financial resources in IS is quite high, although it remains a fact that the field of IS by its nature does not have to be overly innovative. A successful solution to the task of choosing a rational strategy to invest in the information security of an OBI has become the basis for a successful business. This is particularly evident in the experience of successful IS deployment projects for innovative companies. However, it is not enough to have sufficient financial resources (FR) to implement OBI IS projects. It is also necessary to have a toolset to predict and evaluate the options of strategies for investing FR in the project. As noted above, effective support for solutions in such projects is not complete without the use of IT and, specifically, DSS. The computational core of such a DSS takes on all the routine work of finding analytical solutions to multicriteria optimization tasks. For example, in the context of the problem considered, it is possible to constructively define rational strategies for the allocation of FR to complex OBI IS projects. With the help of the intellectualized DSS, it is easier for the DM to determine which of these areas of IS [8,9] is a higher priority for the investment of FR during the forecast assessment.
Note that in fact in such situations the rate of return of the invested FR for the defense side will be different. All of the above dictates the need to intellectualize the search for rational strategies for investing in such complex projects as ensuring the information security of the object of informatization. And, without the appropriate computer support to make such risky decisions, DM may find it difficult to manage them. Literature review and problem statement In paper [10], the authors note that not all innovations positively affect the market for investment in the development of IS hardware and software. This leads to disagreements among experts about their expediency. Which is a definite drawback of this approach. Paper [11] notes that IS investment projects can be seen as a system of interconnected goals and programs on IS. The system approach is an advantage of this approach. However, this statement was not further developed in the cited paper. It is shown in [12] that achieving a predefined level of OBI IS depends on the successful solution to a whole range of tasks: financial, design, manufacturing, organizational, research, commercial, etc. Systemic character is undoubtedly the advantage of this approach. However, the paper does not provide an estimate of the potential of using DSS in such tasks related to the field of IS. The GL model, proposed in works [13,14], has become one of the main models used to evaluate investments in OBI IS. However, the GL model, and its modifications [15,16], exclude the possibility of considering real mechanisms for taking into consideration the interests of investors in the formation of the structure of the IS system. This significantly limits the practical aspects of the application of the model and the objectivity of the findings. The theoretical aspects of mathematical support for decision-making in the course of choosing a rational strategy for investing in IS are considered in [17,18]. However, these works do not describe the software implementation of those models. This makes it difficult to put the models reported in those works into practice. It is noted in [19] that the category of software products such as DSS and expert systems (ES) facilitates the task of finding rational strategies for investors in the field of IS. The authors do not give specific examples of the use of such systems in practice. Work [20] analyzes different approaches in terms of the mathematical apparatus used in such models. However, the work does not address examples of these models being implemented in practice. The authors of [21] describe the application of classical economic and mathematical models. However, in most situations related to investment appraisal, these models do not take into account many parameters of investing in complex projects in the field of OBI IS. The DSS to select investor strategies were analyzed in [22]. It is shown that the main drawback of such software products is low informative results. In addition, it is difficult to assess the prospects of investment projects and options for investors in the field of OBI IS. It is shown in [23] that there is no universal method of multicriteria optimization of the distribution of FR allocated for the construction of the contours of the IS distributed computing systems for OBI. This means that the solutions identified by the task, the computational core of the DSS, must include an ensemble of models. 
That has predetermined the relevance of the development of new models and software products in the DSS segment in the task of evaluating investor strategies for the IS of specific OBI. The software product being developed would be able to support decision-making procedures as they search for rational strategies for continuous investment by a group of investors in complex infrastructure projects related to the IS of OBI distributed computing systems. The aim and objectives of the study The aim of this study is to develop a model for a DSS computational core used in the process of selecting strategies for investing in information security. To accomplish the aim, the following tasks have been set: -to find the best strategies for investors and their sets of preferences in a bilinear differential quality game with several terminal surfaces for the procedure of investing in information security; to perform computer simulation of the selection of strategy to invest in the information security of an object of informatization. Materials and methods to study strategies for investing in information security The following research methods were used: game theory methods to synthesize new models of the computational core for a decision support system in order to select a rational financial strategy for investing in the information security of objects of informatization; methods of dealing with bilinear differential quality games with multiple terminal surfaces in order to find areas of investor preference. The practical implementation of the proposed model is based on the paradigm of object-oriented programming when implementing the modular software product "Cybersecurity Invest decision support system" (Ukraine) for the Windows platform. In addition, the visualization of the results obtained using DSS to describe the interaction of objects in multidimensional spaces was performed on the basis of the Plotly library for the Python algorithmic language. 1. Finding investor strategies based on the bilinear differential quality games with multiple terminal surfaces Problem statement. Two groups of investors (players) manage a dynamic system in multidimensional spaces. Groups of players have different strategies in their approach to investing in OBI IS. For example, one group acts based on prioritizing the paradigm of innovation in IS systems for OBI. At the same time, new and new hardware is needed. The second group justifies more pragmatic approaches. This approach of investors assumes the investment of financial resources in IS systems, which do not suffer from excessive demands on system resources. The dynamic system (DS) is set by a totality of bilinear differential equations with dependent movements. The sets of strategies (U) and (V) of player groups are specified for DS. In addition, the S 0 , F 0 terminal surfaces are defined for DS. The goal of the first group of players (hereafter Inv1) is to bring DS through their management strategies to the terminal surface S 0 . And this should be achieved regardless of the actions of the second group of players (hereafter Inv2). Inv2's goal is to bring DC through its management strategies to the terminal surface F 0 , regardless of Inv1's actions. The problem's statement generates two tasks. This is, respectively, a task on the part of the first ally player and on the part of the second ally [24]. Given the symmetry of the task for allied players, it can only be considered from the perspective of the first ally player. 
The solution is to find the sets of the players' initial states. It is also necessary to define their strategies. The strategies would allow the players to bring the DS to one or another terminal surface. The players have certain financial resources (FR) to invest in OBI IS projects, for example, building multi-contour protection of a distributed computing system. We assume that Inv1 has a set g(0) = (g1(0), ..., gn(0)) of FR, where gi(0) is the FR for the development of the i-th IS system for the OBI. In turn, Inv2 has p(0) = (p1(0), ..., pn(0)), where pi(0) is the FR for the development of the i-th IS system for the OBI; p(0) is a vector of n-dimensional space with positive elements. These sets determine the predicted FR values (hereafter FinR) of the players at the moment t = 0 for each new OBI information security system. We shall describe the dynamics of change in the players' FinR in the following way: introducing the necessary designations, the system of differential equations of the model takes the corresponding form. If condition (2) is met, we consider that the financing procedure for the OBI IS project under review has been completed; in this case, Inv2 did not have enough FR to continue the continuous investment procedure, at least for one of the IS projects. If condition (3) is met, we consider that the continuous procedure of investing in IS projects has also been completed; in this case, Inv1 did not have enough FR to continue the continuous investment procedure, at least for one of the OBI IS projects. If neither condition (2) nor condition (3) is met, the continuous investment procedure for the IS projects of the object of informatization continues.

The process of the continuous investment procedure within the framework of the positional differential game scheme with full information was previously considered in works [18,24]. As already noted, due to symmetry, we shall confine ourselves to considering the task from the Inv1 standpoint; the second task can be solved in a similar way. Defining the pure strategy and the set of preferences of Inv1 was reported in studies [18,24]. The first task's solution is to find Inv1's "preferred" sets; the optimal strategies for Inv1 are also defined. Similarly, the task is set and solved from the point of view of Inv2. Let us give the conditions under which the solution to the game is derived; that is, in the process of solving it, it is necessary to find the "preference" sets W1 and the optimal strategies for Inv1. These conditions can be set by matrix inequalities (cases 1-5), with case 5 covering all other variants of the ratios of these matrices' elements. Let us introduce additional designations. Considering these designations, for case 1, the set of preferences W1 is determined accordingly, and the best strategy for the first player is U*(t) = E*. For all cases except the first, the sets of preferences of the first player (Inv1) and his optimal strategies are found similarly. The solution to the problem on the part of the second ally player is found in the same way.

2. Computer simulation of selecting the strategy of investing in the information security of an object of informatization

The models described in the previous chapter were implemented in the DSS module "DSS Cybersecurity Invest" (Ukraine), which is designed both for use on a regular PC and for the visualization of the results online through any browser. The bulk of the modules were written in the C# programming language.
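Before describing the DSS modules, a purely illustrative numerical sketch of the kind of interaction discussed in the problem statement is given below. The paper's actual system of differential equations, designations, and conditions (2)-(3) are not reproduced in this excerpt, so everything that follows is a hypothetical toy discretization: each player's spending is proportional to its current resources (a bilinear control-times-state term), a small fixed operating cost is added so that the process terminates, and the loop stops when either player's resources are exhausted, in loose analogy to reaching a terminal surface. None of the matrices or rates below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3                                   # number of IS projects (hypothetical)
g = np.array([5.0, 4.0, 6.0])           # Inv1 resources g(0), placeholder values
p = np.array([4.5, 5.0, 3.0])           # Inv2 resources p(0), placeholder values
R1 = 0.9 * np.eye(n)                    # hypothetical return matrix for Inv1
R2 = 0.7 * np.eye(n)                    # hypothetical return matrix for Inv2
cost = 0.3                              # hypothetical fixed operating cost per project
h = 0.05                                # Euler step

for step in range(5000):
    u = np.full(n, 1.0)                 # Inv1 invests the full share of each component
    v = rng.uniform(0.4, 0.8, n)        # Inv2 control shares in [0, 1]
    spend_g, spend_p = u * g, v * p     # bilinear terms: control x state
    g = g + h * (R1 @ spend_p - spend_g - cost)
    p = p + h * (R2 @ spend_g - spend_p - cost)
    if np.any(p <= 0):                  # analogue of condition (2): Inv2 runs out of FR
        print(f"Inv2 exhausted on project {int(np.argmin(p))} at step {step}")
        break
    if np.any(g <= 0):                  # analogue of condition (3): Inv1 runs out of FR
        print(f"Inv1 exhausted on project {int(np.argmin(g))} at step {step}")
        break
else:
    print("Neither terminal state reached within the horizon")
```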
The developed "DSS Cybersecurity Invest" DSS consists of several subsystems. The modular architecture of the DSS has allowed it to be implemented in a fairly flexible way. Thus, developers and the operating party have the option, if necessary, to complement the original DSS architecture with new functional modules. The software implementation of the "DSS Cybersecurity Invest" DSS is in the style of the MDI application. Thus, an expert, or another interested person, can simultaneously work with all the windows of a given software product, Fig. 1. The "DSS Cybersecurity Invest" functional modules enable to solve the following local tasks in supporting decision-making related to multicriteria optimization of OBI IS investment strategies. The purpose of the modules is as follows: Module 1 -The hierarchy analysis method is used in the first phase of the "DSS Cybersecurity Invest" DSS to expertly evaluate specific class information protection systems. The module is based on the application of the T. Saaty method and can be used by experts as an independent software product, and as part of the DSS to choose the best IPT options for DCS nodes. Module 2 is based on alternative algorithms (linear, modified dynamic programming, genetic, etc.) to determine the active composition of IPT for a DCS node of the object of informatization. The algorithms and related models are described in detail in works [18][19][20][21][22][23][24]. As a result of the operation of modules 1 and 2, an expert working with the "DSS Cybersecurity Invest" DSS would receive the final IPT sample for a DCS node on the right side of the window of module 2. Module 3 is designed to select a strategy for investing in OBI IS. The model used in a given DSS module is detailed above. A distinctive feature of this module is the possibility to visualize the results it receives through any browser online. The calculations were made for investment projects in different options for investment strategies at the Aktau Sea Port (Kazakhstan). The original modeling data are given in Table 1. The calculation results are shown in Table 2. Graphic dependences of the preference set W 1 for the first investor in IS for the cases of 3, 4, and 5 variables are demonstrated in Fig. 2-4. Table 1 Fragment of the original data table Table 2 Fragment of the table with the results of modeling the area of preference of the first investor and his investment strategy Here, T is the time during which the first player would bring the state of the system to its terminal surface with the data under the corresponding number in the table. It should be noted that due to the bilinearity of the system of differential equations and the multidimensionality of the considered problem, it is not possible to find the sets of preferences to other approaches for investors. For 4, 5, and 6 dimensional charts, one can emulate the depth of visualization with the Plotly library for Python by varying colors, size, or shape of markers. The Delta0(P0) parameter describes the value of the first investor's FR spent on bringing a dynamic system to its terminal surface. The points make it possible to determine the set of preferences of a first investor in IS. Here's how it works. As one knows, each point is a component set that characterizes the FR of investors. The component set, which is the first investor's FR, corresponds to a set of the components representing the second investor's FR. There may be several such component sets. 
Some of these sets, together with the first investor's FR component sets, belong to a set that guarantees the continuation of the process of investing in IS projects. Part of them belongs to the set in which a second investor cannot continue investing. Then, by choosing the minimum of these values (for each component), we obtain for each FR of the first investor a set that belongs to the set of preferences of the first investor. In Fig. 3, the light shade of the markers corresponds to the lower Inv2 interest rate for the financial investments and the return on investment share of Inv2 in relation to Inv1's investments in OBI IS projects. Fig. 4 provides further confirmation of the possibility of graphic interpretation in spaces of greater dimensionality than three. The essence of the interpretation is the same as in Fig. 3. The size of the marker in Fig. 4 makes it possible to visualize the fifth dimension; we used the marker size parameter of the Scatter3D function of the Plotly library. The markers' shapes are convenient for visualizing project categories as part of the search for a rational OBI IS strategy. Round markers correspond to the category of projects to develop the security of applied software. Diamond markers correspond to investment in the security of data processing technologies. Markers in the form of a plus sign (+) correspond to investing in the risk control of the implementation of information threats and providing the required level of OBI IS, etc. Our simulation results show the effectiveness of the proposed toolkit to solve the task of continuous management of the FR of the parties, taking into consideration the multi-factor nature of investment in the OBI IS systems, using Aktau Sea Port as an example. Discussion of results of modeling the choice of the strategy of investing in the information security of an object of informatization. A discrete-approximation method was used to solve the problem in question [24], which has made it possible to solve it in the case where known approaches, such as the first direct method by Pontryagin [26,27] and the alternative integral method [28][29][30], cannot be applied. This is due to the impossibility of using the Cauchy formula to find a solution to the system of differential equations. Approaches designed to address positional differential games that build "stable bridges" to find the best strategies for players [27,28] are also not applicable to this task, as the task allows arbitrary player controls, including non-measurable functions, which cannot be handled by the approaches given in [26,31]. This makes it possible to obtain meaningful results in cases where widespread methods do not work. The graphic interpretation depicting the set of points in the DSS's online charts is consistent with the investment model, in which it is assumed that the first investor can use the FR determined by the specified sets of these resources. These sets of FR can be determined by the choice of specific investment programs. For example, these may be programs to develop new technologies in the tasks of monitoring the risk indicators of the implementation of information threats and ensuring the required level of OBI IS, etc. As in Fig. 2 and 3, we also give sets of points that characterize the FR of the first and second investors. The essence of interpretation for Fig. 3 and Fig. 2 remains the same.
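The higher-dimensional visualization described above can be sketched with Plotly's Scatter3d trace: the first three FR components map to the axes, and additional quantities are encoded by marker colour, size, and symbol. The data, column meanings, and parameter values below are illustrative placeholders rather than the DSS's actual output.

```python
import numpy as np
import plotly.graph_objects as go

rng = np.random.default_rng(1)
n = 60
g1, g2, g3 = rng.uniform(0.1, 1.0, (3, n))   # three FR components -> x, y, z axes
delta0 = rng.uniform(0.0, 1.0, n)            # 4th dimension -> marker colour (e.g. Delta0(P0))
t_reach = rng.uniform(1.0, 5.0, n)           # 5th dimension -> marker size (e.g. time T)
category = rng.choice(["circle", "diamond", "cross"], n)  # 6th -> project category

size = 4 + 6 * (t_reach - t_reach.min()) / (t_reach.max() - t_reach.min())
fig = go.Figure(go.Scatter3d(
    x=g1, y=g2, z=g3, mode="markers",
    marker=dict(size=size, color=delta0, colorscale="Viridis",
                symbol=category, showscale=True)))
fig.update_layout(scene=dict(xaxis_title="g1", yaxis_title="g2", zaxis_title="g3"))
fig.show()  # renders in a browser or notebook
```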
However, let us repeat that the choice of this method of illustrating the set of preferences of the first investor allows for graphic illustration in spaces of greater dimensionality than three. It is impossible to use more than three dimensions directly, and a workaround has been found for the online platform of the "DSS Cybersecurity Invest" DSS. For 4, 5, and 6 dimensional charts, one can emulate the depth of visualization with the Plotly library for Python by varying colors, size, or shape of markers. The identified drawback of the model is the fact that the data acquired by the "DSS Cybersecurity Invest" DSS did not always coincide with the actual data when choosing investment strategies in OBI IS. Note that compared to existing models [13-17, 20, 22, 25], the proposed solution improves the predictability for an investor. The quantitative effect of the developed model is to determine the rational value of resources for the implementation of investment programs in OBI IS. The qualitative effect is that decision-makers have the opportunity to conclude whether it makes sense to start investing or not, depending on the resources available, both their own and the potential investor's. The core of the mathematical model of mutual investment in OBI IS is the bilinear differential quality game. It should be noted that the methods for solving linear differential games are not applicable to solving such games [26][27][28], as the Cauchy formula is not applicable to finding a solution to the system of bilinear differential equations. In addition, for such games, the methods of solving positional differential games proposed in [29,30] are not applicable. This statement is true even though the conditions of existence of the value of the game are met here. This is due to the fact that if players use non-measurable controls, methods for solving positional games cannot be applied in this case. Our work has found an analytical solution for a multidimensional case, which is very difficult; usually, only sufficiency conditions for the existence of the solution to the game are formulated. It seems promising to further study the presented model for solving investment tasks within the framework of a fuzzy information scheme, for example, in industrial, energy, and other sectors of the economy. Conclusions. 1. A model has been developed for the computing core of DSS in the course of investing in various projects related to the information security of objects of informatization. The model is built on a system of bilinear differential quality games with several terminal surfaces for the task of making a decision during the continuous process of investing in OBI IS projects by a group of investors. An analytical solution has been obtained, which is based on a new class of bilinear differential games describing the interaction of objects in multidimensional spaces. 2. Computer modeling of the process of choosing strategies for investing in OBI IS has been carried out. The applied aspects of visualization of the results of calculations for the online platform based on the Plotly (Python) library are considered. The resulting solution for the DSS online platform has made it possible, during the computer simulation of investment strategies in OBI IS, to visually describe the procedure of finding rational strategies for groups of investors.
7,026.6
2021-04-30T00:00:00.000
[ "Computer Science" ]
Recognition of Imbalanced Epileptic EEG Signals by a Graph- Based Extreme Learning Machine Epileptic EEG signal recognition is an important method for epilepsy detection. In essence, epileptic EEG signal recognition is a typical imbalanced classification task. However, traditional machine learning methods used for imbalanced epileptic EEG signal recognition face many challenges: (1) traditional machine learning methods often ignore the imbalance of epileptic EEG signals, which leads to misclassification of positive samples and may cause serious consequences and (2) the existing imbalanced classification methods ignore the interrelationship between samples, resulting in poor classification performance. To overcome these challenges, a graph-based extreme learning machine method (G-ELM) is proposed for imbalanced epileptic EEG signal recognition. The proposed method uses graph theory to construct a relationship graph of samples according to data distribution. Then, a model combining the relationship graph and ELM is constructed; it inherits the rapid learning and good generalization capabilities of ELM and improves the classification performance. Experiments on a real imbalanced epileptic EEG dataset demonstrated the effectiveness and applicability of the proposed method. Introduction Epilepsy is a common neurological disease that can cause recurrent seizures. During seizures, injury or life-threatening events may occur owing to the distraction or involuntary spasms of the patient [1,2]. In the clinical diagnosis of various seizures, electroencephalogram (EEG) signal detection plays a crucial role [3]. This is because the epileptic brain releases characteristic waves during seizures. In recent years, an increasing number of machine learning-based methods have been applied for epileptic EEG signal recognition [4][5][6][7][8]. Figure 1 illustrates a machine learning method-based system for epileptic EEG signal recognition. The figure shows that an epileptic EEG signal recognition system involves the following three main steps: (1) a feature extraction method is used on original epileptic EEG signals for training and testing, (2) EEG signals after feature extraction for training are used to train the machine learning-based model to build an epileptic EEG signal recognition system, and 3) EEG signals after feature extraction for testing are then inputted into the epileptic EEG signal recognition system for detection. Previously, many machine learning methods have been proposed for epileptic EEG signal recognition, such as the naive Bayes method (NB) [9], K-nearest neighbor (KNN) [10], support vector machine (SVM) [11], fuzzy system [12,13], and extreme learning machine (ELM) [14,15], and they have shown good effectiveness. In essence, epileptic EEG signal recognition is a typical imbalanced classification task [16,17]. Compared with negative samples (people without epilepsy), positive samples (patients with epilepsy) have extremely low representation and cannot be well classified by traditional classifiers. Although the misclassification of positive samples has little effect on the model accuracy, it may cause serious medical malpractice. 
Therefore, traditional machine learning methods face several critical challenges for recognition of imbalanced epileptic EEG signals: (1) traditional machine learning methods often ignore the imbalance of epileptic EEG signals and misclassify positive samples, which may cause serious medical malpractice, and (2) existing imbalanced classification methods ignore the interrelationship between samples, resulting in poor classification performance. Therefore, building a classifier that considers the imbalance of the epileptic EEG signals and additional knowledge of samples becomes imperative for classification of imbalanced datasets with epileptic EEG signals. To overcome these challenges, a novel imbalanced epileptic EEG signal recognition method based on a graph and ELM is proposed in this study. ELM has become a classical machine learning method with its solid theoretical foundations, fast training speed, and good predictive performance [18,19]. Although ELM can universally approximate to any continuous functions, it is not effective for classifying imbalanced datasets. Therefore, it is necessary to adopt strategies to make ELM correctly classify positive samples to obtain a reasonable classification result of an imbalanced dataset. Previously, numerous imbalanced ELM-based methods have been proposed. For example, Zong et al. [20] proposed the weighted extreme learning machine (WELM), which pioneered the application of ELM in imbalanced classification. Similarly, Zhang and Ji [21] proposed a fuzzy ELM (FELM), which regulated the distributions of penalty factors by inserting a fuzzy matrix. Yu et al. [22] proposed a special costsensitive ELM (ODOC-ELM) for imbalanced classification problems. Li et al. [23] proposed an ensemble WELM algorithm based on the AdaBoost framework to learn the weights of different samples adaptively. Yang et al. [24] proposed a novel ELM-based imbalanced classification method by estimating the probability density distributions for two imbalanced classes. Shukla and Yadav [25] combined CC-ELM with WELM to propose a regularized weighted CC-ELM. Xiao et al. [26] proposed an imbalanced ELM-based algorithm for two classes of classification tasks by solving each class classification error. Du et al. [27] proposed an online sequential extreme learning machine with under-and oversampling(OSELM-UO) for online imbalanced big data classification. In addition, some ELM-based imbalanced methods, such as ensemble weighted ELM [28], classspecific cost regulation ELM [29], label-weighted extreme learning machine [30], and class-specific ELM [31], have also been proposed. However, to the best of our knowledge, there is no study that uses imbalanced ELM methods for epileptic EEG signal recognition; therefore, it is necessary to propose such a method for epileptic EEG signal recognition. In this study, inspired by WELM, we propose a novel graph-based ELM (G-ELM) for imbalanced epileptic EEG signal recognition. First, we use the graph theory to construct a relationship graph of samples according to their data distribution. Then, we combine the relationship graph with ELM to propose G-ELM. The experimental results on a real imbalanced epileptic EEG dataset show that the proposed method can address imbalanced classification of epileptic EEG signals effectively. The main contributions of this study are as follows. 
(1) The proposed G-ELM sets the compensation for loss of positive samples to be greater than that of negative samples based on graph theory and then combines it with ELM to classify imbalanced data effectively. It is a novel imbalanced ELM-based method, which attains a good classification performance and inherits the rapid learning and good generalization capabilities of ELM. (2) The proposed imbalanced classification method attempts to consider both the imbalance and the interrelationship of epileptic EEG samples to obtain better performance for imbalanced epileptic EEG signal recognition. It not only realizes effective classification of imbalanced epileptic EEG signals from a new perspective but also expands the application of ELM-based algorithms. (3) We use six imbalanced classification evaluation indices, i.e., accuracy, precision, recall, F-measure, G_means, and AUC, to compare the performance of the proposed G-ELM and the existing imbalanced ELM-based methods. Extensive experiments on a real imbalanced epileptic EEG dataset indicate that the proposed method can address imbalanced epileptic EEG signal recognition effectively and outperform the existing imbalanced ELM-based methods. The rest of this paper is organized as follows. Section 2 introduces the background underlying the proposed epileptic EEG recognition method. In Section 3, the details of the proposed G-ELM are presented. The performance of the proposed method is evaluated with several comparative methods in Section 4. The conclusions of this paper are provided in Section 5. Figure 1: Illustration of the machine learning method-based system for epileptic EEG signal recognition. Background In this section, we briefly describe the background related to the proposed epileptic EEG signal recognition method. It includes the epileptic EEG dataset, the feature extraction methods, and the classical ELM, which are used for epileptic EEG signal detection. Feature Extraction. Many studies [33][34][35] have shown that the original EEG signals cannot be directly used for training machine learning-based models and that feature extraction is a necessary step. This is because the original EEG signals are usually high dimensional, stochastic, nonstationary, and nonlinear and the background noise in the original signals is very complex. The commonly used feature extraction methods can be divided into three main categories: time domain analysis, frequency domain analysis, and time-frequency analysis. Time domain analysis-based methods extract the features by analyzing the characteristics of original EEG signals, such as mean, variance, amplitude, and kurtosis [36]. Frequency domain analysis-based methods usually analyze the EEG signals in the frequency domain to extract the features, such as fast Fourier transforms [37] and short-time Fourier transforms [38]. As for time-frequency analysis methods, the information of the time and frequency domains is considered simultaneously to extract the features from original epileptic EEG signals. Typical time-frequency analysis-based methods are wavelet transform methods [39,40]. In this paper, we use the wavelet packet decomposition [40] for feature extraction from original epileptic EEG signals to simultaneously utilize the information of the time and frequency domains. ELM. ELM [19], which was first proposed by Huang et al., is a single-hidden-layer feedforward neural network [41].
It can directly optimize the output weight of the hidden layer by setting the number of hidden nodes, without paying attention to the weights and offsets of the input layer, which can be generated randomly. Compared with other traditional supervised learning methods, it has good generalization ability and high learning speed. Figure 3 shows the network structure of an ELM. ELM considers both empirical and structural risks, and its objective function is given in (1), where H represents the hidden layer feature matrix with entries h i (x j ) = g(A (i) x j + b i ), A (i) represents the i-th row of the weight matrix A, x j (j = 1, 2, ⋯, n) denotes the training samples, n is the number of training samples, d is the sample dimension, and m is the number of hidden nodes; ε = (ε 1 , ε 2 ,⋯,ε n ) T is the error matrix between the network outputs and the target outputs. C is a penalty parameter, which can adjust the accuracy and generalization ability of the ELM. The optimization problem in (1) can be solved based on the Karush-Kuhn-Tucker theory, and the output weight of ELM can then be calculated in closed form. Graph-Based Extreme Learning Machine In this section, a graph-based ELM (G-ELM) is proposed. We first introduce the relationship graph of an imbalanced dataset and then develop the proposed imbalanced classification method G-ELM by combining the relationship graph with an ELM. 3.1. Relationship Graph of an Imbalanced Dataset. In the context of an imbalanced classification problem, the relationships between the training samples can be regarded as an undirected graph. An undirected graph can be expressed as G = (V, E), where V is the vertex set of graph G and E is the edge set of graph G. Figure 4 shows an undirected graph of an imbalanced synthetic dataset with 7 samples, where 2 positive samples are represented by a blue circle and 5 negative samples are represented by a red star. All samples are numbered for subsequent display. Note that there are connections between samples in different classes and the weight is 1. Samples in the same class are not connected. The elements of the adjacency matrix W can therefore be defined as W ij = 1 if y i ≠ y j and W ij = 0 otherwise, where y i ∈ Y is the label of x i . According to the above definition of the adjacency matrix W, we can see that the distance between samples in the same class can be considered 0, while for samples in different classes the distance between them can be considered 1. Then, the relationship graph matrix can be expressed in terms of W and the degree matrix D = diag(W ⋅ 1 n×1 ), where 1 n×1 stands for an n × 1 vector whose elements are exactly 1 and n is the number of training samples. As for the imbalanced dataset X, we need to increase the loss of misclassification of positive samples because the misclassification of positive samples (patients with epilepsy) could cause serious consequences. This can be realized by regulating the degree matrix D. The shortcomings of the cost learning algorithm can be compensated by incorporating the relationships between samples. Therefore, the relationship graph not only ensures the accuracy of positive sample classification but also makes up for the lack of mutual relationships and prior knowledge between samples. According to the above description, the relationship graph matrix L of the synthetic dataset in Figure 4 can be written out explicitly. 3.2. Objective Function of G-ELM.
According to the above relationship graph and ELM, the objective function of the G-ELM can be expressed as in (7). Here, X = [x 1 , x 2 ,⋯,x n ] ∈ ℝ d×n , n is the number of samples in X, d is the sample dimension, and Y = [y 1 , y 2 ,⋯,y n ] T represents the true class labels of the samples. H and h(x i ) are the same as defined in ELM. β = [β 1 , β 2 ,⋯,β m ] T represents the output weight vector. ε = [ε 1 , ε 2 ,⋯,ε n ] T represents the loss between the network outputs and the target outputs. Equation (8) is the relationship graph matrix of the samples. By comparing (7) with (1), we can see that G-ELM is an improved version of ELM and still has the characteristics of high learning speed and strong generalization ability from ELM. Solution of G-ELM. In this subsection, we attempt to optimize the objective function of G-ELM. According to [20], the objective function of G-ELM is a convex optimization problem. The specific optimization solution process is as follows: the Lagrangian function J corresponding to (7) is formed, and the derivatives of J with respect to β, ε i , and α i are set equal to zero, giving (10a)-(10c). Substituting (10a) and (10b) into (10c), we obtain (11). Combining (10a) and (11), we obtain the closed-form expression (12) for β. With the obtained solution, i.e., β * , the predicted class label of the testing sample can be obtained accordingly, where x test is a testing sample. Learning Algorithm of G-ELM. According to the above derivation, the implementation of G-ELM is summarized in Algorithm 1. Data Preparation. Although the real Bonn dataset has been used in many studies, the way of using it in this study differs from those in previous works. To evaluate the performance of the proposed G-ELM, nine imbalanced datasets were generated from the original five groups of EEG signals to simulate the imbalanced classification scenario. The details of the nine datasets are summarized in Table 2. In each dataset, the EEG signals of patients with epilepsy (E) were regarded as a positive class, while the other groups were regarded as a negative class, to identify whether the patients with epilepsy are experiencing seizure activity. A brief description of the five groups (A, B, C, D, and E) can be found in Table 1. The last column of Table 2 is IR, which is used to show the degree of imbalance of the dataset. IR can be defined as the ratio of the number of negative (majority) samples to the number of positive (minority) samples, where n + and n − represent the number of samples of the positive class and the negative class, respectively. In our experiment, we randomly partitioned each dataset: 80% of each dataset was used for training and the remaining 20% was used for testing. Evaluation Indices. In our experiments, we used six imbalanced classification evaluation indices to evaluate all the adopted methods. The six imbalanced classification evaluation indices were accuracy, precision, recall, F-measure, G_means, and AUC, which can be defined in terms of the confusion matrix. Here, TP is the number of true positive samples, FN is the number of false negative samples, TN is the number of true negative samples, and FP is the number of false positive samples. For AUC, N + is the set of all the indexes of the positive samples and N − is the set of those of the negative samples; n + = |N + | and n − = |N − |. P(x) is the prediction value of x. I(·) is the indicator function. Algorithm 1 (G-ELM). Input: The training samples X = [x 1 , x 2 ,⋯,x n ] ∈ ℝ d×n and their corresponding labels Y = [y 1 , y 2 ,⋯,y n ] T , where x i ∈ ℝ d (i = 1, 2,⋯,n); the number of hidden nodes m; the input weights A ∈ ℝ m×d and input biases b ∈ ℝ m ; the penalty parameter C.
Output: The predicted class label of the testing sample x test . Step 1: Construct the hidden layer mapping matrix H according to Eq. (2). Step 2: Compute the relationship graph matrix corresponding to the training samples X according to Eq. (4) and Eq. (5). Step 3: If n < m, compute the output weight vector β * using the first formula in Eq. (12); otherwise, compute β * using the second formula in Eq. (12). Step 4: Return the predicted class label of the testing sample y test = sign(h(x test )β * ). Adopted Methods and Parameter Settings. In the experiments, five ELM-based methods, i.e., ELM [19], W1-ELM [20], W2-ELM [20], R1-ELM [25], and R2-ELM [25], were adopted for comparison with G-ELM. Referring to the guidelines in [2,20,46], a grid search strategy based on G_means was used to determine appropriate parameters for all the methods. We set parameter C in the range 2^[−28:2:28] (i.e., powers of two with exponents from −28 to 28 in steps of 2) and parameter m in the range {50, 100, 300, 500, 1000} for all the adopted methods. All the adopted methods were run ten times on each generated imbalanced dataset. The average experimental results corresponding to the six imbalanced classification evaluation indices are reported. To evaluate the classification performance of the proposed G-ELM, five ELM-based methods were used for performance comparison. All experiments were repeated ten times for fairness. The mean and standard deviation of the corresponding indices of all methods on each dataset are reported in Tables 3-8. The best results are shown in bold. The improvement of G-ELM relative to ELM on all datasets using the six imbalanced classification evaluation indices is shown in Figure 5. According to the experimental results in Tables 3-8, the following observations can be made: (1) For the adopted six imbalanced classification evaluation indices, the proposed G-ELM achieves the best overall performance. The indices reported in Tables 6 and 7 are two important indices to measure the performance of imbalanced classification methods, which can be combined with recall and precision to evaluate the effect of the methods. From the results, we can see that the proposed G-ELM has the best performance. It has excellent performance in imbalanced epileptic EEG signal recognition. (5) AUC is an important index to evaluate imbalanced classifiers. From Table 8, we can see that the performance of G-ELM on all datasets is the best. G-ELM has excellent performance in imbalanced classification and good effectiveness for imbalanced epileptic EEG signal recognition. 4.5. Statistical Analysis. Statistical analysis was performed to further analyze the performances of all the adopted methods in our experiments. For conciseness, we only present statistical analysis of the G_means results. Firstly, the Friedman test [47] was used to calculate the average ranking of each method. The rankings of all the adopted methods are shown in Figure 6. In Figure 6, we can see that the performance of G-ELM is the best. Then, the post hoc hypothesis test [48] was used to evaluate the statistical significance of the performance differences between G-ELM and the other adopted methods. Post hoc hypothesis test results (α Fri = 0.05) are presented in Table 9. In Table 9, we can see that the null hypothesis is rejected when p Fri ≤ 0.025 because p Fri ≤ Holm. Therefore, the performance differences between G-ELM and the other adopted methods are significant, which means that G-ELM is effective for imbalanced epileptic EEG signal recognition.
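Before the conclusions, Algorithm 1 described above can be rendered as a short sketch. Since the displayed equations (1)-(12) are not reproduced in this extract, the code gives one plausible, minimal reading: the adjacency and degree matrices follow the verbal definitions of Section 3.1, while the Laplacian-style combination L = D − W and the WELM-style closed forms for β* are assumptions made for illustration, not the paper's exact formulas.

```python
import numpy as np

def train_g_elm(X, y, m=300, C=1.0, seed=0):
    """Minimal G-ELM-style sketch.  X: (n, d) training data, y: labels in {-1, +1}.
    Steps mirror Algorithm 1; the graph matrix and closed forms are assumed."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    A = rng.standard_normal((m, d))                  # random input weights
    b = rng.standard_normal(m)                       # random input biases
    H = np.tanh(X @ A.T + b)                         # Step 1: hidden-layer matrix H
    W = (y[:, None] != y[None, :]).astype(float)     # Step 2: W_ij = 1 iff labels differ
    D = np.diag(W @ np.ones(n))                      # degree matrix D = diag(W * 1)
    L = D - W                                        # assumed relationship graph matrix
    if n < m:                                        # Step 3: first closed form (n x n system)
        beta = H.T @ np.linalg.solve(np.eye(n) / C + L @ H @ H.T, L @ y)
    else:                                            # second closed form (m x m system)
        beta = np.linalg.solve(np.eye(m) / C + H.T @ L @ H, H.T @ L @ y)
    return A, b, beta

def predict_g_elm(X, A, b, beta):
    return np.sign(np.tanh(X @ A.T + b) @ beta)      # Step 4: sign(h(x_test) beta*)
```

Replacing L with the identity matrix recovers a plain regularized ELM, which is a convenient baseline for checking that the graph weighting improves minority-class recall (and hence G_means).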
Conclusions In this study, we aimed to address the challenge that traditional machine learning methods ignore the imbalance of epileptic EEG datasets and the existing imbalanced classification methods ignore the relationships between samples. A graph-based ELM was proposed for imbalanced epileptic EEG signal recognition. First, graph theory was used to construct the relationship between samples according to the distribution. Second, a model combining the relationship graph and ELM was proposed; this model inherited the rapid learning and good generalization capabilities of ELM while maintaining satisfactory classification. Experiments on a real imbalanced epileptic EEG dataset demonstrated the effectiveness and applicability of the proposed method. However, there is still room for improvement in the scope and search method of the optimal parameters in this experiment. In the future, ways to design a better method to determine the optimal parameters will be further studied and explored. Conflicts of Interest None of the authors have any conflicts of interest.
4,726.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Double Controlled Metric Type Spaces and Some Fixed Point Results Thabet Abdeljawad 1 ID , Nabil Mlaiki 1, Hassen Aydi 2,* ID and Nizar Souayah 3 1 Department of Mathematics and General Sciences, Prince Sultan University, P.O. Box 66833, Riyadh 11586, Saudi Arabia<EMAIL_ADDRESS>(T.A<EMAIL_ADDRESS>(N.M.) 2 Department of Mathematics, College of Education in Jubail, Imam Abdulrahman Bin Faisal University, P.O. Box 12020, Jubail 31961, Saudi Arabia 3 Department of Natural Sciences, Community College Al-Riyadh, King Saud University, Riyadh 4545, Saudi Arabia<EMAIL_ADDRESS>* Correspondence<EMAIL_ADDRESS>or<EMAIL_ADDRESS>Tel.: +966-530894964 Introduction One of the generalizations of metric spaces was studied by Bakhtin [1] and Czerwik [2] who introduced the notion of b-metric spaces.Since then, many authors obtained several fixed point results for single valued and multivalued operators in the setting of b-metric spaces, for instance, see [3][4][5][6][7][8][9][10][11][12][13][14][15][16].Among the generalizations of b-metric spaces, we cite the work of Kamran et al. [17] (see also [18][19][20][21]) who introduced extended b-metric spaces by controlling the triangle inequality rather than using control functions in the contractive condition.Proving extensions of Banach contraction principle from metric spaces to b-metric spaces and hence to controlled metric type spaces is useful to prove existence and uniqueness theorem for different types of integral and differential equations.Some nice applications can be found for example in the recent article [22].In fact, the authors in [17] gave a slightly modified application of a proven fixed point result.However, finding serious applications to integral equations and dynamical systems is still of interest.In this article, we have been only motivated theoretically to relax the triangle inequality of b-metric spaces by using two controlled functions rather than using one.Definition 1. [17] Given a function θ : X × X → [1, ∞), where X is a nonempty set.The function d : for all x, y, z ∈ X. for all u, v, w ∈ X.Then, ρ is called a controlled metric type and (X, ρ) is called a controlled metric type space. Now, we introduce a more general b-metric space. for all u, v, w ∈ X.Then, q is called a double controlled metric type by α and µ. Remark 1. A controlled metric type is also a double controlled metric type when taking the same function(s).The converse is not true in general (see Examples 1 and 2). (ii): Otherwise, first (q3) is verified in the case that x = y.Consider the case that x = y, hence we get that x = y = z.In the subcases (x ≥ 1 and y ∈ [0, 1)) and (y ≥ 1 and x ∈ [0, 1)), it is easy to see that (q3) holds. On the other hand, we have q(0, ). This leads us to say that q is not an extended b-metric when considering the same function µ = α. Thus, q is not a controlled metric type for the function α. The topological concepts as continuity, convergent and Cauchy on double controlled metric type spaces are given in the following.Definition 4. Let (X, q) be a double controlled metric type space by one or two functions. (1) The sequence {u n } is convergent to some u in X, if for each positive ε, there is some integer N ε such that q(u n , u) < ε for each n ≥ N ε .It is written as lim n→∞ u n = u. (2) The sequence {u n } is said Cauchy, if for every ε > 0, q(u n , u m ) < ε for all m, n ≥ N ε , where N ε is some integer. (3) (X, q) is said complete if every Cauchy sequence is convergent.Definition 5. 
Let (X, q) be a double controlled metric type space by either one function or two functions-for u ∈ X and k > 0. (ii) The self-map T on X is said to be continuous at u in X if for all δ > 0, there exists k > 0 such that T(B(u, k)) ⊆ B(Tu, δ). Note that if T is continuous at u in (X, q), then u n → u implies that Tu n → Tu when n tends to ∞. In this paper, we present some fixed point theorems in double controlled metric type spaces.The first one is the related Banach contraction principle.The second one concerns with a nonlinear case involving a function φ satisfying suitable conditions.The last one is the related Kannan type result.The given concepts and theorems are illustrated by some examples. Main Results Our first fixed point result is the following: Theorem 1.Let (X, d) be a complete double controlled metric type space by the functions α, µ : X × X → [1, ∞).Suppose that T : X → X satisfies q(Tx, Ty) ≤ kq(x, y), for all x, y ∈ X, where k ∈ (0, 1).For u 0 ∈ X, choose u n = T n u 0 .Assume that In addition, for each u ∈ X, suppose that lim n→∞ α(u, u n ) and lim n→∞ µ(u n , u) exist and are finite. Then, T has a unique fixed point. Proof.Consider the sequence {u n = T n u 0 } in X that satisfies the hypothesis of the theorem.By using label (1), we get q(u n , u n+1 ) ≤ k n q(u 0 , u 1 ) for all n ≥ 0. Let n, m be integers such that n < m.We have We used α(x, y) ≥ 1.Let Hence, we have The ratio test together with (2) imply that the limit of the real number sequence {S n } exits, and so {S n } is Cauchy.Indeed, the ration test is applied to the term so the sequence {u n } is Cauchy.Since (X, q) is a complete double controlled metric type space, there exists some ξ ∈ X such that lim n→∞ q(u n , ξ) = 0. It is a contradiction, so ξ = η.Hence, ξ is the unique fixed point of T. Remark 2. The assumption (3) in Theorem 1 above can be replaced by the assumptions that the mapping T and the double controlled metric d are continuous.Indeed, when u n → ξ, then Tu n → Tξ and hence we have lim n→∞ q(Tu n , Tξ) = 0 = lim n→∞ q(Tu n+1 , Tξ) = q(ξ, Tξ), and hence Tξ = ξ. Theorem 1 is illustrated by the following examples. Example 4. Let X = [0, 4].Consider the double controlled metric q and functions α and µ given in Example 1. Choose Tx = 1 for all x ∈ X.Let u 0 = 1 and k = 1 2 .We have that is, (2) holds.In addition, for each u ∈ [0, 4], we have That is, (3) holds.All hypotheses of Theorem 1 are satisfied and ξ = 1 is the unique fixed point.Definition 6.Given u 0 ∈ X, the orbit O(u 0 ) of u 0 is defined as O(u 0 ) = {u 0 , Tu 0 , T 2 u 0 , ...}, where T is a self-map on the set X.The operator G : X −→ R is called T-orbitally lower semi-continuous at η ∈ X if when {u n } in O(u 0 ) such that lim n→∞ q(u n , η) = 0, we get that G(η) ≤ lim n→∞ inf G(u n ). Proceeding similarly as [17] and using Definition 6, we have the following corollary generalizing Theorem 1 in [24]. Corollary 1.Let T be a self-map on (X, q) a complete double controlled metric type space by two mappings α, µ.Given u 0 ∈ X.Let k ∈ (0, 1) be such that q(Tz, T 2 z) ≤ kq(z, Tz), f or each z ∈ O(u 0 ). ( Take u n = T n u 0 and suppose that Then, lim n→∞ q(u n , ξ) = 0. We also we have that Tξ = ξ if and only if the operator x → q(x, Tx) is T−orbitally lower semi-continuous at u. Our next fixed point result concerns with the nonlinear case using a control function of Matkowski [25]. 
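The hypotheses of this Matkowski-type result are referred to here only through the quantities used in its proof (condition (10), the comparison function φ with φ(t) < t, and the maximum term Λ). For orientation, one plausible form of the contractive condition, stated as an assumption rather than as the verbatim theorem, is the following math sketch:

```latex
\[
  q(Tx,Ty) \;\le\; \varphi\bigl(\Lambda(x,y)\bigr), \qquad
  \Lambda(x,y) \;=\; \max\bigl\{\, q(x,y),\; q(x,Tx),\; q(y,Ty) \,\bigr\},
  \qquad \text{for all } x,y\in X,
\]
\[
  \text{with } \varphi\colon[0,\infty)\to[0,\infty) \text{ nondecreasing and }
  \lim_{n\to\infty}\varphi^{\,n}(t)=0 \ (\text{hence } \varphi(t)<t) \text{ for every } t>0,
\]
```

together with summability and limit conditions on α and μ analogous to (2) and (3) of Theorem 1.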
Proof.Let {u n } and u 0 be as in the statement of the theorem.If, for some m, we have u m = u m+1 = Tu m , then clearly u m is the fixed point.Now, suppose that u n+1 = u n for each n.From condition (10), where clearly If, for some n, we accept that Λ(u n−1 , u n ) = q(u n , u n+1 ), then from (12) and that φ(t) < t, ∀t > 0, we have which leads to a contradiction.Hence, for all n, we must have If we proceed inductively, we deduce that for each n ≥ 0, we have 0 < q(u n , u n+1 ) ≤ φ n (q(u 0 , u 1 )). To show that {u n } is Cauchy, we proceed as in the proof of Theorem 1.For all m > n, we may get The assumption (11) by means of the ratio test applied to the series derived from the right-hand side of ( 14), as in the proof of Theorem 1, will lead to the sequence {u n } being Cauchy.Since (X, d) is complete, there exists η ∈ X such that lim n→∞ q(u n , η) = 0.That η is a fixed point is shown as in Remark 2. To prove the uniqueness of the fixed point, assume z is such that Tz = z and z = η.By (10), we have 0 < q(η, z) = q(Tη, Tz) ≤ φ(Λ(η, z)) = φ(q(η, z)) < q(η, z), which is a contradiction. In the following theorem, we propose the related fixed point result of Kannan [26]. Theorem 3. Let (X, d) be a complete double controlled metric type space by the functions α, µ : X × X → [1, ∞).Let T : X → X be a Kannan mapping defined as follows: q(Tx, Ty) ≤ a[q(x, Tx) + q(y, Ty)], for all x, y ∈ X, where a ∈ (0, ).For u 0 ∈ X, take u n = T n u 0 .Suppose that For each u ∈ X, assume that Then, there exists a unique fixed point of T. Proof.Let {u n = Tu n−1 } in X be such that the hypotheses ( 17) and ( 18) hold.From ( 16), we obtain Then, q(u n , u n+1 ) ≤ a 1 − a q(u n−1 , u n ).By induction, we get Now, let us prove that {u n } is a Cauchy sequence.Using the triangle inequality, for all n, m ∈ N, we obtain Similar to the proof of Theorem 1, we get ) which allows us to proceed as in the proof of Theorem 1 and we deduce that {u n } is a Cauchy sequence in the complete double controlled metric space (X, d).Thus, there exists u ∈ X as a limit of {u n } in (X, d).Assume that Tu = u.We have 0 < q(u, Tu) ≤ α(u, u n+1 )q(u, u n+1 ) + µ(u n+1 , Tu)q(u n+1 , Tu) ≤ α(u, u n+1 )q(u, u n+1 ) + µ(u n+1 , Tu)[aq(u n , u n+1 ) + aq(u, Tu)]. Passing to the limit on both sides of (20) and making use of the condition (18), we deduce that 0 < q(u, Tu) < q(u, Tu), which is a contradiction.Hence, Tu = u.To prove the uniqueness of the fixed point u, suppose that T has another fixed point v.Then, q(u, v) = q(Tu, Tv) ≤ a[q(u, Tu) + q(v, Tv)] = a[q(u, u) + q(v, v)] = 0. Therefore, u = v and T has a unique fixed point. 1. Condition (18) in Theorem 3 can be replaced by the continuity of the double controlled metric d and the mapping T as it was done in Theorem 2. 2. Continuity of the double controlled metric d and the mapping T in Theorem 2 can be replaced by the following condition: For each u ∈ X, we have lim n→∞ α(u, u n ) < ∞ and lim n→∞ µ(u n , Tu)φ(q(u, Tu)) < q(u, Tu). Perspectives It is an open question to treat the cases of the related Chatterjea, Hardy-Rogers, Ćirić and Suzuki contraction types.Moreover, it is always of great interest to find real applications for the proven fixed point theorems in metric type spaces.A future work in this direction will be highly recommended.
2,921.8
2018-12-12T00:00:00.000
[ "Mathematics" ]
Enhanced chiroptic properties of nanocomposites of achiral plasmonic nanoparticles decorated with chiral dye-loaded micelles The development of circularly polarized luminescence (CPL)-active materials with both large luminescence dissymmetry factor (glum) and high emission efficiency continues to be a major challenge. Here, we present an approach to improve the overall CPL performance by integrating triplet-triplet annihilation-based photon upconversion (TTA-UC) with localized surface plasmon resonance. Dye-loaded chiral micelles possessing TTA-UC ability are designed and attached on the surface of achiral gold nanorods (AuNRs). The longitudinal and transversal resonance peaks of AuNRs overlap with the absorption and emission of dye-loaded chiral micelles, respectively. Typically, 43-fold amplification of glum value accompanied by 3-fold enhancement of upconversion are obtained simultaneously when Au@Ag nanorods are employed in the composites. More importantly, transient absorption spectra reveal a fast accumulation of spin-polarized triplet excitons in the composites. Therefore, the enhancement of chirality-induced spin polarization should be in charge of the amplification of glum value. Our design strategy suggests that combining plasmonic nanomaterials with chiral organic materials could aid in the development of chiroptical nanomaterials. in enantioselective photopolymerization and multi-level optical encryption 26,[32][33][34] . Although TTA-UC has been demonstrated as an efficient strategy for enhancing CPL performance, the amplified g lum value in dilute solution still doesn't meet the requirement of practical applications. Meanwhile, the absolute emission intensity is suppressed by exciton-exciton annihilation. Therefore, there is still an intense demand for further amplifying g lum value and simultaneously improving upconversion intensity. Metallic nanostructures support localized surface plasmon resonance (LSPR), which results in the strong enhancement of electromagnetic field at the surface of these nanostructures [35][36][37][38] . Due to the strong and tunable LSPR features, metallic nanoparticles have attracted extensive research interests and offer potential applications in surface-enhanced Raman scattering 39,40 , plasmonic circular dichroism [41][42][43][44][45][46] , and surface-enhanced luminescence 47,48 . To date, the LSPR has also successfully enhanced the TTA-UC emission 49,50 . It has been demonstrated that LSPR can lead to a beneficial photoexcitation enhancement of the sensitizers and acceleration in the radiative decay process of the emitters 51 . Nevertheless, LSPR-enhanced CPL has never been involved in TTA-UC systems. In 2014, Harada and his co-workers reported the plasmon-enhanced CPL by silver nanoparticles in chiral surfactant assemblies 52 . Shi et al. demonstrated LSPR-boosted CPL of europium polyoxometalates 53 . Recently, silver nanowires and gold triangular nanoprisms have also been employed to amplify the circular polarization of chiral emitters 54,55 . However, in those systems, the plasmon-enhanced CPL was generated directly under the excitation of high-frequency light. As for TTA-based UC-CPL systems, the effect of LSPR on upconverted CPL can be complicated since TTA-UC involves multiple photophysical processes. As sketched in Fig. 
1b, the LSPR of metal nanoparticles can probably influence a cascade of events in UC-CPL: (1) the photon absorption of the sensitizers, (2) the triplet dynamics: transfer, diffusion, collision, and annihilation, and (3) the radiative transition of the chiral emitters. In this case, the energy-level matching between TTA-UC sensitizer/annihilator pairs and plasmonic nanostructures requires careful design. Herein, we report a plasmonic noble metal nanoparticle-assisted chiral upconversion system which can obtain the concurrent enhancement of upconversion emission and g lum (Fig. 1a). Under the excitation of UV light, the micellar aggregate (R-1M) consisting of hydrophobic chiral emitter R-1 (molecular structure in Fig. 2a) and hexadecyltrimethyl ammonium bromide (CTAB) exhibited prompt CPL with a g lum value around 1.5 × 10−3. After assembling with sensitizer Pd(II) meso-tetraphenyl tetrabenzoporphine (PdTPBP), the upconverted micellar aggregates (R-1UCM) could emit upconverted circularly polarized light upon excitation with a 635 nm laser. Moreover, the g lum value was magnified more than four times and up to 6.4 × 10−3. After mixing with polystyrene sulfonate (PSS)-modified gold nanorods (AuNRs), the upconverted micelles could adhere to the surface of AuNRs through electrostatic interaction. It should be noted that the plasmon-micellar composites (R-1UCM/AuNRs) exhibited strong amplification of UC-CPL. When the molar ratio of AuNRs and R-1 was 3.0 × 10−6/1, the g lum value increased to 4.6 × 10−2. At the same time, the upconverted emission intensity of R-1UCMP/AuNRs was approximately two times larger than that of the pristine upconverted micelles. Based on the control experiments with different types of plasmonic particles and the finite-difference time-domain (FDTD) calculations, as well as the measurement of time-resolved upconversion emission spectra, we verified the mechanism that LSPR can enhance the absorption of sensitizers and accelerate the radiative decay of the chiral emitters. (Fig. 1 caption: a Under the excitation of UV light, the chiral emitter shows prompt CPL with a g lum value around 1.0 × 10−3. When chiral emitter/sensitizer/CTAB micelles assemble into upconverted micelles, the high-frequency UC-CPL can be obtained under excitation with low-frequency light. The upconverted micelles exhibit amplified g lum values between 1.0 × 10−3 and 1.0 × 10−2. Furthermore, plasmon-micelle composites that are formed by upconverted micelles and plasmon nanoparticles through electrostatic interaction can produce plasmon-enhanced UC-CPL, in which upconverted emission intensity and g lum value (>1.0 × 10−2) are enhanced simultaneously. b Illustration of multiple photophysical processes in the plasmon-assisted TTA-UC system. Source data are provided as a Source Data file.) Although the effect of LSPR on TTET is not entirely clear, through transient absorption spectra, we observed that the triplet excitons of R-1 (spin-polarized triplet excitons 25 ) accumulated fast in the presence of AuNRs. Therefore, the enhancement of chirality-induced spin polarization should be a significant reason for amplifying the circular polarization of UC-CPL. This is a general way to modulate the upconversion process of dye molecules by utilizing the local electromagnetic field of metal nanoparticles, and it will give more insights into the development of the optoelectronics field. R/S-1 and CTAB assembled into chiral micelles. Planar chirality-based R-1 and S-1 with perylene chromophores (Fig. 2a) were CPL-active emitters 25 .
Therefore, we utilized R/S-1 and CTAB to fabricate CPL-active micellar aggregates (R/S-1M) through the coassembly method. After fixing the concentration of CTAB (10 -2 mol L −1 ) in water, a series of micellar aggregates with various concentrations of R-1 were investigated. The optimized concentration of R-1 was 5 × 10 −5 mol L −1 because of the strongest emission intensity (Supplementary Fig. 1). Although the quantum yield of R-1M was lower than R-1 in THF, it had no severe bathochromic shift and aggregation-caused emission quenching in fluorescence in comparison to directly dispersed R-1 in water (Supplementary Table 1 and Fig. 2). Those results suggested that R-1 were well stabilized in the presence of micelles. It is well-known that energy transfer can occur between sensitizer PdTPBP and perylene derivatives in TTA-UC 56 . Therefore, we further introduced PdTPBP to co-assemble with R-1 and CTAB to get R-1UCM. Transmission electron microscopy (TEM) images showed that R-1UCM were nanoscale with an average diameter of 4 nm (Fig. 2b). In addition, upconversion spectra of R-1UCM with different incident light power density of 635 nm were recorded ( Supplementary Fig. 3c). A threshold I th of 1637 mW cm −2 was obtained, above which TTA becomes the main triplet deactivation channel for the chiral emitters (Supplementary Fig. 3d). Plasmon-enhanced photon upconversion of R/S-1UCM Considering that the R-1UCM were formed by cationic surfactants, the as-prepared AuNRs with a longitudinal LSPR at 653 nm (AuNR653) were coated with PSS and thus gave negatively charged nanorods. After the attaching of PSS, the Zeta potential of AuNR653 inverted from +50 to −12 mV (Fig. 2d). The opposite surface charges between R-1UCM and AuNRs allow to fabricate plasmon-micelle composite through electrostatic interaction. As shown in Fig. 2c, the longitudinal LSPR band of the AuNR653 overlapped with the absorption band of PdTPBP, meanwhile, its transverse LSPR matched with the fluorescence band of R-1. This suggested a potential optical coupling between AuNR653 and the upconversion pairs. Based on the above results, we mixed AuNR653 and R-1UCM in an aqueous solution. The mixed solution was kept undisturbed for 10 h, and the Zeta potential decreased from +67 to +58 mV when AuNR653 was added into the R-1UCMP solution (Fig. 2d). To further identify the successful formation of R-1UCM/AuNR653 hybrids, the precipitates were collected after centrifugation at 6797×g for 10 min and redispersed with water. The Zeta potential of the composites was +30 mV, while a pure AuNR653 was −12 mV. Moreover, in R-1UCM/AuNR653 system, a growth of hydrodynamic diameters of AuNR653 from 4.9 and 58.8 nm to 7.5 and 68.1 nm was studied by dynamic light scattering (Fig. 2e). Therefore, the electrostatic attraction enabled attaching of the positively charged R-1UCM to the surface of the negatively charged AuNR653. Under the excitation of a 635 nm laser, the measured upconverted luminescence of R-1UCM/AuNR653 composite was 6.7-fold that of R-1UCM at 476 nm (Fig. 2f). Meanwhile, the I th of R-1UCM/AuNR653 decreased to 1318 mW cm −2 ( Supplementary Fig. 3b). The upconversion quantum yield of R-1UCM increased from 0.2 to 0.5% after mixing with AuNR653 ( Supplementary Fig. 3e). The above phenomenon revealed that the AuNR653 could enhance the luminescence of R-1UCM. We further investigated the relationship between the enhancement of photon upconversion and rod concentration. 
By increasing the AuNR653 amount, the upconverted luminescence intensity versus the concentration of metal nanoparticles showed a volcano-shape curve (Fig. 2g). When the ratio of AuNR653 and R-1UCM reached 3.9 × 10 −6 , upconverted luminescence intensity would decrease due to the competing absorption at 635 nm between sensitizers and AuNR653 and the reabsorption of upconverted luminescence by AuNR653. Plasmon-enhanced chiroptical properties Considering the planar chirality of R/S-1, we explored the chiroptical properties of R/S-1M assemblies. The mirror-imaged circular dichroism (CD) and CPL signals of R/S-1M were consistent with the R/S-1 in dilute solution ( Fig. 3a and Supplementary Fig. 6), suggesting that the assemblies of chiral emitters and CTAB had little influence on chiroptical properties. The g lum values of prompt CPL at 481 nm were +1.4 × 10 −3 and −0.9 × 10 −3 for R-1M and S-1M, respectively. A positive UC-CPL curve was observed in R-1UCM upon being excited by a 635 nm laser. The corresponding g lum value of UC-CPL raised up to +6.4 × 10 −3 , presenting 4.3 times larger than the prompt CPL. The enhancement effect was additionally obtained in S-1UCM, which could be attributed to the chirality-induced electron spin polarization during the TTA-UC process 25 . Furthermore, we thoroughly studied the plasmon-assisted UC-CPL in R-1UCM/AuNR653 system. With the addition of AuNR653, the g lum values of plasmon-enhanced UC-CPL gradually increased and reached a maximum when the molar ratio AuNR653/R-1 was 3.9 × 10 −6 / 1, which was 7.5 times and even 32.0 times larger in magnitude compared with the g lum values of R-1UCM and R-1M, respectively. As expected, the enantiomeric S-1UCM system showed the same enhancement phenomenon (Fig. 3a, b and Supplementary Fig. 4). The true enhancement factor per attached micelle should be larger because a fraction of the micelles was free, as exhibited in the TEM image of R-1UCM/AuNR653 (Fig. 2h). Here, we attempted to obtain the real enhancement factor through comparing the integrated extinction intensities of R-1UCM and the supernatant of R-1UCM/AuNR653 after removal of AuNR653 by centrifugation. In this operation, the R-1UCM/ AuNR653 composite with a molar ratio AuNR653/R-1 at 3.0 × 10 −6 /1 was selected due to high-concentration AuNR653 can quench the upconverted emission. As shown in Supplementary Fig. 5, the extinction intensity of the supernatant was 58% of the pure R-1UCM, which meant that approximately 42% of the micellar aggregates were combined with the AuNR653. Accordingly, the corrected enhancement factors were 3.3 and 15.7 times for upconverted emission and g lum value, respectively. With the deviations in the centrifugation process, this method was not perfect, but sufficient to roughly estimate the enhancement factor in a very simple and convenient fashion. Above all, the LSPR-assisted CPL enhancement method was successfully achieved in chiral organic-plasmon upconverted luminescence systems. Moreover, there was an obvious increase in ground-state chirality through measuring the CD signal ( Supplementary Fig. 6) in both the R-1 and S-1 systems in the presence of AuNR653. To eliminate the influence of absorption, we converted those CD signals to the absorption dissymmetry factor (g CD ). As shown in Fig. 3c, the maximal g CD factor of plasmon-micelle composites at 466 nm was 1.4 × 10 −3 , which was almost the twice larger than the one of individual micelle particles. 
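The bound-fraction correction quoted above reduces to simple arithmetic once the free micelles are treated as completely unenhanced, which is the stated assumption; the short sketch below reproduces that estimate and recalls the standard definition of the dissymmetry factors (the input numbers are the approximate values quoted in the text).

```python
# Ensemble enhancement = (1 - f_bound) * 1 + f_bound * F
#   =>  per-attached-micelle enhancement F = (ratio - (1 - f_bound)) / f_bound
f_bound = 0.42        # fraction of R-1UCM attached to AuNR653, estimated from extinction
ratio_emission = 2.0  # measured ensemble upconversion enhancement (~2-fold)
F_per_micelle = (ratio_emission - (1.0 - f_bound)) / f_bound
print(f"per-attached-micelle emission enhancement ~ {F_per_micelle:.1f}x")  # ~3.4x

# Dissymmetry factors from the left/right circularly polarized components:
def g_factor(I_left, I_right):
    """g = 2 (I_L - I_R) / (I_L + I_R); the same form applies to g_lum and g_CD."""
    return 2.0 * (I_left - I_right) / (I_left + I_right)
```

An analogous emission-weighted average over bound and free micelles underlies the corrected g lum enhancement (about 15.7-fold versus the 7.5-fold ensemble value).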
These results clearly verified that either ground-or excitedstate chirality can be enhanced by the plasmonic LSPR effect. We noted that the amplification level of CD was much smaller than that of UC-CPL. This may be explained by the extinction band of R-1 was not well matching the LSPR wavelength of AuNR653. Additionally, the measurement of regular downshifting CPL was carried out in R-1M/ AuNR653 composites. Different from the UC-CPL, upon excitation with 400 nm, the R-1 mixed with different concentrations of AuNR653 only showed a slight increase in g lum values ( Supplementary Fig. 7). This result indicated that although the emission band of R-1 overlapped with the transverse LSPR of AuNR653, the electromagnetic field of AuNR653 was too weak to allow a dramatic enhancement effect at such an excitation wavelength. This behavior will be further investigated in the following experiments by employing plasmonic nanorods with various longitudinal LSPR wavelengths. Mechanism of the enhancement To gain further insights into the mechanism of LSPR-enhanced upconversion and CPL, we utilized transient absorption spectroscopy to dig into the behavior of excited triplet states. The comparison between PdTPBP and R-1UCM transient spectra were illustrated in Supplementary Fig. 8. Two ground-state bleaches (GSB) can be observed at 442 and 492 nm, which were in accordance with the singlet transition of PdTPBP. Furthermore, PdTPBP also showed excited-state absorption (ESA) at 484 and 524 nm due to T 1 − T n transitions 27 . In the R-1UCM system, the GSB signal shifted to 446 nm because of the presence of R-1. A signal appeared at 500 nm, which resulted from the T 1 − T n transitions of R-1. The transient absorption of R-1UCM/AuNR653 also exhibited a GSB signal at 446 nm, while two ESA peaks can be seen at 494 and 570 nm (Fig. 3d). After analyzing the transient absorption of AuNR653, we concluded that the split of triplet transitions peak of R-1 around 500 nm was caused by the GSB of AuNR653 at 520 nm. Nevertheless, either at 446 or 500 nm, the transient absorption of R-1UCM/AuNR653 showed higher intensities than that of R-1UCM, demonstrating an LSPR enhancement effect for GSB and ESA ( Fig. 3e and Supplementary Fig. 8d). In terms of exciton dynamics, the ESA decay of the R-1UCM and R-1UCM/AuNR653 were monitored at 446 and 500 nm, and the R-1UCM/AuNR653 showed faster decay at those peaks ( Supplementary Fig. 9). Additionally, time-resolved photoluminescence spectra revealed that the lifetime of R-1UCM showed an obvious decrease from 102 to 82 μs in the presence of AuNR653 (Fig. 3f), which resulted from the LSPR-caused faster emission rates 49,57 , further demonstrating the coupling of LSPR to R-1/PdTPBP pairs. It's well-known that the LSPR features dramatically depend on the composition, shape, and size of the plasmonic nanostructures, three types of PSS-modified noble metal nanorods were synthesized to be involved in the hybrid systems. Two of them were gold nanorods with longitudinal LSPR bands at 737 and 812 nm, respectively. The rest was gold-core-silver-shell nanorods with a longitudinal LSPR band at 659 nm, and this core-shell nanoparticle was expected to have a stronger plasmonic response due to the presence of silver 37,58 . It was clear to see the different coupling degrees between plasmonic bands and excitation light due to the distinction of overlaps in the spectra (Fig. 4e). 
Moreover, the LSPR activities of these nanorods remained stable as their extinction spectra only showed a slight shift or broadening after the hybrid process ( Supplementary Fig. 10). The upconversion emission of R-1UCM coupled with those Au and Au@Ag nanorods was investigated. The molar ratio of AuNR653 to R-1 was fixed at 3.0 × 10 −6 /1. To avoid the influence of unequal adsorption of R-1UCM on the surface of metal nanoparticles, the concentrations of various metal nanorods were determined by their monomer's surface area, ensuring that the total surface area of those plasmonic particles were the same (Fig. 4a-d and Table 1). As shown in Fig. 4f, for gold nanorods, the increasing spectral overlap between the plasmon band and 635 nm laser would lead to the enhancement of upconversion intensity. Interestingly, the maximal upconversion enhancement was obtained in R-1UCM/AuNR@Ag659 system. To further explore the mechanism, we computed the electric field intensities of plasmonic nanoparticles by FDTD simulation. The calculated results suggested that the electric field was much stronger at the ends than the field at the side for the discrete AuNR when excited by a 635 nm laser, which was in line with the excitation of the sensitizers (Fig. 4g and Supplementary Fig. 11). Furthermore, the local electric fields on the metal surface decreased quickly when the longitudinal LSPR band gradually red-shifted and was far away from the 635 nm laser due to the increased aspect ratio of gold nanorods. After coating a thin Ag shell, AuNR@Ag659 showed 1.5-fold higher electric field enhancement compared to pure AuNR653, which further led to better-upconverted luminescence, agreeing with the observations in Fig. 4f. Those results demonstrated that the stronger electromagnetic field of noble metal nanorods, the more enhancement of photon upconversion. Additionally, Fig. 4h showed a plot of g CD against g lum for the results of chiral micelles and micelle/plasmon systems. Once chiral micelles were coupled with plasmonic nanoparticles, either g CD or g lum values would be enhanced compared with pure chiral micelle systems. The maximal g CD around 466 nm of AuNR653, AuNR737, and AuNR812 were comparable because of the similar electromagnetic field intensities at the transverse LSPR band around 520 nm. However, the enhancement of g lum values exhibited large differences with the increasing electromagnetic field intensities of various plasmonic nanoparticles. Noted that the g lum value of R-1UCM/AuNR@Ag659 was able to reach 6.4 × 10 −2 , which was about 43-fold larger than that of R-1M (Supplementary Fig. 12). Those results further highlighted the essential role of an electromagnetic field of plasmonic nanoparticles in modulating the intensity of circular polarization. It has been proven that the enhancement of chiroptical properties originated from the coupling effect between chiral substances and plasmonic localized electromagnetic field 52,59,60 . However, in a UC-CPL system, one should take into account multiple photoinduced processes. In addition to the well-known promotion of absorption and emission rates, we noted the improved excited-state absorption of R-1 in the micelles/plasmon hybrids. There were two possible sources of this phenomenon: (1) the enhanced absorption of sensitizers led to an increased generation rate of R-1 triplet excitons via TTET and (2) the improved absorption rate of triplet excited state induced by the LSPR effect. Consequently, the total amount of triplet excitons was raised up. 
In the chiral upconversion system, the quantitative balance between the two spin orientations of the triplet excitons is disturbed: according to our previous report 25, one kind of triplet exciton with a certain spin orientation outnumbers its counterpart due to the chirality-induced electron spin polarization in the TTET and TTA processes. This type of triplet exciton can be regarded as a spin-polarized triplet exciton. In the AuNR-assisted UC-CPL process, the amount and fraction of spin-polarized excitons should increase with the total amount of triplet excitons. To verify our hypothesis, we evaluated the degree of spin polarization by comparing the TTA efficiencies of R-1UCM and R-1UCM/AuNR653, because the TTA efficiency (Φ_TTA) is significantly affected by the relative density of triplet excitons with opposite spin orientations. Unfortunately, the Φ_TTA values of both R-1UCM and R-1UCM/AuNR653 calculated from the upconversion decays shown in Fig. 3f were approximately unity, because the density of triplet excitons is saturated under high-power excitation. Therefore, we instead estimated Φ_TTA under low-power excitation. Interestingly, the Φ_TTA of R-1UCM/AuNR653 (58%) was lower than that of R-1UCM (67%, Supplementary Fig. 13), suggesting that the densities of triplet excitons with opposite spin orientations were more unbalanced in the plasmon-assisted system. This result agreed with our hypothesis that R-1UCM/AuNR653 probably has a higher spin polarization level. On the one hand, LSPR can enhance the chiral absorption, which is based on electronic transitions, and is therefore also likely to boost the electron spin polarization generated by energy transfer through the electron exchange mechanism. On the other hand, spin splitting and magnet-controlled/induced CPL have been reported in purely organic systems 61,62. These phenomena suggest that electron spin polarization can have an impact on chiral emission. Therefore, we propose that the LSPR-enhanced, chirality-induced spin polarization is critical to amplifying the g_lum values of UC-CPL. As shown in Fig. 4i, the chirality-induced spin polarization in both TTET and TTA is enhanced by the coupling with the plasmonic nanorods. Consequently, the UC-CPL resulting from spin-polarized singlet excitons shows a higher g_lum value. Admittedly, the proposed mechanism still requires more careful verification, which we expect could be provided by circularly polarized ultrafast spectroscopy in the future.

Photon upconversion and chiroptical properties enhanced by gold nanospheres
The LSPR enhancement of UC-CPL activity was further verified in another chiral upconversion pair. We synthesized the chiral monomer analogs R-2 and S-2, which differ from the dimers R-1 and S-1 (Fig. 5a); in addition, the chromophore was changed from a perylene unit to phenylanthracene. Micellar aggregates of R-2 (R-2M) were then fabricated, with the CTAB concentration in water kept at 10−2 mol L−1. The maximal fluorescence intensity of R-2M was achieved at an R-2 concentration of 1.3 × 10−4 mol L−1 (Supplementary Fig. S14), and the following experiments were therefore performed at this optimized concentration. First, we investigated the chiroptical responses of R/S-2 in THF and of R/S-2M in aqueous solution. The CD spectra of R-2 exhibited a negative signal around 404 nm, while S-2 showed the mirror-image curve.
No obvious changes were observed in the corresponding CD spectra of R/S-2M, nor in the CPL spectra (Supplementary Fig. 15), verifying the optical activity of R/S-2M. For the photon upconversion system, PtOEP was employed as the triplet donor because its energy levels match those of anthracene derivatives 63,64. Gold nanospheres (AuNSs) were then applied (Fig. 5b) to couple with the upconversion micelles of R-2 (R-2UCM). As shown in Fig. 5c, there were large overlaps between the plasmon bands of the AuNSs and the excitation-emission range of R-2. The upconverted luminescence spectra of R-2UCM showed blue emission upon excitation with a 532 nm laser (Supplementary Fig. 16). After mixing with AuNSs, the upconverted emission of the R-2UCM/AuNSs composites showed a 2.7-fold enhancement, indicating that the LSPR of the AuNSs can boost the photon upconversion process. In terms of chiroptical properties, and similar to the plasmonic complexes of R/S-1M/AuNR653, the ground-state chirality of R/S-2MP/AuNSs was also amplified (Fig. 5d and Supplementary Fig. 17): the |g_CD| value around 404 nm increased from 9.1 × 10−5 in R-2M to 1.6 × 10−4 in R-2M/AuNSs. As expected, the TTA-based upconversion amplified the circular polarization relative to the prompt CPL, and the local electromagnetic fields of the metal nanoparticles then further enhanced the g_lum value of the UC-CPL (Fig. 5e). Overall, the g_lum value of R-2UCMP/AuNSs became 36-fold larger than that of the prompt CPL of R-2M (Fig. 5f). Such behavior suggests that plasmons can generally modulate the photon upconversion process and the CPL activity.

Discussion
In summary, we reported an effective and general approach for concurrently boosting the upconverted luminescence intensity and the g_lum value by constructing a composite system composed of chiral upconversion micelles and achiral plasmonic nanoparticles. The generality of plasmon-enhanced UC-CPL was demonstrated by coupling two types of chiral emitters with different metal nanoparticles. In the Au@Ag nanorod/UCM composite system, overall enhancements of the g_lum value and the upconversion emission intensity of 43-fold and 3-fold, respectively, were achieved. Based on the theoretical simulations and experimental results, we proposed that plasmon-induced enhancement of the sensitizer absorption, of the radiative transition of the chiral emitters, and of the chirality-induced spin polarization is responsible for the observed plasmon-enhanced optical phenomena. Such plasmon-assisted chiral emitters with high upconversion efficiency and large circular polarization will contribute to chiroptical applications.

Characterizations
The 1H and 13C NMR spectra were recorded on a Bruker Fourier 400 (400 MHz) spectrometer, with CDCl3 as the solvent and tetramethylsilane as the internal standard (δ = 0.00 ppm). UV-vis spectra were obtained using a Hitachi U-3900 spectrophotometer, and fluorescence spectra were measured on an Edinburgh FS5 fluorescence spectrophotometer using a Xe lamp as the excitation source. Upconverted luminescence spectra were recorded on a Zolix Omin-λ500i monochromator with a photomultiplier tube (PMTH-R 928) using external 532 nm (MGL-FN-532, 1.5 W) and 635 nm (MRL-III-635L, 200 mW) lasers (Changchun New Industry Optoelectronic Technology Co. LTD) as excitation sources. Upconverted emission decays were recorded on a HORIBA Scientific Nanolog FL3-iHR320 spectrofluorometer using multichannel scaling. CD spectra were measured on a JASCO J-1500 spectrophotometer.
CPL spectra were obtained on a JASCO CPL-200 spectrophotometer using a Xe lamp or a 375 nm laser (MDL-III-375, 100 mW, Changchun New Industry Optoelectronic Technology Co. LTD) as the excitation source. Upconverted CPL spectra were recorded on a JASCO CPL-200 spectrophotometer with external 532 nm and 635 nm lasers as excitation sources. Zeta potential and dynamic light scattering were measured on a Zetasizer Nano ZS analyzer. TEM images were taken using an FEI Tecnai G2 20 S-TWIN microscope (200 kV); the samples were dropped onto carbon-coated Cu grids. Transient absorption spectra were measured on a Vitara T-Legend Elite-TOPAS-Helios-EOS-Omni spectrometer at the Technical Institute of Physics and Chemistry, CAS.

Theoretical simulation. The extinction spectra and electric field enhancements of single AuNRs of different sizes and of the Au@Ag core-shell nanostructure were simulated using FDTD Solutions 8.6 developed by Lumerical Solutions, Inc. The geometric models of the individual nanorods were set according to the statistical results summarized in Supplementary Table 2. For the Au@Ag core-shell nanostructure, the thickness of the Ag shell was 0.25 nm along the longitudinal direction and 1.92 nm along the transverse direction. An electromagnetic pulse with a wavelength range between 400 and 1000 nm was launched into a box containing the target nanorods/nanostructure to simulate a propagating plane wave interacting with it. The nanorods/nanostructure and the surrounding medium inside the box were divided into 1 nm meshes. Calculations were performed for a single nanorod/nanostructure in water (refractive index of 1.33) excited by linearly polarized light. The morphology of the nanorods/nanostructure was modeled as a cylinder capped with a half sphere at each end. For longitudinal excitation, the incident light propagated perpendicular to the long axis and was polarized along it. The optical constants for bulk Au and Ag were taken from the Johnson and Christy database and from Palik, respectively.

Synthesis of R/S-2. The synthesis of R-1 was performed according to the literature 65. To a toluene solution (30 mL) containing R-4,12-dibromo[2.2]paracyclophane (0.20 g, 0.55 mmol), 10-phenyl-9-anthraceneboronic acid (0.081 g, 0.27 mmol), and tetrakis(triphenylphosphine)palladium(0) (0.032 g, 0.027 mmol), 10 mL of Na2CO3 (0.1 M) aqueous solution was added. After degassing the mixture by three freeze-thaw cycles, the solution was refluxed for 12 h under stirring. After cooling to room temperature, the mixture was concentrated on a rotary evaporator and redissolved in dichloromethane (DCM). Water was then added, and the aqueous phase was extracted with DCM three times. The combined organic layers were dried over sodium sulfate, and the precipitates were removed by filtration. The product was purified by silica gel column chromatography (n-hexane/ethyl acetate from 20/1 to 2/1 v/v as the eluent). After evaporation in vacuo, a light-yellow powder of R-2 (0.187 g, 0.262 mmol, 48% yield) was collected.

Synthesis of Au nanoparticles through seed-mediated growth
Preparation of Au seeds. CTAB-capped seeds were prepared by chemical reduction of HAuCl4 with NaBH4 in the presence of CTAB: a mixture was obtained by combining CTAB (0.1 M, 7.5 mL), HAuCl4 (22.1 mM, 0.113 mL), and deionized water (1.8 mL) under magnetic stirring. Then, freshly prepared, ice-cold NaBH4 (0.01 M, 0.6 mL) was injected into the mixture, and stirring was continued for 3 min.
The color of the solution turned from yellow to brown after the addition of NaBH4. The Au-seed solution was kept at 30 °C without any disturbance for 2-5 h before further use.

Preparation of gold nanospheres with a diameter of around 20 nm. (1) About 10 nm Au nanoparticles were synthesized by a seed-mediated method. The growth solution consisted of CTAC (0.1 M, 500 mL), HAuCl4 (22.1 mM, 5.65 mL), and AA (0.1 M, 75 mL). The Au-seed solution (6.7 mL) was added to this growth solution to initiate the growth of the gold nanospheres, and the well-mixed solution was kept at 30 °C for 0.5 h. The gold nanospheres were purified by centrifugation (24,878×g, 23 min), and the precipitates were redispersed in deionized water (50 mL). For the gold nanorods, the Au-seed solution (240 μL) was added to the corresponding growth solution to initiate rod growth; after 12 h, AuNR812 was obtained.

Preparation of AuNR@Ag. 4-ATP (10 mM, 5 μL) was added to 10 mL of AuNR720 (0.1 nM) dispersed in 10 mM CTAB and incubated in a 30 °C water bath for 30 min. Then, AgNO3 (0.1 M, 5 μL) and AA (0.1 M, 50 μL) were added to the above solution to trigger the overgrowth of the Ag shell at 70 °C. After 1 h, AuNR@Ag nanorods with the longitudinal SPR band at 659 nm were obtained. All nanorods were purified twice with water by centrifugation (13,994×g, 15 min).

Preparation of PSS-modified nanoparticles. The concentrated nanoparticle suspension was mixed with CTAB (0.1 M, 0.01 mL) and incubated at 30 °C for 30 min. Then, a 50 μL mixture containing PSS (20 mg/mL) and NaCl (60 mM) was added to the nanorod suspension and mixed well. After incubation in a 30 °C water bath for 12 h, the PSS-coated nanoparticles were purified by centrifugation twice (13,994×g, 15 min).

Preparation of micellar aggregates. All micelles were prepared by a typical nanoprecipitation method. For example, 0.06 mL of a tetrahydrofuran (THF) solution of R-1 (5 mM) was mixed with CTAB (22 mg), and then 6 mL of water was added rapidly under continuous sonication for 10 min. The mixture was then treated on a rotary evaporator at 40 °C to remove the THF, and the resulting R-1M was filtered through a 0.22-μm filter membrane.

Preparation of plasmon-micelle nanocomposites. The aqueous solution of PSS-modified plasmonic nanoparticles (nanorods or nanospheres) was added to the micelle solution (3 mL); after 10 h, the micelles had attached to the surface of the plasmonic nanoparticles through electrostatic attraction to form the plasmon-micelle composites.

Estimation of UCQY. The unknown upconversion quantum yields (Φ_unk) of R-1UCMP and R-1UCMP/AuNR653 were measured with respect to a standard reference solution of methylene blue (Φ_ref = 0.03). The luminescence was measured at excitation power densities above 1000 mW cm−2, i.e., in the high-efficiency regime of upconversion. The relative quantum yields Φ_unk were obtained according to the following relation:

Φ_unk = Φ_ref (E_ref / E_unk)(I_unk / I_ref)(n_unk / n_ref)^2,   (1)

where Φ_ref is the quantum yield of the reference standard, E is the fraction of photons absorbed at the excitation wavelength, I is the integrated photoluminescence intensity, and n is the refractive index of the medium.

Estimation of TTA efficiency. The time evolution of the upconverted luminescence intensity I_UC obeys

I_UC(t) ∝ [T_A(t)]^2 = [T_A(0)]^2 [(1 − Φ_TTA) / (exp(k_A t) − Φ_TTA)]^2,   (2)

where T_A is the population density of acceptor triplets, k_A is the spontaneous decay rate of the acceptor triplets with k_A = 1/(2 × τ_UC), and Φ_TTA is the annihilation efficiency. It has been confirmed that Equation (2) allows the decay traces to be fitted properly to give the annihilation efficiency 28.
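As a practical illustration of the two estimates above, the relations in Eqs. (1) and (2) can be evaluated numerically as in the following Python sketch. The function and variable names, the synthetic decay trace, and the initial-guess values are placeholders chosen for illustration; this is not the analysis code used in the work.

import numpy as np
from scipy.optimize import curve_fit

def relative_ucqy(phi_ref, E_ref, E_unk, I_ref, I_unk, n_ref=1.33, n_unk=1.33):
    """Relative upconversion quantum yield, Eq. (1)."""
    return phi_ref * (E_ref / E_unk) * (I_unk / I_ref) * (n_unk / n_ref) ** 2

def uc_decay(t, I0, k_A, phi_TTA):
    """Upconversion decay trace, Eq. (2): I0 * [(1 - phi) / (exp(k_A t) - phi)]**2."""
    return I0 * ((1.0 - phi_TTA) / (np.exp(k_A * t) - phi_TTA)) ** 2

# Placeholder decay trace (tau_UC ~ 102 us, Phi_TTA ~ 0.67) with a little noise
t = np.linspace(0.0, 600e-6, 300)
I_meas = uc_decay(t, 1.0, 1.0 / (2 * 102e-6), 0.67)
I_meas += 0.01 * np.random.default_rng(0).normal(size=t.size)

# Fit Eq. (2) to the trace to recover the annihilation efficiency Phi_TTA
popt, _ = curve_fit(uc_decay, t, I_meas, p0=(1.0, 5e3, 0.5),
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 1.0]))
print("fitted Phi_TTA = %.2f" % popt[2])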
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
7,463.4
2023-01-05T00:00:00.000
[ "Physics", "Chemistry" ]
Interlaboratory Comparison on Absolute Photoluminescence Quantum Yield Measurements of Solid Light Converting Phosphors with Three Commercial Integrating Sphere Setups

Scattering luminescent materials dispersed in liquid and solid matrices and luminescent powders are increasingly relevant for fundamental research and industry. Examples are luminescent nano- and microparticles and phosphors of different compositions in various matrices or incorporated into ceramics, with applications in energy conversion, solid-state lighting, medical diagnostics, and security barcoding. The key parameter to characterize the performance of these materials is the photoluminescence/fluorescence quantum yield (Φf), i.e., the number of emitted photons per number of absorbed photons. To identify and quantify the sources of uncertainty of absolute measurements of Φf of scattering samples, the first interlaboratory comparison (ILC) of three laboratories from academia and industry was performed by following identical measurement protocols. Thereby, two types of commercial stand-alone integrating sphere setups with different illumination and detection geometries were utilized for measuring the Φf of transparent and scattering dye solutions and solid phosphors, namely, YAG:Ce optoceramics of varying surface roughness, used as converter materials for blue light emitting diodes. Special emphasis was dedicated to the influence of the measurement geometry, the optical properties of the blank utilized to determine the number of photons of the incident excitation light absorbed by the sample, and the sample-specific surface roughness. While the Φf values of the liquid samples matched between instruments, Φf measurements of the optoceramics with different blanks revealed substantial differences. The ILC results underline the importance of the measurement geometry, sample position, and blank for reliable Φf data of the scattering YAG:Ce optoceramics, with the blank's optical properties accounting for uncertainties exceeding 20%.

■ INTRODUCTION
Applications range from sensing and bioimaging to barcoding, solid-state lighting, and energy conversion. Performance parameters of luminescent reporters include the spectral position of the luminophore absorption and emission bands, their spectral widths and overlap, as well as fundamental spectroscopic quantities acting as measures for the absorption and emission efficiency, such as the molar absorption coefficient or absorption cross section and the photoluminescence or fluorescence quantum yield (Φf).6 The latter equals the ratio of the number of emitted and absorbed photons, providing the conversion efficiency of absorbed into emitted photons.7,8 From the material or sample side, the size of the measurable fluorescence signal is determined by the product of the luminophore's absorption coefficient or absorption cross section at the excitation wavelength and Φf, termed brightness.6-21 Also, Φf measurements are an essential part of photophysical and mechanistic studies and provide the basis for the design of next-generation functional luminescent materials. This underlines the importance of reliable and reproducible Φf measurements for the scientific community, manufacturers, and users of commercial luminescent materials, as well as international standardization organizations like the International Electrotechnical Commission (IEC).
The Φf of transparent luminophore solutions can be determined relative to a so-called fluorescence quantum yield standard of known Φf using a conventional photometer and fluorescence spectrometer. However, the determination of Φf of scattering liquid and solid samples, such as dispersions of scattering luminescent particles, solid phosphors, and optoceramics, requires absolute measurements with an integrating sphere (IS).22 This triggered the renaissance of IS spectroscopy 23-29 and the development of stand-alone IS setups and IS accessories for many spectrofluorometers in the past decade. Also, typical sources of error and achievable measurement uncertainties were addressed and quantified. The need for reliable, comparable, and standardized Φf measurements expressed by companies involved in solid-state lighting and display technologies and/or the production and application of luminescent converter materials led to the first international standard IEC 62607-3-1 "Nanomanufacturing - Key control characteristics, Part 3-1: Luminescent nanomaterials - Quantum efficiency", released in 2014. As a response to this need and to improve the reliability of Φf data, BAM certified a set of 12 dye-based Φf standards for the ultraviolet (UV), visible (vis), and near-infrared (NIR) region in 2022.37 These Φf standards, utilized as transparent dye solutions, are designed for relative Φf measurements of transparent luminescent samples and the performance validation of IS setups.37 However, at present, there are no scattering Φf standards available. Aiming for the determination of sources of uncertainty of Φf measurements of scattering samples, together with the need to update the standard IEC 62607-3-1, we performed a first interlaboratory comparison (ILC) of absolute Φf measurements with commercial stand-alone IS setups. This ILC involved three laboratories from academia and industry: BAM, as a designated metrology institute and producer of reference materials such as fluorescence standards; the Fraunhofer Application Center for Inorganic Phosphors at the Campus Soest of the South Westphalia University of Applied Sciences (FH SWF), with longstanding expertise in the development and spectroscopic characterization of functional luminescent materials; and SCHOTT AG (Schott). Schott produces optical materials for applications in automotive, lighting, health care, optical, and semiconductor technologies. To identify typical sources of uncertainty and quantify achievable measurement uncertainties, this ILC included the following steps: (i) assessment of the reliability of the spectral correction curves provided by the instrument manufacturer with validated and BAM-certified spectral fluorescence standards,38,39 (ii) absolute Φf measurements of transparent dye solutions and dye solutions containing defined amounts of scattering silica (SiO2) particles, and (iii) absolute Φf measurements of industry-relevant scattering optoceramics of varying surface roughness. All measurements were performed according to the same measurement protocols using the same samples, provided by BAM and Schott. Special emphasis was dedicated to the influence of the illumination and detection geometries of the IS setups, the sample-specific surface roughness, and the scattering and reflectance properties of the chosen nonluminescent blank on the resulting Φf values.
Scatterer. Amorphous, nonporous SiO2 particles (300 nm) were added to the transparent dye and blank solutions in defined amounts to introduce scattering. The synthesis and characterization of the nonfluorescent SiO2 particles are provided in the Supporting Information (SI).

Blanks. BaSO4 powders B1 from Sigma-Aldrich (99.99% trace metals basis), B2 (Puratronic, 99.998%, LOT 24177), and B3 (Puratronic, 99.998%, LOT 10226568) from Alfa Aesar (Thermo Fisher Scientific) and two PTFE diffuser targets from SphereOptics with thicknesses of 250 μm (SG 3203) and 2 mm (SG 3213) were used. The latter was polished on one side to allow for blank measurements with a smooth and a rough surface. The targets were cut into circles with a diameter of 15 mm to fit into the quartz Petri dish.

■ RESULTS AND DISCUSSION
Comparison of Instrumentation. In this ILC, Quantaurus first-generation (C9920-02G, "Q1") and second-generation (C11347-11, "Q2") IS setups from Hamamatsu were used for absolute Φf measurements. The major differences between Q1 and Q2 are (i) the sample orientation within the IS and (ii) the illumination geometry, specifically the angle of incidence of the excitation light on the sample surface and the corresponding reflections. The latter depends on the measured sample. For solutions, the sample is positioned in the center of the IS in both setups. In Q1, the sample is oriented perpendicular to the excitation beam, whereas in Q2 it is positioned at a 28° angle to the excitation (Figure 1, central panels). Powders and solid samples are placed at the bottom of the IS with an 8° tilt to the excitation path in Q1 (Figure 1, top right panel). For Q2, such samples are positioned flat on the bottom of the IS, resulting in a 53° angle to the excitation beam (Figure 1, bottom right panel). These differences and other less relevant ones are summarized in Table S1.

Validation of the Setup Calibration Implemented by the Instrument Manufacturer. The first step to accurate and comparable Φf measurements is reliable instrument calibration. Thus, the reliability of the relative, wavelength-dependent spectral responsivity (emission correction) of the different IS setups was assessed and validated in the ILC-relevant wavelength range of 400 to 750 nm using the BAM KIT dyes F003-F005, a commercial set of certified spectral fluorescence standards. Figure 2 displays the measured corrected emission spectra of F003-F005, including the relative standard deviations calculated from the averaged measurement repetitions at absorbances of 0.1 (indicated by shaded areas). The data of all ILC partners and the corresponding certified spectra of F003-F005 are in good agreement and also closely match the dyes' emission spectra previously determined in an ILC on spectral correction.38,39 The small deviations from the certified values, obtained by dividing the measured spectra by the certified spectra of each dye, are more pronounced at shorter wavelengths. Within the spectral width of the emission band of each dye, the relative deviations are within ±0.1, confirming the reliability and comparability of the calibration of the IS setups.
Comparability of the Φf Measurements of Transparent Dye Solutions. Φf measurements of four transparent dye solutions were performed using the certified Φf standards F015, F016, F017, and F019,37 which vary in the spectral overlap between their absorption and emission bands and thus in their sensitivity to reabsorption effects,41 at two dye concentrations (OD 0.05 and OD 0.1 at λexc). The Φf values and measurement uncertainties (standard deviation of N = 4 × 2 × 3) as well as the certified Φf values of each dye are shown in Table 1 and Table S2. The uncertainties given in these tables do not contain contributions from the calibration uncertainty 41 of the IS setups. The obtained Φf data always match well with the certified Φf values, considering the respective measurement uncertainties. This comparison underlines the reliability of the calibration of all IS setups, with larger uncertainties observed for setup Q1 compared with Q2.

Absolute Determination of Φf of Scattering Dye-SiO2 Particle Dispersions. To explore the effect of scatterers on the Φf determination, defined amounts of 300 nm SiO2 particles were added to ethanolic solutions of the phosphates F015 and F016. As blanks, ethanol containing the same amount of SiO2 particles was used, thus accounting for scattering-induced changes in the distribution of the incident and emitted photons. To rule out possible quenching of the dye fluorescence by the SiO2 particles, time-resolved fluorescence measurements of the dye solutions without and with scatterers were performed by BAM prior to the Φf measurements. The matching fluorescence decay curves and lifetimes shown in Figure S2 confirm the absence of fluorescence quenching. All Φf values obtained by the ILC partners are presented in Table 1. The Φf values of the transparent and scattering dye solutions are in good agreement and match the certified Φf values. A prerequisite for this good comparability and the correct determination of the number of absorbed photons at the excitation wavelength is the consideration of scattering-induced changes in the light distribution within the IS by matching the scattering properties of sample and blank, i.e., keeping the size and amount of the SiO2 particle scatterers constant for sample and blank.

YAG:Ce Optoceramics (OC). Solid OC made from a scattering polycrystalline inorganic material, (Y1-yCey)3Al5O12 (y = 0.001 to 0.01), with a very high Φf, a high temperature stability, and an excellent long-term stability is a key functional material in modern lighting technologies. Applications are, e.g., converter materials for blue LEDs in the automotive industry. Φf is a key quantity for OC light conversion efficiency and performance. Despite the considerable application relevance, the comparability of Φf measurements of these materials across instruments and laboratories has not yet been assessed. In this first ILC we explored (i) the influence of the illumination and detection geometry of the IS setup, (ii) the choice of a suitable blank, and (iii) the influence of the sample-specific parameter "surface roughness" on the reliability and comparability of the resulting Φf values. Thus, we chose YAG:Ce OC samples with a Φf close to unity, as this facilitates the detection of small Φf differences and allows for the straightforward identification of Φf values that are physically not meaningful, i.e., exceeding 100%.
In this ILC, we absolutely determined the Φf of YAG:Ce OC samples with varying surface roughness using previously developed and well-documented measurement protocols (SI). The samples were placed in a small quartz dish on the sample holder, which is part of the IS surface in both the Q1 and Q2 setups (Figure 1). Blank measurements are commonly done with an empty IS, as the sample holder is coated with the same material as the IS surface, i.e., Spectralon, making it an ideal Lambertian diffuser with a reflectivity of 99%.42 However, the sample holder surface can be prone to aging and contamination by absorbing and/or luminescent impurities, e.g., from previous samples. This can change its scattering characteristics and result in nonideal scattering behavior. The presence of absorbing impurities can lead to an underestimation of the light fraction absorbed by the sample, as the area of the excitation part of the sample's spectrum is subtracted from that of the blank; as a result, Φf is overestimated. Thus, using an additional blank with scattering and transmission properties closely matching those of the sample is recommended and is a prerequisite for an identical, or at least similar, distribution of the excitation light for sample and blank. Also, the light distribution in the IS can be affected by the surface roughness of the sample and blank, as the first reflection of the excitation light can considerably influence the measured fraction of absorbed photons and hence Φf. To underline the importance of the blank's optical properties, we performed Φf measurements of the same YAG:Ce OC with different types of blanks in Q1 and Q2. First, to identify the optimum excitation wavelength, the absorption and emission intensities of the YAG:Ce OC were measured as a function of excitation wavelength from 430 to 470 nm in 5 nm increments using a thin PTFE foil blank. As shown in Figure 3, the highest emission intensities resulted for excitation wavelengths between 445 and 465 nm. However, the emission spectra and Φf values are independent of the excitation wavelength within the derived measurement uncertainties (see Figure 3 and Figure S4). All Φf measurements were therefore performed at 455 nm, matching the output wavelength of blue LEDs.

Influence of Blank for Q2. Prior to the ILC, different blanks were tested at BAM: the BaSO4 powders B1, B2, and B3, and a thin PTFE foil, chosen due to their high and nearly wavelength-independent reflectivity in the visible region. Φf measurements were performed with and without the lid of the Petri dish to realize (i) a rough surface without a specular reflection (powders of different grain size without a lid), (ii) a surface with specular reflectivity (with a lid), and (iii) a flat surface with Lambertian scattering (PTFE foil without a lid). The YAG:Ce OC was always placed in a Petri dish without a lid.
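As a concrete illustration of the blank-subtraction logic described above, the absolute Φf can be evaluated from spectrally corrected, photon-count spectra roughly as in the following NumPy sketch. The wavelength windows, array names, and numbers are placeholders chosen for illustration; the instrument software of the IS setups performs this step internally and may differ in detail.

import numpy as np

def absolute_qy(wl, counts_sample, counts_blank, exc_window, em_window):
    """Phi_f = N_emitted / N_absorbed, with
    N_absorbed = integral over the excitation window of (blank - sample), and
    N_emitted  = integral over the emission window of (sample - blank)."""
    exc = (wl >= exc_window[0]) & (wl <= exc_window[1])
    em = (wl >= em_window[0]) & (wl <= em_window[1])
    n_abs = np.trapz(counts_blank[exc] - counts_sample[exc], wl[exc])
    n_em = np.trapz(counts_sample[em] - counts_blank[em], wl[em])
    return n_em / n_abs

# Placeholder usage for a YAG:Ce-like measurement excited at 455 nm:
# phi_f = absolute_qy(wl, sample_spectrum, blank_spectrum,
#                     exc_window=(445, 465), em_window=(480, 750))
# A blank that under-reports the scattered excitation (absorbing impurities,
# mismatched scattering) shrinks n_abs and pushes the apparent Phi_f above 100%.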
For measurements using B1, B2, and B3 with and without a lid, physically meaningless and hence erroneous Φf values >100% were obtained, as shown in Figure 4. This indicates considerable differences in the scattering and reflection behaviors of the OC sample and the BaSO4 powder blanks. Likely explanations are differences in surface roughness and scattering behavior, as well as absorption characteristics caused by water adhesion during the fabrication process of the powders. This also explains the large Φf deviations of the supposedly identical powders B2 and B3 (same manufacturer, different batches). As suggested by the more reasonable Φf values of (98.5 ± 1.4)% and (92.5 ± 0.4)% obtained for B2 and the thin PTFE foil, both materials present suitable blanks. Subsequently in the ILC, these two blanks were used to explore the influence of different instruments, measurement geometries, and sample handling procedures, as well as the influence of the OC surface roughness.

ILC Results. The results of the Φf measurements of the YAG:Ce OC with Q1 and Q2 using B2 and the thin PTFE foil as blanks are shown in Figure 5. Both Φf data sets collected with setup Q1 closely match, with no observable blank influence. For B2, we obtained Φf values of (99.2 ± 0.7)% (FH SWF) and (99.6 ± 0.7)% (Schott), and for the thin PTFE foil, (99.7 ± 0.4)% (FH SWF) and (99.6 ± 0.4)% (Schott). The statistical data analysis also included individual measurements with Φf values exceeding 100%. Measurements with Q2 yielded significantly lower Φf values and deviations in Φf between the two blanks, i.e., Φf values of (93.6 ± 0.9)% and (98.5 ± 1.2)% for B2 and the PTFE foil, respectively. The observed deviations in Φf values for Q1 and Q2 cannot be attributed to the instrument calibration, as the corrected emission spectra and the Φf values of the transparent dye solutions measured with the three setups were in good agreement, also with the certificate (cf. the Validation and Comparability sections). A more likely explanation for the considerable deviations in Φf is a different light distribution within the IS for the scattering samples, resulting from the different measurement and detection geometries of setups Q1 and Q2. As shown in Figure 1, for Q1 an almost perpendicular sample excitation is realized (8° to the surface normal), while in the case of Q2 the excitation light hits the sample at an angle of 53° with respect to the sample surface normal. This small geometric difference between the setups affects the sensitivity to the blank surface structure. Evidently, the use of powder blanks is more error prone and can introduce higher, additional (handling) uncertainties, as control of the powder distribution within the Petri dish, even with a lid to smoothen the surface, is challenging. We thus recommend the usage of a solid blank like the PTFE foil for Φf measurements of solid samples.
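The magnitude of such errors follows directly from the blank subtraction. As an illustrative estimate (the numerical values below are assumed for the example, not measured in this work): if the blank under-reports the scattered excitation signal by a fraction δ while the sample absorbs a fraction α of the excitation light, the apparent quantum yield is inflated by

\[ \frac{\Phi_{f,\mathrm{app}}}{\Phi_f} = \frac{B - S}{(1-\delta)\,B - S} = \frac{\alpha}{\alpha - \delta}, \qquad \alpha = \frac{B - S}{B}, \]

where B and S are the integrated excitation-region signals of blank and sample, respectively. For α = 0.5 and δ = 0.1 this already corresponds to a 25% overestimation, i.e., of the same order as the deviations observed here for some of the BaSO4 blanks.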
Influence of Sample and Blank Surface Properties. To assess the influence of the sample's surface properties on the Φf determination with Q2, four YAG:Ce OC samples with different surface roughness were produced. The Φf values of these diffusely reflecting samples were then measured using four different blanks (Table S7). To study the effect of the specularly reflecting surface of the quartz lid of the Petri dish and of the diffusely scattering surfaces of the thin PTFE foil and the Spectralon surface of the IS, in combination with different degrees of roughness of the YAG:Ce OCs, we selected the previously used thin PTFE foil with and without the Petri dish (without a lid), the empty IS, and the empty Petri dish without any additional blank material. The Φf values derived for the YAG:Ce OCs with a small and a high surface roughness (sample 1 with the smoothest and sample 4 with the roughest surface) are displayed in Figure 6a. The results obtained for samples 2 and 3 with intermediate surface roughness are shown in Figure S6 and summarized in Table S7. Interestingly, Figure 6a reveals that the surface roughness of the sample did not significantly affect the Φf values obtained with Q1 and Q2, regardless of the blank used. However, the Φf values clearly differed between the two IS setups. As observed in the previous section, the blank considerably influences the Φf values obtained with both setups: (i) Using the PTFE foil with and without the Petri dish led to Φf values exceeding 100% for Q1, while with Q2 maximum Φf values of (95.7 ± 0.5)% were obtained. (ii) Similar results are found for the empty Petri dish, with Φf exceeding 100% or reaching values close to 100% for Q1, while measurements with Q2 exhibited Φf values of (92.8 ± 0.6)%. (iii) While Φf measurements with the empty IS as a blank did not yield Φf values >100%, the deviation between the Φf measurements with Q1 and Q2 was found to be 7.2%. As previously suggested, these effects can be attributed to (i) differences in the light distribution of the scattering sample/blank as a result of the differences in illumination and detection geometry for Q1 and Q2 and/or (ii) changes in the reflectivity of the IS surface at the sample position, potentially due to contaminations and/or aging effects. Apparently, none of the blanks used sufficiently matches the samples' scattering properties (specular or diffuse). Overall, the Φf values measured with Q1 seem to be overestimated, as indicated by Φf values exceeding 100%. For Φf measurements with Q2, all Φf values were <100%, with deviations of 3.9% and 3.4% between the four different blanks (Figure 6a) for samples 1 and 4, respectively. Then, we assessed the performance of a custom-made 2 mm thick PTFE target blank with a high (>95%) and almost wavelength-independent reflectivity in the visible. Due to its larger thickness, the reflectivity of the PTFE target is about 20% higher than that of the 250 μm thin PTFE foil (cf. Figure S7). To mimic the sample surface, one side of the PTFE target was polished, and the other was kept "rough" for effective diffuse scattering. Figure 6b displays the Φf values of samples 1 and 4 obtained with the smooth and the rough side of the PTFE target. Generally, the Φf values measured by the three ILC partners are (i) below 100% and (ii) in closer agreement within the statistical uncertainties.
The different surface roughnesses of samples 1 and 4 did not affect the Φf values measured with Q1, yet deviated by about 2% for Q2, with the smoother sample 1 yielding smaller Φf values. For sample 4, similar Φf values as measured with Q1 were obtained for the smooth and the rough PTFE target. Overall, our results indicate slightly higher experimental uncertainties of about 2% for sample 1 (smooth surface with a specular reflection) for Q2. Comparing the overall performance of the three IS setups, measurements with Q1 yielded higher Φf values with smaller experimental uncertainties. However, Φf values exceeding 100% were measured depending on the chosen blank. Measurements with Q2 led to smaller Φf values with slightly higher uncertainties, which are more strongly affected by the blank. These deviations cannot be attributed to the instrument calibrations (cf. Figure 2) but are likely caused by the measurement geometry and the light distribution inside the IS. This is supported by the converging Φf values measured with Q1 and Q2 with a nonabsorbing, diffusely reflecting target (PTFE, 2 mm) as blank. During the ILC, we noticed that some deviations in Φf measurements may result not only from instrument calibration and sample handling but also from the instrument settings, i.e., the accuracy of the measurement parameters automatically selected by the instrument and their stability. This can differ between IS setups. An example is shown in Figures S8-S10, where the spectral shape of the excitation peak shifted between sample and blank measurements, pointing to lamp instabilities or a monochromator drift. For the measurement series with N = 4 × 2 × 4, the Φf values are solely affected by handling uncertainties. Also, instrument aging must be considered, as revealed by differences in Φf values of about 2% for the same sample/blank pair collected five months apart.

■ CONCLUSION AND OUTLOOK
Aiming for the identification and quantification of uncertainty sources of Φf measurements of scattering liquid and solid luminescent samples, we performed the first interlaboratory comparison (ILC) of absolute Φf measurements. This ILC involved three laboratories from academia and industry, using two commercial stand-alone integrating sphere (IS) setups with different illumination and detection geometries (Quantaurus first (Q1) and second (Q2) generation from Hamamatsu). As representative samples, we selected transparent and scattering dye solutions as well as solid phosphors, namely YAG:Ce optoceramics (YAG:Ce OC), broadly exploited as converter materials for blue LEDs. Following carefully developed measurement protocols for the ILC, we systematically explored (i) the influence of the illumination and detection geometry, (ii) the optical properties of the blank, necessary to determine the number of absorbed photons, and (iii) the influence of the sample-specific surface roughness, representatively varied for the YAG:Ce OC samples. As a prerequisite for the ILC, the reliability of the spectral correction curves implemented by the instrument manufacturer was first assessed using the BAM-certified spectral fluorescence standards F003-F005. The good agreement of the measured corrected emission spectra and certified spectra with the dye spectra obtained in a previous ILC on the spectral correction of emission spectra confirmed the reliability of the setup calibrations.
One of the key findings of this ILC is that, although the differences in the illumination and detection geometry of the two IS setups appear to be small, they are only negligible for liquid transparent luminescent samples. These differences appear to be also insignificant for scattering luminescent solutions, yet a prerequisite is a blank with scattering properties closely matching those of the sample. This was realized in this ILC by matching the concentrations of the silica particles in the dye solutions and the blank. For a more general statement and general recommendations for the absolute determination of the Φf of scattering luminescent dispersions, comprehensive experiments with luminescent particle dispersions covering a large range of scattering cross sections and scatterer concentrations are necessary, which was beyond the scope of this ILC. For absolute Φf measurements of solid luminescent and scattering materials such as broadly applied OCs, special care is needed to circumvent measurement uncertainties originating from impurities or changes in the light distribution caused by different scattering and reflection properties of the blank and the sample. Reliable absolute Φf measurements of such samples require careful consideration of the illumination and detection geometry of the IS setup and the selection of a suitable blank. Criteria for the blank choice are (i) a good match with the sample's scattering properties, specular or diffuse, and (ii) the ease of handling and reproducible positioning within the integrating sphere. Use of the sample holder as a blank is not recommended, due to potential aging of its material and the ease of introducing absorbing and/or emissive contaminations from previously measured samples. Also, BaSO4 powders and thin PTFE foils cannot be recommended as blanks for such OCs or other solid samples with similar scattering characteristics. Some of the BaSO4 powders utilized in this ILC led to an overestimation of the resulting Φf by about 20%, indicating batch-to-batch variations of commercial BaSO4 powders. Moreover, BaSO4 powders can introduce larger uncertainties related to handling, as their surface roughness cannot be well controlled and reproduced. Also, the use of thin and flexible PTFE foil blanks can lead to considerable deviations between the Φf data obtained with different IS setups. Ultimately, we recommend nonabsorbing blank materials with a high reflectivity (>95%), like the 2 mm thick PTFE target, to be placed on the sample holder, as this provided physically meaningful and comparable Φf values for both IS setups used in this ILC. We ascribe this finding to the near-Lambertian light scattering behavior of this material, yielding a homogeneous light distribution within the IS. Overall, standardized measurement protocols in combination with a validated blank are mandatory to ensure reliable Φf measurements. Only this will enable a direct comparison of different IS setups and Φf data from different laboratories. In the future, we plan a similar ILC with a larger number of partners with different types of IS setups to broadly establish measurement uncertainties and identify instrument-specific sources of uncertainty. Also, scattering Φf standards were developed by BAM.
Additional experimental details, calculation of uncertainty contributions and uncertainty budget, method, validation, and overview of CRM spectral properties (PDF)

■ Optoceramic (OC) Sample. A YAG:Ce OC ((Y1-yCey)3Al5O12 (y = 0.001 to 0.01)) was provided by Schott. More details on sample preparation and measurement conditions are given in the SI.

Figure 3. Emission spectra of the YAG:Ce OC as a function of excitation wavelength (λexc = 430-470 nm). Inset: photo of the polished OC sample.

Figure 4. Absorption (red) and Φf (blue) of the YAG:Ce OC measured in a quartz Petri dish without a lid with Q2. Blanks: BaSO4 powders B1, B2, and B3 and a 250 μm thick PTFE foil, each placed in the quartz Petri dish and measured with (solid bars) and without (shaded bars) a lid. The YAG:Ce OC was always placed in the Petri dish without a lid. λexc = 455 nm.

Figure 6. (a) Φf values of YAG:Ce OC samples without (solid blue) and with the highest degree of surface roughness (striped), determined with IS setups Q1 and Q2 using four different blanks: (i) empty IS, (ii) empty Petri dish, (iii) thin PTFE foil without the Petri dish, and (iv) thin PTFE foil in the Petri dish. (b) Φf measurements utilizing a 2 mm thick PTFE target with a smooth and a rough surface as a blank. λexc = 450 nm.
6,602.4
2024-04-17T00:00:00.000
[ "Materials Science", "Physics" ]
Intelligent HTC for Committor Analysis

Committor analysis is a powerful, but computationally expensive, tool to study reaction mechanisms in complex systems. The committor can also be used to generate initial trajectories for transition path sampling, a less-expensive technique to study reaction mechanisms. The main goal of the project was to facilitate an implementation of committor analysis in the software application OpenPathSampling ( http://openpathsampling.org/ ) that is performance portable across a range of HPC hardware and hosting sites. We do this by the use of hardware-enabled MD engines in OpenPathSampling coupled with a custom library extension to the data analytics framework Dask ( https://dask.org/ ) that allows for the execution of MPI-enabled tasks in a steerable High Throughput Computing workflow. The software developed here is being used to generate initial trajectories to study a conformational change in the main protease of the SARS-CoV-2 virus, which causes COVID-19. This conformational change may regulate the accessibility of the active site of the main protease, and a better understanding of its mechanism could aid drug design.

Introduction
The main goal of this project is to facilitate an implementation of committor analysis in the software application OpenPathSampling (OPS) [1,2] that is performance portable across a range of HPC hardware and hosting sites. Committor analysis is essentially an ensemble calculation that maps straightforwardly to a High Throughput Computing (HTC) workflow, where typical individual tasks have moderate scalability and indefinite duration. Since this workflow requires dynamic and resilient scalability within the HTC framework, OPS is coupled to a custom HTC library [3] that leverages the Dask [4,5] data analytics framework and implements support for the management of MPI-aware tasks. The committor can also be used to generate initial trajectories for transition path sampling, a less-expensive technique to study reaction mechanisms. The software developed here is being used to generate initial trajectories to study a conformational change in the main protease of the SARS-CoV-2 virus, which causes COVID-19. This conformational change may regulate the accessibility of the active site of the main protease, and a better understanding of its mechanism could aid drug design. The project targets porting the custom HTC library to a wider variety of HPC platforms (specifically additional resource managers and MPI runtimes) and stress testing the framework for very large task counts. Both OPS and jobqueue features are Python-based, and we have developed more sophisticated support for Python-based MPI-enabled tasks, including direct access to the memory space of executed tasks (which allows us to avoid the use of the filesystem for information transfer between tasks). We have also investigated the use of the UCX protocol for communication to reduce the overall overhead of the HTC framework.

Preparing OpenPathSampling for the HTC framework
Leveraging the Dask framework involved significant software development within OPS. Two main problems had to be addressed: OPS had no mechanism to gather results reported from parallel workers into a single result set, and tasks based on OPS could not be serialised by Dask.
Both of these problems were solved by building atop an ongoing overhaul of the OPS storage and serialisation subsystem. These have been combined with a new implementation of the committor simulation to make a usable parallel committor simulation for OPS. The overall approach to interfacing the HTC framework with OPS was to allow the HTC framework to effectively act as a drop-in replacement for objects from the Dask scheduler. This required significant effort to prepare OPS to support Dask in general, but then little additional effort to support the HTC framework. In particular, OPS could not trivially support Dask because:
• Some objects in OPS could not be serialised by Cloudpickle (the default serialisation in Dask).
• OPS caches results of some calculations (functions referred to as "collective variables") in memory and also stores them to disk. Maintaining this behaviour in a parallel scheme required new objects to replace the existing collective variables.
A new storage subsystem was already in development for OPS for reasons of performance, extensibility, and maintainability. We completed the core functionality of that storage subsystem and extended it to address issues with Dask compatibility. The incompatibility with Cloudpickle was overcome by using the new storage subsystem's serialisation capabilities to serialise the data into a memory-based database, which could then itself be serialised by Cloudpickle. Deserialising the memory-based database then becomes part of the task. Developing new collective variables was a planned part of the new storage subsystem, and we added functionality to support gathering results into a canonical copy of the collective variable as part of the process of storing results from a task to disk. In addition, we added support for the OPS GROMACS engine, which we use as the scalable, hardware-portable MD engine that drives the OPS tasks.

Development and optimization of the jobqueue features HTC library
Building upon [3], we expanded the capabilities of jobqueue features [6] to include the following:
• Support for MPI-enabled tasks (i.e., tasks which can leverage mpi4py for parallelisation). The framework has the ability to access the memory space of the root MPI process of each task.
• Support for a Portable Batch System (PBS) resource scheduler.
• Continuous integration support for SLURM and PBS. This means that the library features are fully tested (automatically) on both of these schedulers through the use of a set of custom Docker containers.
• Support for OpenMPI, Intel MPI and MPICH MPI runtimes (including runtime process distribution).
• A Docker-based tutorial infrastructure which includes a SLURM scheduler. The infrastructure includes two computation nodes and one head node. On the head node a JupyterLab instance is installed, which allows the user to interact with the tutorial infrastructure (and explore our package) directly through their browser. We intend to create some tutorial Jupyter notebooks to facilitate use of this infrastructure.
• Support for Dask and Dask.distributed 2.x (tested up to version 2.21, released in July 2020).
• Configuration of the HTC library for the PRACE resources JUWELS and Irene. Irene uses both a custom scheduler (based on SLURM) and a custom MPI runtime; however, we were able to override the more generic classes of jobqueue features to support these (see Appendix B for details).
MPI-enabled task support is now tested in a continuous integration setup, and this feature was the main test case used for our scalability studies. For each task, the framework reuses the MPI environment created when the worker was initialised on the node. We have tested the framework with PyLAMMPS to show that each task does indeed have access to, and uses, all available resources (this information is something that LAMMPS reports in its output). A minimal example of what the implementation of this looks like in code is provided in Listing 1.

    return "Potential energy: %s" % L.eval("pe")  # final line of the MPI-enabled lammps_task body

@on_cluster(cluster=lammps_cluster)
def my_lammps_job(input_file="in.melt", run_steps=100):
    t1 = lammps_task(input_file, run_steps=run_steps)
    print(t1.result())  # blocking call (t1 is a lazy future object)

Listing 1: Example of decorator usage to parallelize computation

The main actions for optimisation/improvement were:
• Porting of the HTC framework to Dask 2.x.
• Porting of the framework to Irene and JUWELS.
• Implementing support for the PBS resource manager and additional MPI runtimes.
• Continuous integration development to cover all library features.
• Investigation of UCX instead of IPoIB as a communication protocol.
There was no particular bottleneck to these developments; a lot of the time was spent resolving diverse issues related to the rapid development of the underlying Dask infrastructure. HPC network infrastructure is one key potential weakness, since we currently use IPoIB and require a network connection between the scheduler and the workers. Since the interface used by the scheduler and those of the workers are now independently configurable, we have not (as yet) encountered a situation where issues related to this were not surmountable. Another issue is that the Dask scheduler is intended to be a (relatively) long-running task (with low/moderate resource usage). This is because when the Dask scheduler submits jobs to the resource manager, it cannot know when the jobs will execute nor how long the entire workflow will take. The scheduler would therefore normally run on a login node. When using JupyterLab on the Juelich Supercomputing Centre (JSC) systems (here JUWELS in particular), a Dask scheduler has the potential to run for up to 3 days. On Irene, we also did not encounter limitations to running the scheduler on the login nodes, but we are aware this is likely to be an issue at some sites. The scheduler can always be run within a resource manager job context.
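For context, the generic pattern that such a Dask-based HTC setup builds on looks roughly as follows. This is a minimal sketch using the standard dask_jobqueue and dask.distributed APIs with placeholder queue names and resource values; it illustrates the scheduler/worker-job relationship described above and is not the jobqueue features code itself.

from dask.distributed import Client
from dask_jobqueue import SLURMCluster

# Each job submitted by the cluster object to SLURM becomes one Dask worker;
# the scheduler runs in the current process (login node or a job context).
cluster = SLURMCluster(
    queue="batch",          # placeholder partition name
    cores=48,               # cores per worker job
    memory="96GB",          # memory per worker job
    walltime="02:00:00",
)
cluster.scale(jobs=5)        # ask the resource manager for five worker jobs

client = Client(cluster)

# Tasks are ordinary Python callables; futures resolve as workers come online.
futures = [client.submit(pow, i, 2) for i in range(1000)]
print(sum(f.result() for f in futures))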
Scalability of jobqueue features
In an HTC use case, strong and weak scalability are open to definition. To give an initial definition, one could say that strong scaling would correspond to the ability of the individual tasks to scale to larger resource usage. Such a measure is entirely dependent on the scalability of the applications used within the tasks themselves rather than on the HTC framework. In the case of OPS, it uses an underlying community application engine to achieve scalability (GROMACS for the use case described here, but this is configurable). We have tested the framework with both GROMACS (as used within OPS) and LAMMPS (in particular PyLAMMPS). Neither of these applications is directly under the remit of this project, and both applications are known to scale well on HPC resources. Another argument that could be made is that strong scaling could be defined as the number of simultaneous workers that the framework can utilise. Since each worker is an individual job, the limitation here does not come from the framework; it is the number of simultaneous jobs that can be allocated by the resource manager for a single user (which is usually of O(100) but varies from site to site). Since job submissions are ultimately handled by the library dependency dask jobqueue [7], we must wait for support of job arrays [8] to be able to reliably perform any such analysis (and this was not in the remit of this project). Our interest in this project lies in the scalability of the HTC framework itself with respect to task counts. We would argue that this is somewhat equivalent to a weak scaling analysis when considering an HTC workload: a higher task count is equivalent to a larger workload and triggers the dynamic provisioning of additional workers by the HTC framework. jobqueue features had not really been stress-tested for the number of tasks it was capable of supporting. Previous efforts had looked at up to 2000 tasks; in this case we scaled out to 1M tasks on all available architectures, with each individual task using 2 nodes worth of resources. Each set of 2 nodes forms a worker, and the workers are reused by queued tasks. The number of workers that the framework can use is entirely dependent on the maximum number of simultaneous jobs allowed by a specific site. The package can simultaneously use workers of different resource types (CPU, KNL, GPU, ...). As we can see from Figure 1 (and Table 1), we were able to easily scale our task workload out to 1M tasks. The overhead of the framework is negligible, at 1 ms per task for the SkyLake (SKL) architecture and 10 ms per task for the Knights Landing (KNL) architecture (and is completely independent of the resource usage of the workers themselves). Such overheads are negligible even for short task durations. To put this in context, the node configuration time on JUWELS is about 10 s; on Irene it is about 6 s for SKL and 29 s for KNL (see Table 2). CPU time savings for short task durations are, therefore, potentially substantial when using the framework. We note that maintaining a functional software environment is non-trivial for the use cases we address here; there are many complications possible due to the software requirements of the framework and the tasks themselves. This is in addition to the complexity due to the potential to simulate simultaneously on a variety of hardware (such as we have done for the CPU, GPU and KNL partitions of JURECA).
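To put these per-task overheads in perspective, a back-of-the-envelope estimate (the 60 s task duration below is an arbitrary illustrative value, not a measured one) shows why they are negligible compared with the worker start-up costs quoted above:

\[ f_{\mathrm{overhead}} = \frac{t_{\mathrm{over}}}{t_{\mathrm{task}} + t_{\mathrm{over}}} \approx \frac{10^{-3}\,\mathrm{s}}{60\,\mathrm{s}} \approx 2 \times 10^{-5} \]

for the 1 ms SKL overhead, and about 2 × 10^-4 for the 10 ms KNL overhead, i.e., orders of magnitude below the ~10 s per-worker node configuration cost amortised over the tasks that each worker serves.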
UCX as a communication protocol To test the efficiency of the UCX protocol in comparison to TCP (using IPoIB in our case), we used several test cases with different kinds of computation, from basic hand written functions to those based on functions provided by Dask.The most significant differences between Dask related operations and hand written tests can be seen on two specific tests: increment mapped on a Dask bag, and regular tree reduction.This can be seen in Tables [7,8,9,10].As we can see, UCX working only on Dask bag gives slightly better results then TCP.However, when it comes to also working on multiple tasks, TCP lost less time on communication.In Tables [11, 12, 13, 14], we can see that for the case of a fully hand written tree reduction, we have better performance for TCP for both single task, and multiple tasks. Size of data in task TCP On most of these tests, UCX performance is not very different to TCP performance (where we are utilising IPoIB).In general, UCX does give slightly better results for tasks based on Dask functions and TCP performs better for hand written tasks (most likely due to the fact that Dask can perform optimisations when it has control over the data objects).Critically, UCX has been seen to lead to some errors related to the fact that it is not yet a fully supported technology within Dask.In particular, during tests there were problems in the case of restarting the workers while UCX was in use, which means that we potentially lose resiliency in this scenario.Given the very limited potential for performance improvement and the significant impact of sacrificing resiliency, we would not recommend the use of UCX with jobqueue features (or even Dask) at this time. Conclusions Our HTC library jobqueue features is resilient and scales extremely well.OPS was expanded to be able to use the Dask framework and integration with the jobqueue features library now requires just 3 lines of code (see Appendix A for details).We have shown that support on other PRACE machines is a relatively straightforward process.This means that OPS can now almost trivially transition from use on a personal laptop to some of the largest HPC sites in Europe. UCX support was found to be somewhat unstable (currently) and beneficial only in specific scenarios that make heavy use of Dask objects.Several times on closing a cluster we received errors about UCX losing the connection, but results were correctly returned.Problems appear when a cluster attempts to restart, then the connection can not be reestablished.For this reason we say that resiliency is currently compromised when using UCX, as such we would not recommend its use until this issue is resolved. The parallelised committor simulation capability that the integration between OPS and the HTC library provides allows for the possibility to more easily address committor analysis in a scalable way.Ongoing work will use the tools developed here for a committor simulation of the SARS-CoV-2 main protease.Initial analysis of the stable states is based on a long trajectory provided by D.E.Shaw Research [9].That trajectory indicates that a loop region of the protein may act as a gate to the active site.Initial frames will be taken from that trajectory to be used as configurations in the committor simulation. 
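A rough sketch of the two kinds of benchmark kernels compared above — an increment mapped over a Dask bag (left to Dask's own machinery) and a hand-written tree reduction built directly from futures — is given below. It is illustrative only; the data sizes and partitioning used in the actual UCX/TCP tests differ, and the transport protocol itself is selected when the cluster is created, not in this snippet.

    import operator
    import dask.bag as db
    from dask.distributed import Client

    client = Client()  # transport (TCP over IPoIB vs. UCX) is chosen at cluster creation

    # Dask-managed kernel: increment mapped over a bag.
    bag = db.from_sequence(range(100_000), npartitions=100)
    print(bag.map(lambda x: x + 1).sum().compute())

    # Hand-written kernel: pairwise tree reduction built directly from futures.
    def tree_reduce(client, values):
        futures = client.scatter(values)            # one future per input value
        while len(futures) > 1:
            pairs = list(zip(futures[::2], futures[1::2]))
            reduced = [client.submit(operator.add, a, b) for a, b in pairs]
            if len(futures) % 2:                    # carry any odd element forward
                reduced.append(futures[-1])
            futures = reduced
        return futures[0].result()

    print(tree_reduce(client, list(range(1024))))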
For a given molecular system, the committor simulation can be scaled up in two ways: by taking more initial configurations, which ensures a broader exploration of configuration space, and by running more trajectories per configuration, which leads to a more accurate calculation of the committor value. The results of this simulation will provide insight into the dynamics of the loop region and the mechanism of its gate-like activity. Furthermore, trajectories generated by the committor simulation can be used as initial conditions for further studies using methods such as transition path sampling.

B Integration with the PRACE Irene Joliot-Curie system

The PRACE Irene Joliot-Curie system uses a fully customised workload manager as a wrapper for several workload managers, such as SLURM or Moab. The jobqueue features library is configured for standard SLURM and PBS workload managers, and so some integration steps were required. The syntax of the custom workload manager wrapper was far from common but, for the most part, corresponds with an underlying SLURM syntax. This general correspondence allows for a relatively straightforward customization of the jobqueue features library to allow for interaction with the Irene system.

The required steps to achieve such integration were the identification of the specialized commands and syntax of Irene, and mapping those to the classes already available within jobqueue features. The customizations of the jobqueue features classes are shown in Listing 3.

Crucial to each test, we need to create data structures for them to use. That is why we test the impact of that creation for both the TCP and UCX protocols, shown in Tables 3-6. As we can see, the times of single operations do not show much difference, which was expected. For a whole set of tasks, where most of the communication takes place, TCP shows better results.

Figure 1: Total overhead of the framework per task in seconds for various numbers of tasks and on various architectures.
Table 1: Overhead per task (seconds, excluding node configuration) on different clusters. A maximum of 5 workers were available for each task count.
Table 2: Node configuration times.
Table 3: Data set creation time per task for different protocols in relation to data size.
Table 4: Data set creation computation time per set of tasks for different protocols in relation to data size.
Table 5: Data set creation time per task for different protocols and numbers of tasks.
Table 6: Data set creation computation time per set of tasks for different protocols and numbers of tasks.
Table 7: Bag increment computation time per task for different protocols relative to data size.
Table 8: Bag increment computation time per set of tasks for different protocols relative to data size.
Table 10: Bag increment computation time per set of tasks for different protocols and number of tasks.
Table 14: Tree reduction computation time per set of tasks for different protocols and numbers of tasks.
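For completeness, the general dask_jobqueue customisation pattern that the Appendix B integration relies on is sketched below. This is only an illustration: the real jobqueue features overrides are more involved (see Listing 3), and the wrapper command names shown here are assumptions rather than verified Irene syntax.

    from dask_jobqueue.slurm import SLURMCluster, SLURMJob

    class IreneJob(SLURMJob):
        # Replace the plain SLURM commands with the site wrapper equivalents.
        # Command names are assumed for illustration, not verified Irene syntax.
        submit_command = "ccc_msub"
        cancel_command = "ccc_mdel"

    class IreneCluster(SLURMCluster):
        job_cls = IreneJob

    cluster = IreneCluster(cores=48, memory="90GB", walltime="01:00:00")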
4,212.6
2020-11-17T00:00:00.000
[ "Computer Science" ]
Dual conditional GAN based on external attention for semantic image synthesis Although the existing semantic image synthesis methods based on generative adversarial networks (GANs) have achieved great success, the quality of the generated images still cannot achieve satisfactory results. This is mainly caused by two reasons. One reason is that the information in the semantic layout is sparse. Another reason is that a single constraint cannot effectively control the position relationship between objects in the generated image. To address the above problems, we propose a dual-conditional GAN with based on an external attention for semantic image synthesis (DCSIS). In DCSIS, the adaptive normalization method uses the one-hot encoded semantic layout to generate the first latent space and the external attention uses the RGB encoded semantic layout to generate the second latent space. Two latent spaces control the shape of objects and the positional relationship between objects in the generated image. The graph attention (GAT) is added to the generator to strengthen the relationship between different categories in the generated image. A graph convolutional segmentation network (GSeg) is designed to learn information for each category. Experiments on several challenging datasets demonstrate the advantages of our method over existing approaches, regarding both visual quality and the representative evaluating criteria. Introduction Conditional image synthesis mainly uses text, Gaussian noise or semantic layout to generate constrained images.Typically, conditional Generative Adversarial Networks (GANs) (Mirza & Osindero, 2014) are common approaches for conditional image synthesis.In conditional image synthesis, semantic image synthesis is to generate photorealistic images through semantic layouts.Since the information contained in the semantic layout is relatively sparse, semantic image synthesis is a huge challenge to image synthesis methods. Semantic image synthesis is widely used, for example past work includes specified content creation (Mirza & Osindero, 2014;Ntavelis et al., 2020) and drawing editing (Park et al., 2019;Tang, Xu, et al., 2020;Zhu et al., 2020) and other related work.In addition, the industrial applications of this work are also very wide, such as virtual reality and AIGC related applications. Currently, the semantic image synthesis methods based on GANs generally use noise as the input, and the semantic layout is used to control the image synthesis process through the adaptive normalisation methods.SPADE (Park et al., 2019) is the representative semantic image synthesis method.SPADE effectively solves the problem of blurred boundaries of each category in the generated image.CC-FPSE (Liu et al., 2019) and SCGAN (Y.Wang et al., 2021) are improvements based on SPADE (Park et al., 2019), and these methods have achieved good results.However, since these methods only use a single constraint to control the synthesis process, the quality of the generated images still cannot meet the needs of users. 
In addition, the discriminator also has an impact on the quality of the generated images.In GANs, the discriminator mainly consists of a convolutional network.Generally, PatchGAN (Isola et al., 2017) is a commonly used discriminator.Recently, some new discriminators have also been proposed.OASIS (Schonfeld et al., 2021) proposed a novel discriminator based on the segmentation network.The discriminator based on the segmentation network can effectively prompt the generator to generate the object shapes that conform to the semantic layout.But to a certain extent, it ignores the positional relationship information between different categories. In order to solve the above problems, we propose Dual-conditional GAN with based on an external attention for semantic image synthesis (DCSIS).In DCSIS, the adaptive normalisation module uses the semantic layouts of one-hot encoding to generate the first constraint.The external attention uses the semantic layouts to generate the second constraint.Two different modules perform two constraint controls on the input in sequence, forming a dual conditional attention (DCA).Compared with only using a single constraint, DCA can better utilise the category information and boundary information in the semantic layouts to synthesise better detailed information.Attention mechanism has been widely used in image synthesis and effectively improve the quality of synthesised images (Tang et al., 2019;Q. Wang et al., 2020).A novel graph attention (GAT) is introduced into the generator, which aims to strengthen the relative positional relationship between objects of different categories. DCSIS has two discriminators, one is the traditional discriminator SESAME (Ntavelis et al., 2020) and the other is the proposed segmentation network based on graph convolutional network.The proposed segmentation network based on graph convolutional network can not only align semantic information, but also better establish relationships between objects of different categories.The overview of the proposed DCSIS model is shown in Figure 1.We conduct experiments on three challenging datasets. In general, the main contributions of this paper are as follows: (1) We proposed two constraint methods to control the synthesis process.The semantic layout of RGB format and the semantic layouts of one-hot encoding are used to generate two constraints and form dual conditional control.(2) We designed a segmentation network discriminator based on graph convolutional network, which can better align semantic information.(3) We designed a novel graph attention to enhance the relational information between objects of different categories. Related work Generative adversarial networks have achieved remarkable success on unconditional image synthesis tasks (Brock et al., 2019;Tero Karras et al., 2019;T. Karras et al., 2020).Since the result of unconditional image synthesis is uncontrollable, conditional image synthesis using external control information to control the result of image synthesis is proposed.Semantic layout is a commonly used control information in conditional image synthesis, which is mainly used as the input of the generator.Pix2pix (Isola et al., 2017) and pix2pixHD (T.-C.Wang et al., 2018) are classical conditional image synthesis methods that take semantic layout as the input of the generator.Edge-GAN (Tang, Qi, et al., 2020) used edge details to optimise detailed structural information for image synthesis. 
Due to the sparsity of the information contained in the semantic layout, directly using the semantic layout as the input increases the pressure of network learning and it is difficult to effectively improve the quality of the generated images.Therefore, the current mainstream conditional image synthesis methods generally use noise as the input of the generator, and the semantic layout as the constraint to control the image synthesis process. At present, using the adaptive normalisation method that takes the semantic layout as the input to constrain the image synthesis process has gradually become the mainstream constraint method.AdaLIN (Kim et al., 2019), SPADE (Park et al., 2019), SEAN (Zhu et al., 2020), the class-adaptive normalisation method (D.Chen et al., 2020) and SAFM (Lv et al., 2022) are some well-known adaptive normalisation methods.These adaptive normalisation methods take the semantic layout as the input, and utilise the semantic layout to constrain the features of the noise during the normalisation process.The adaptive normalisation methods generate corresponding parameters through semantic layout to control the normalisation results.Since the semantic layout is only used to generate normalised parameters, it effectively avoids the defect that the semantic layout contains sparse information. Except for the generator, the discriminator also has an impact on the quality of the generated images.Some new discriminators are proposed. CC-FPSE (Liu et al., 2019) proposed a pyramid discriminator, which jointly feeds the generated image and semantic labels into the discriminator, and then discriminates true and false on multiple resolutions.OASIS (Schonfeld et al., 2021) proposed a segmentation network discriminator supervised with semantic labels. LGGAN (Tang, Xu, et al., 2020) proposed to use a local class-specific and feature module to learn the appearance distribution of different objects globally and the generation of different object categories.SC-GAN (Y.Wang et al., 2021) learns to generate normalised parametric models by convolving semantic vectors. In addition to these GAN-based methods, there are also some special non-GAN methods, such as CRN (Q.Chen & Koltun, 2017) which utilises refined cascaded networks for semantic image synthesis.There is also the recent application of the more popular diffusion model SDM (W.Wang et al., 2022) to this work, which combines the SPADE normalisation module with a diffusion model backbone to control image generation. For conditional image synthesis, fully exploiting the information of the semantic layout is crucial to the quality of the generated image.However, most approaches only use the semantic layout for a single constraint control.As a comparison, DCSIS uses the semantic layout of the RGB format and the semantic layout of the one-hot encoding format to form two different constraint controls.This approach further improves the utilisation of the semantic layout information. Dual condition attention Image synthesis methods that only use adaptive normalisation to constrain the generated images have been unable to meet the needs of existing tasks.In this paper, the proposed dual condition attention module (DCA) is used to constrain the generated images.DCA contains two constraints: the semantic layouts of RGB encoding and the semantic layouts of one-hot encoding.Structurally, DCA contains SPADE and the proposed attention network.DCA and the architecture of the generator network is shown in Figure 2. 
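Since DCA builds on SPADE-style adaptive normalisation, a minimal sketch of such a block is given below for orientation, written in PyTorch. The layer sizes are illustrative and this is not the exact architecture of SPADE, DCSIS or any of the other cited methods; it only shows the core idea of the semantic layout predicting per-pixel modulation parameters.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SPADELikeNorm(nn.Module):
        # Minimal sketch: the semantic layout predicts per-pixel gamma/beta that
        # modulate a parameter-free normalisation of the feature map.
        def __init__(self, feature_channels, label_channels, hidden=128):
            super().__init__()
            self.norm = nn.BatchNorm2d(feature_channels, affine=False)
            self.shared = nn.Sequential(
                nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
            self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

        def forward(self, x, segmap):
            x = self.norm(x)
            segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
            h = self.shared(segmap)
            return x * (1 + self.gamma(h)) + self.beta(h)

    # A 256-channel feature map modulated by a 35-class one-hot layout.
    block = SPADELikeNorm(256, 35)
    out = block(torch.randn(2, 256, 32, 64), torch.randn(2, 35, 128, 256))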
Inspired by the pre-trained visual language models CLIP (Radford et al., 2021) and GLIDE (Nichol et al., 2021), the paper proposes the RGB semantic encoder to extract the information of the semantic layouts of RGB encoding. The RGB semantic encoder consists of a CNN encoder and a transformer module, as shown in Figure 3.The semantic latent space generated by the RGB semantic encoder is fed into the proposed attention network in DCA.Different from the ordinary attention modules, the proposed attention network is a conditional attention module named Seg_Attention.The module is formulated as follows: where Q, K and V come from feature maps, and K c and V c come from backbone network.Since SPADE only uses the parameters generated by the semantic layouts to control the input information, the information loss of the object category in the semantic layout is more, resulting in a lack of further guidance for the details of the generated images.Hence, the RGB semantic latent space is used to enhance the semantic constrained information in DCA.In this way, the image synthesis process is controlled by two different forms of constraints, which improves the utilisation of semantic layout information. Graph attention To strengthen the connections between categories in the generated images, two attention mechanisms are employed at the end of the generator.The spatial attention (Tang, Bai, et al., 2020) and the proposed graph attention are used in the generator.The purpose of the proposed graph attention is to learn the correlation between regions in the image.Learning the correlation between regions can effectively improve the quality of each region in the synthesised image. The proposed graph attention is shown in Figure 4.In Figure 4, FC means fully connected layer.As shown in Figure 4, the proposed graph attention is composed of the patch embed where α represents the features processed by embed module, ⊕ represents element-wise addition, T represents matrix transposition, and ⊗ represents element-wise multiplication.F l−1 represents the feature outputted by the (l − 1)th Seg_Attention block and F l represents the feature outputted by the lth Seg_Attention block. Graph convolution segmentation network For semantic image synthesis tasks, the role of the discriminator is to distinguish between the synthesised images and the ground-truth images.Generally, the classification-based discriminators are commonly used in image synthesis.However, classification-based discriminators ignore the relationship between each object in the image.Insufficient learning of the relationship between objects can easily lead to blurred object boundaries in the synthesised images.Therefore, this paper proposes a segmentation network based on graph convolution as a new discriminator.The new discriminator is called GSeg.The role of GSeg is to align category information in the synthesised images and the semantic layouts.Simultaneously, SESAME is also used as a discriminator.Our method includes two different discriminators: GSeg and SESAME.The architecture of GSeg is shown in Figure 5. GSeg uses an encoderdecoder structure. As shown in Figure 5, the encoder consists of a graph convolutional module in a vision GNN (Han et al., 2022).The decoder consists of the convolutional modules.Compared with ordinary convolution modules, graph convolution modules can better learn the relationship between objects. 
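Returning to the Seg_Attention module described above: its exact formulation is not reproduced in the extracted text. Purely as a rough illustration of attention in which the keys and values are augmented with a second, conditioning latent space (here standing in for the RGB semantic latent), one might write something like the following generic stand-in; it is not the paper's implementation.

    import torch
    import torch.nn as nn

    class ConditionalAttentionSketch(nn.Module):
        # Rough stand-in: queries come from the backbone feature map, while the
        # keys/values mix the feature map with a conditioning latent space.
        def __init__(self, dim, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, feats, cond):
            # feats: (B, N, dim) flattened feature-map tokens
            # cond:  (B, M, dim) tokens from the conditioning latent space
            kv = torch.cat([feats, cond], dim=1)
            out, _ = self.attn(query=feats, key=kv, value=kv)
            return out + feats   # residual connection

    attn = ConditionalAttentionSketch(dim=256)
    y = attn(torch.randn(2, 32 * 64, 256), torch.randn(2, 77, 256))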
Optimisation objective In our method, the adversarial loss L adv , the feature matching loss L feat , the perceptual loss L perc and the semantic alignment loss L seg are used to control the quality of the synthesised images. Adversarial loss: In GANs, adversarial loss is very effective for image fidelity, and has achieved good results in many images synthesis works.The adversarial loss can be defined as: where I R denotes the real image, z denotes the noise, S hot denotes the one-hot encoded semantic layout, and S RGB denotes the RGB semantic label, G represents the generator, D represent the discriminator, and E represents the RGB semantic encoder. Feature matching loss: According to (T.-C.Wang et al., 2018), in the discriminator, we output multiple sets of different feature maps.During the training process, the L 1 loss is used to constrain the feature maps of different scale spaces.Its calculation process is shown in formula (6): where N i denotes the number of features in D i (I R , S hot ). Perceptual loss: In this paper, a pre-trained VGG (Qi et al., 2018) is used to extract the features of the real images and the synthesised images.The perceptual loss in the multiscale feature space is shown in formula (7): where I F represents the generated image, represents the VGG model, and k represents the feature map of the k th layer in the VGG model.Semantic alignment loss: To constrain the semantic alignment between the synthesised images and the corresponding semantic layouts, our method employs the semantic alignment loss for control.The semantic alignment loss can be expressed as: where the ground-truth label image S has three dimensions, where the last two denote spatial locations, namely (j, k) ∈ H × W, is the balance weight of each class in the one-hot semantic graph, and Seg represents the graph convolutional segmentation network. The weighted summary of these loss functions is shown in formulas (10): where λ adv , λ feat , λ perc and λ seg are the corresponding weight parameters. Experiment Experiments are conducted to evaluate the performance of the proposed approach for image synthesis on various benchmarking datasets.We compare qualitative and quantitative results with some competing methods.These competing methods includes CRN (Q. Chen & Koltun, 2017), Pix2PixHD (T.-C.Wang et al., 2018), SPADE (Park et al., 2019), CC-FPSE (Liu et al., 2019), SCGAN (Y.Wang et al., 2021), SDM (W.Wang et al., 2022) and OASIS (Schonfeld et al., 2021).In addition, this article will use multiple sets of ablation experiments to verify the benefits of each module. Dataset and experiment details Dataset: This paper conducts experiments on three challenging datasets, namely Cityscapes (Cordts et al., 2016), ADE20K (Tero Karras et al., 2019) and CelebAMask-HQ (Brock et al., 2019) Experimental details: Our method uses the ADAM optimiser, the learning rate of the generator is set to 1 × 10 −4 , the learning rate of the discriminator is set to 4 × 10 −4 , and the RGB semantic encoder is trained together with the generator.λ adv , λ feat , λ perc and λ seg are set to 1, 10, 10 and 1 respectively.In the first half of the optimisation process, only the real images are fed into GSeg, and when the epoch reaches half of the maximum value, the real images and the synthesised images are fed into GSeg.We perform 150 epochs of training on the Cityscapes and ADE20K datasets, 150 epochs of training on the CelebAMask-HQ dataset.All experiments were performed on a Nvidia 2080ti GPU. 
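The loss formulas themselves are not reproduced in the text above. Standard forms consistent with the descriptions of L_adv, L_feat, L_perc and L_seg are written out below as an assumption about the missing equations; the published versions may differ in detail (e.g. hinge versus log adversarial loss), and Φ_k (the k-th layer of the pre-trained VGG) and M_k (its number of elements) are notation introduced here.

\begin{aligned}
\mathcal{L}_{adv} &= \mathbb{E}_{I_R}\big[\log D(I_R, S_{hot})\big] + \mathbb{E}_{z}\big[\log\big(1 - D(G(z, S_{hot}, E(S_{RGB})), S_{hot})\big)\big],\\
\mathcal{L}_{feat} &= \sum_{i}\frac{1}{N_i}\,\big\lVert D_i(I_R, S_{hot}) - D_i(I_F, S_{hot})\big\rVert_1,\\
\mathcal{L}_{perc} &= \sum_{k}\frac{1}{M_k}\,\big\lVert \Phi_k(I_R) - \Phi_k(I_F)\big\rVert_1,\\
\mathcal{L}_{seg} &= -\sum_{c}\alpha_c \sum_{(j,k)\in H\times W} S_{c,j,k}\,\log \mathrm{Seg}(I_F)_{c,j,k},\\
\mathcal{L}_{total} &= \lambda_{adv}\mathcal{L}_{adv} + \lambda_{feat}\mathcal{L}_{feat} + \lambda_{perc}\mathcal{L}_{perc} + \lambda_{seg}\mathcal{L}_{seg}.
\end{aligned}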
Evaluation metrics: This paper uses two metrics to evaluate network performance: Fréchet Inception Distance (FID) (Heusel et al., 2017) and mean Intersection-over-Union (mIoU). FID is used to measure the distance between the distribution of synthesised results and the distribution of the real images. mIoU is used to evaluate the semantic segmentation accuracy of the synthesised images. The higher the semantic segmentation scores (mIoU) are and the lower the FID is, the better the method should be. Following previous methods (Liu et al., 2019; Park et al., 2019), we use the semantic segmentation models DRN-D-105 (Yu et al., 2017), UpperUnet101 (Xiao et al., 2018) and Unet (Lee et al., 2020; Ronneberger et al., 2015) for semantic segmentation evaluation of Cityscapes, ADE20K and CelebAMask-HQ, respectively.

Comparison with previous methods

In this section, we compare our method with several state-of-the-art semantic image synthesis methods on three datasets.

Quantitative results: The quantitative comparison results of the synthesis models on the Cityscapes, ADE20K and CelebAMask-HQ datasets are shown in Table 1. From Table 1, the mIoU scores of DCSIS are superior to all baseline methods on all datasets. For mIoU, DCSIS achieves 68.8, 47.4 and 77.8 on the three datasets of Cityscapes, ADE20K and CelebAMask-HQ. DCSIS gives relative improvements of 1.7, 2.1 and 2.5 compared to SIMS on the Cityscapes dataset, OASIS on the ADE20K dataset and SIMS on the CelebAMask-HQ dataset, respectively. Compared with the discriminator based on the segmentation network used by OASIS, GSeg has better segmentation performance. This also makes the mIoU of DCSIS significantly higher than that of OASIS. For FID, DCSIS achieves 48.4, 31.8 and 17.1 on the three datasets of Cityscapes, ADE20K and CelebAMask-HQ. For the CelebAMask-HQ dataset, DCSIS outperforms all baseline methods on FID. The FID of DCSIS is slightly inferior to that of OASIS for the other two datasets, but our method has better segmentation performance.

The parameter size of DCSIS is 108M. Although DCSIS has 8M more parameters than SPADE, DCSIS gives relative improvements of (10.2, 6.9), (0.8, 10.2), (0.4, 0.1) and (6.0, 5.7) compared to SPADE on the three datasets for FID and mIoU. CRN has 21M fewer parameters than DCSIS; however, the FID and mIoU of CRN on each dataset are much worse than those of DCSIS. DCSIS has 14M more parameters than OASIS, but DCSIS gives relative improvements of 2.3, 2.1 and 2.7 compared to OASIS on the three datasets for mIoU. Overall, DCSIS achieves good image synthesis results with a moderate parameter size.

Qualitative results: The qualitative comparison of different methods on the three datasets of CelebAMask-HQ, Cityscapes and ADE20K is given in Figures 6-8, respectively. For the three datasets, the images synthesised by DCSIS not only have much better visual quality, but are also closer to the ground truth images in the overall colour distribution. Compared with all baseline methods, our method produces realistic images while respecting the spatial semantic layout, and can generate diverse scenes with high image fidelity. The reason is that DCSIS uses double constraints for finer control over the synthesised images, and GSeg also effectively improves the clarity of object boundaries. The experiments also show that it is difficult to effectively control the details of the objects using only a single adaptive normalisation method.
Ablation experiment In this section, a set of experiments are to investigate the effect of each component on the performance of DCSIS.We conduct ablation experiments on the Cityscapes dataset (5) SPADE + SESAME + DCA + Gseg + GAT (DCSIS).The quantitative and qualitative results are presented in Table 2 and Figure 9, respectively.As shown in Table 2, it can be seen that DCA, Gseg and GAT have a significant impact on the performance of DCSIS.Among the all architectures mentioned above, DCSIS obtains the best results.It can be seen that FID and mIoU can be improved when the segmentation network as the discriminator.Compare with Unet, GSeg brings the relative improvements of (0.3, 1.3) for the Cityscapes dataset and (1.3, 1.9) for the CelebAMask-HQ dataset on FID and mIoU.This shows that the segmentation performance of GSeg is better than that of Unet. When DCA is introduced, DCA brings the relative improvements of (2.0, 2.3) for the Cityscapes dataset and (0.4, 2.1) for the CelebAMask-HQ dataset on FID and mIoU.Compared with only using a single constraint, using two different forms of constraints to control the synthesis process is beneficial to improve the quality of the synthesised images. GAT brings further improvements in FID and mIoU.The relative improvement of FID and mIoU are (0.3, 0.8) for the Cityscapes dataset and (0.4, 0.4) for the CelebAMask-HQ dataset, respectively. Overall, DCA has a greater impact on the quality of the synthesised images than GSeg and GAT.These components effectively improve the performance of DCSIS. From Figure 9, it can be seen that the visual quality of the images synthesised by DCSIS is better than other methods.For the CelebAMask-HQ dataset, human skin tones in the synthesised images obtained by DCSIS appear more realistic than other methods.For the Cityscapes dataset, DCA makes the boundary information between categories clearer.Overall, our method makes texture details in the images appear very natural and more realistic.For all datasets, the results suggest that DCA, GSeg and GAT can effectively improve the visual quality of the synthesised images.It proves that components are useful for the final results in DCSIS and can further improve the performance in DCSIS. Conclusions This paper presents a novel image synthesis approach, namely DCSIS, in which DCA, GSeg and GAT are used to enhance the information of the semantic layouts and improve the results of image synthesis.In DCSIS, DCA uses the double constraints to control image synthesis.Except for the adaptive normalisation as the first constraint, we also propose a semantic encoder as the second constraint.The proposed semantic encoder uses the semantic layouts of RGB encoding as the input.DCA can effectively improve the quality of the synthesised images.The generator learns the relationship between different categories using GAT to further improve the quality of the synthesised images.The proposed GSeg is used as the discriminator of DCSIS.Gseg is used to align semantic information and establish relationships between objects. Experiments are conducted on three benchmark datasets to evaluate the performance of our presented approach.The experimental results indicate that DCSIS can generate the higher quality photorealistic images and obtain the better quantitative results.Comparisons with some state-of-the-art baseline methods, it demonstrates that the new method is more effective and efficient in terms of the qualitative and quantitative results in most cases. 
Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: The

Figure 2: Architecture of the generator network.
Figure 3: Architecture of the RGB semantic encoder.

The Cityscapes dataset is an image set including a variety of urban street scenes, with 35 semantic classes, of which 3000 images are used for training and 500 images are used for verification; the image resolution is set to 256 × 128. The CelebAMask-HQ dataset is a high-resolution face dataset with fine-grained mask annotations, containing 19 semantic classes; the image resolution is set to 128 × 128. The ADE20K dataset is a huge dataset with dense annotations, containing 150 semantic classes, 20,210 images for training and 2000 images for validation; the image resolution is set to 128 × 128.

Table 1: Quantitative comparison with competing methods on different datasets.
5,196.8
2023-10-04T00:00:00.000
[ "Computer Science" ]
Resonant entanglement of photon beams by a magnetic field In spite of the fact that photons do not interact with an external magnetic field, the latter field may indirectly affect photons in the presence of a charged environment. This opens up an interesting possibility to continuously control the entanglement of photon beams without using any crystalline devices. We study this possibility in the framework of an adequate QED model. In an approximation it was discovered that such entanglement has a resonant nature, namely, a peak behavior at certain magnetic field strengths, depending on characteristics of photon beams direction of the magnetic field and parameters of the charged medium. Numerical calculations illustrating the above-mentioned resonant behavior of the entanglement measure and some concluding remarks are presented. Introduction Entanglement phenomenon is associated with a quantum non-separability of parts of a composite system.Entangled states appear in studying principal questions of quantum theory, they are considered as key elements in quantum information theory in quantum computations and quantum cryptography technologies; see e.g.Refs.[1,2].In laboratory conditions the entanglement of photon beams is usually created and studied using some kind of crystalline devices.In spite of the fact that photons do not interact with an external magnetic field, the latter field may indirectly affect photons in the presence of a charged environment.This opens up an interesting possibility to continuously control the entanglement of photon beams.Studying this possibility in the framework of an adequate QED model, we have discovered that such entanglement has a resonant nature, namely, a peak behavior at certain magnetic field strengths depending on characteristics of photon beams and parameters of the charged medium.This is the study presented in this article.The article is organized as follows: In Sec. 2, we outline details of the above-mentioned QED model.This model describes a photon beam that consists of photons with two different frequencies, moving in the same direction and interacting with a quantized charged scalar field (KG field) placed in a constant magnetic field.Particles of the KG field we call electrons in what follows and the totality of the electrons is called the electron medium.Photons with each frequency may have two possible linear polarizations.In the beginning, we consider the electron subsystem consisting of only one charged particle.Both quantized fields (electromagnetic and the KG one) are placed in a box of the volume V = L 3 and periodic conditions are supposed.We believe that in this case the model already describes the photons interacting with many identical electrons, and the quantity ρ = V −1 may be interpreted as the density of the electron medium.In this article, we essentially correct exact solutions used in our previous consideration of similar models; see Ref. [3] and references there.In a certain approximation, solutions of the model correspond to two independent subsystems, one of which is a quasi-electron medium and another one is a set of some quasi-photons.In the new solutions the orders of smallness of contributions to quasi-photon states used in calculating the entanglement measures are accurately determined and an adequate expression for the spectrum of quasi-electrons derived.Namely, the latter made it possible to detect the resonant behavior of the entanglement measure at some resonant values of the external magnetic field.Finally, in Sec. 
4, numerical calculations illustrating the above-mentioned resonant behavior of the entanglement measure and some concluding remarks are presented.Technical details related to Hamiltonian diagonalization are placed in the Appendix 6. QED model and its solutions Consider photons with two different momenta k s = κ s n, s = 1, 2 (frequencies), moving in the same direction n = (0, 0, 1) and interacting with quantized charged scalar particles-electrons placed in a constant magnetic field B = Bn, B > 0, potentials of which in the Landau gauge are: A ext (r) = −Bx 2 , 0, 0 .In what follows, we use the system of units where ℏ = c = 1. Photon vectors are denoted as |Ψ⟩ γ , |Ψ⟩ γ ∈ H γ .The Hamiltonian of free photon beam reads: Electrons are described by a scalar field φ (r) interacting with the external constant magnetic field A µ ext (r).The magnetic field does not violate the vacuum stability.After the canonical quantization, the scalar field and its canonical momentum π(r) become operators φ(r) and π(r).The corresponding Heisenberg operators φ(x) and π(x), x = (x µ ) = (t, r), satisfy the equal-time nonzero commutation relations [ φ(x), π(x ′ )] t=t ′ = iδ(r − r ′ ).These operators act in the electron Fock space H e constructed by a set of creation and annihilation operators of the scalar particles and by a corresponding vacuum vector |0⟩ e .Electron vectors are denoted as |Ψ⟩ e , |Ψ⟩ e ⊂ H e . The Fock space H of the complete system is a tensor product of the photon Fock space and the electron Fock space, H = H γ ⊗ H e .Vectors from the Fock space H are denoted by The Hamiltonian of the complete system (composed of the photon and the electron subsystems) has the following form: Consider the amplitude-vector (AV) φ(x) = e ⟨0| φ(r) |Ψ (t)⟩, which is on the one side a function on x (the projection of a vector |Ψ (t)⟩ onto a one-electron state), on the other side AV is a vector in the photon Fock space.In the similar manner, one could introduce many-electron or positron amplitudes and interpreted them as AVs of photons interacting with many charged particles.However, we neglect the existence of such amplitudes in the accepted further approximation, they are related to processes of virtual pair creation.In such an approximation, one can demonstrate that AV φ(x) satisfies the following equation: It is convenient to pass from the AV φ(x) to a AV Φ(x) = U γ (t) φ(x), U γ (t) = exp(i Ĥγ t), which satisfies a KG like equation (KGE): where ε = αρ, α = e 2 /ℏc = 1/137, and ρ is the density of the electron media.The quantity ε characterizes the strength of the interaction between the charged particles and the photon beam.We suppose that both ε and α are small, this supposition defines the above mentioned approximation. One can see that in the model under consideration, we have three commuting integrals of motion Ĝµ = i∂ ν + n ν Ĥγ , µ = 0,1,3; n µ = (1, n); Ĝ0 can be interpreted as the operator of the total energy and Ĝµ , µ = 1, 3 as momenta operators in the directions x 1 and z. Recall that Î is an integral of motion if its mean value with respect to any φ satisfying the KGE does not depend on time.If Î is an integral of motion, then Î, Pµ P µ = 0. If Î is an integral of motion, then, apart from satisfying the KGE, the wave function could be choose as an eigenfunction of Î.Then we look for AV Φ(x) that are also eigenvectors for the integrals of motion Ĝµ , where g 0 is the total energy and g 1,3 are momenta in x 1 and z directions.From Eq. 
( 6) it follows Consequently, the operator Ĥχ (u) commutes with the operator Pµ P µ on solutions Φ(x), and therefore is an integral of motion. A solution to Eq. ( 6) has form where the function χ(x 2 ) must satisfy the following equation: In order to solve the latter equation, we pass to a description of the electron motion in the magnetic field in an adequate Fock space, see Ref. [4].We introduce new creation â † 0 and annihilation â0 Bose operators, These operators commute with all the photon operators a † s,λ and âs,λ , s = 1, 2, λ = 1, 2. We denote the totality of the free photon and the introduced electron creation and annihilation operators as a † s,λ and âs,λ , s = 0, 1, 2, where â † 0,λ = â † 0 δ λ,1 and â0,λ = â0 δ λ,1 .The corresponding vacuum vector |0⟩ reads: The operator Ĥχ (0) can be represented as a quadratic form in terms this totality of the creation and annihilation operators, As it is demonstrated in Appendix 6, there exists a linear canonical transformation of the operators a † s,λ and âs,λ , s = 0, 1, 2, given by Eqs.(49) which diagonalizes the Hamiltonian Ĥχ (0), where the quantities τ k,λ satisfy the conditions It is possible to demonstrate that after an unitary transformation, the integral of motion Ĥχ (u) can be separated in two parts Ĥq−ph (u) and Ĥe (u): Each of these parts are also integrals of motion due to relations Eqs. ( 7), ( 9) and (15).The operator Ĥe corresponds to the quasi-electron subsystem, while the operator Ĥq−ph to the subsystem of quasi-photons. It is useful to consider operators Pµ , witch are also integrals of motion.If we assume that at ϵ → 0 the photons do not interact with the electronic medium, then in such a limit the operators Pµ are the energy-momentum operators of a the free electrons i∂ µ , and the operator n µ Ĥq−ph (u) is the energy-momentum operator of the free photons Ĥγ .It is therefore appropriate to refer to Pµ as the quasi-electron energy-momentum, and to n µ Ĥq−ph (u) as the energy-momentum of the quasi-photons. Then we can choose AV Φ(x) to be eigenvectors for the integrals of motion Ĥe (u), Ĥq−ph (u) and Pµ , Further, we interpret the eigenvalues p µ as momenta of quasi-electrons.It follows from Eq. (17) that Φ(x) is an eigenvector for the operator Ĥχ (u), Substituting (8) into Eq.( 18), we obtain an equation for the function χ(x 2 ), which has the following solutions: Equations ( 17),( 9) and ( 19) are consistent if which implies: and Taking into account Eq. ( 20) from ( 22) we obtain the spectrum of quasi-electrons in the constant magnetic field: Since τ 0 (ϵ = 0) = ω, the well-known spectrum of a relativistic spinless particle in the constant magnetic field, follows from Eq. ( 23), For small ε the roots τ k,λ are: In this approximation, the spectrum of the quasi-electrons in the constant magnetic field has form: Using Eqs. ( 25) and ( 26) we obtain for small ε expressions for matrices (59) defining the canonical transformation (49): Substituting χ(x 2 ) given by Eq. (20) in equation ( 8) for Φ(x), for small ε we obtain: 3 Photon entanglement problem General We recall that a qubit is a two-level quantum-mechanical system with state vectors (two The so-called computational basis |Θ⟩ s , s = 1, 2, 3, 4, reads: A pure state |Ψ⟩ AB ∈ H AB is called separable iff it can be represented as: anti-parallel polarizations, λ 1 and λ With account taken of Eqs. ( 49) and ( 28) we can see that the last equation (34) implies for small ε: Then, it follows from Eq. 
( 35) that Taking into account expansion (49) for state (34), we obtain: We believe that the corresponding free photon nonentangled beam after passing through the macro region, which consists of the electron media in the presence of the magnetic field, is deformed to this form, and there exists an analyzer detecting a two photon state for measuring the entanglement of the initial free photons.The two photon state | Φq−ph (λ 1 , λ 2 )⟩ is represented by the first term in Eq. (36), where D is a normalization factor.It follows from Eq. (28 ).Then at ∆κ = |κ 2 − κ 1 | ≫ 1, the two photon state (37) can be reduced to the following form: In terms of the computational basis, the state (38) can be rewritten as follows: Let us calculate the entanglement measure where µ a , a = 1, 2, are eigenvalues of the operator ρ(1) , and In fact, we have to calculate the quantity y to obtain the entanglement measure M (λ 1 , λ 2 ). At small ε they read: Further, it is convenient for us to choose a reference frame relative to which the momentum p 3 of electrons in the charged medium is zero, p 3 = 0. Then the quantity ω 0 is related to the magnetic field B as: We note that the quantity y given by Eq. ( 41) is singular, if The corresponding to such ω 0 strengths of the magnetic field B, will be called resonant ones. There exist two such resonant values, B = B 1 at ω 0 = κ 1 for λ 1 = 1 and B = B 2 at ω 0 = κ 2 for λ 1 = 2: When B = B 1 , the expansions take place.Similarly, when B = B 2 , the expansions hold.They have a different character than the one given by Eqs.(25) for similar roots.We suppose that at B = B 1 or B = B 2 analytical properties of roots (25) change as functions of the parameter ε. From ( 45) and ( 46), we find that when a resonant value B is reached, the entanglement manifests itself already in a lower order in ε compared to expression (41).For B = B 1 , we have: whereas for B = B 2 , we obtain: It can be seen that if the photon polarizations are the same, λ 1 = λ 2 then the entanglement measure is equal to zero, M (1, 1) = M (2, 2) = 0. 4 Illustrative numerical calculations and some final remarks In our numerical calculations, we consider all the electrons in the charge medium located on zero Landau level N 0 = 0 and the beam of two photons with polarization λ 1 = 2 and λ 2 = 1. It follows from Eq. ( 41) that the resonant entanglement is related to the frequency of that photon whose polarization vector is directed along the Ox axis at B > 0. If you change the direction of the magnetic field, B < 0, then the resonant entanglement will be related to the frequency of that photon whose polarization vector is directed along the Oy axis.Therefore in the case under consideration we have the resonant value of the magnetic field is B = B 2 , see Eq. (44). 
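For reference, the standard definition behind the entanglement measure used here (the von Neumann entropy of the reduced density operator of the first photon, as stated in Sec. 3) reads

\rho^{(1)} = \mathrm{Tr}_{2}\,\big|\Phi_{q\text{-}ph}(\lambda_1,\lambda_2)\big\rangle\big\langle\Phi_{q\text{-}ph}(\lambda_1,\lambda_2)\big| ,\qquad
M(\lambda_1,\lambda_2) = -\,\mathrm{Tr}\,\rho^{(1)}\log_2\rho^{(1)} = -\sum_{a=1,2}\mu_a\log_2\mu_a ,\qquad \mu_1+\mu_2=1 .

The auxiliary quantity y introduced in the text parametrises the eigenvalues µ_a of ρ^(1): the measure vanishes when one eigenvalue equals 1 (a separable state) and reaches its maximum of 1 when µ_1 = µ_2 = 1/2.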
On the first plot the entanglement measure M(2, 1) is calculated as a function of the magnetic field B for the fixed first photon frequency ν_1 = 10^3 nm and different second photon frequencies ν_2 = 2πκ_2^{-1}. The electron density is chosen to be ρ = 10^14 el/m^3. We see that the entanglement measure increases with increasing magnetic field strength for B < B_2. When the magnetic field reaches its resonant value B = B_2, the entanglement measure experiences a jump. A further increase in the magnetic field, B > B_2, leads to a smooth decrease in the entanglement. We also see that the entanglement measure decreases as the difference in photon frequencies increases. In the work [3] the entanglement of two photons in the absence of a magnetic field was considered, and it was shown that the measure of the entanglement is the same for λ_1 = 1, λ_2 = 2 and λ_1 = 2, λ_2 = 1. Here it is demonstrated that the presence of the magnetic field removes the degeneracy in photon polarizations, and the entanglement measure depends on the direction of photon polarizations in the beam. Increasing the magnetic field strength increases entanglement, as long as the magnetic field value is below a certain resonant value, which is determined by the frequency of the photon having polarization λ = 1, see Eq. (44). The resonant value of B increases with decreasing photon frequencies. But these values are not large; for example, for photons with frequencies ν_2 corresponding to the ultraviolet range 380 nm - 10 nm, the resonant values range from 6 A/m to 225 A/m.

On the second plot the entanglement measure M(2, 1) is calculated as a function of the electron medium density for the fixed first photon frequency ν_1 = 10^3 nm and different second photon frequencies ν_2. The magnetic field B is chosen to be B = 2 A/m, which is less than the corresponding resonant values. Note that the measure of entanglement increases with increasing density of the electronic medium and with increasing pre-resonance values of the magnetic field. In our calculations the entanglement measure does not exceed 0.1. However, such a magnitude of the entanglement is usual in laboratory experiments; for example, similar magnitudes appear when an entangled biphoton Fock state of photons is scattered inside an optical cavity; see Refs. [7, 8]. We stress that the performed numerical calculations are intended to illustrate the existence of a possible resonant entanglement within the framework of the chosen model and the approximations made. On the other hand, if our consideration motivates possible experiments to detect the effect of resonant entanglement, then there may be an incentive to refine the corresponding model, in particular the analytical formulas (41), (47) and (48), under weaker restrictions on the density of the electron medium and the frequencies of the photons.

Acknowledgments. The work is supported by Russian Science Foundation, grant No. 19-12-00042. D.M.G. thanks CNPq for permanent support.

In contrast to Eqs. (52), it is linear in the matrices u and v, which allows one to analyse it relatively easily. One can see that system (54) is joint if positive numbers τ = (τ_{s,λ}) for each possible set s = 0, 1, 2 and λ = 1, 2 satisfy the equations: We now suppose that roots τ of Eq. (55) are, at the same time, solutions of the equation with the initial conditions (59). Substituting it into Eqs. (52) and taking into account Eq. (57), we derive the following expressions for the quantities q_{k,σ}.

Σ_a |a⟩⟨a| = I, where I = diag(1, 1). E.g., two levels can be taken as spin up and spin down of an electron, or two polarizations of a single photon. A system composed of two qubit subsystems A and B with the Hilbert space H_AB = H_A ⊗ H_B, where H_A/B = C^2, is a four-level system. If |a⟩_A and |b⟩_B, a, b = 0, 1, are orthonormal bases in H_A and H_B respectively, then |ab⟩ = |a⟩ ⊗ |b⟩ is a complete and orthonormalized basis in H_AB, |ab⟩ = |a⟩ ⊗ |b⟩ = (a_1 b_1, a_1 b_2, a_2 b_1, a_2 b_2)^T. We define the entanglement measure of the state |Φ_q-ph(λ_1, λ_2)⟩ as the von Neumann entropy of the reduced density operator ρ^(1) of the subsystem of the first photon.

Figure 1: The entanglement measure as a function of the magnetic field.
Figure 2: The entanglement measure M(2, 1) as a function of the electron medium density.
4,449.2
2023-11-02T00:00:00.000
[ "Physics" ]
Modal and Vibration Analysis of Filter System in Petrochemical Plant Filter systems are widely used in petrochemical plants for removing solid impurities from hydrocarbon oils. The backwash is the cleaning process used to remove the impurities on the sieves of the filters without a need to interrupt the operation of the entire system. This paper presents a case study based on the actual project of a filter system in a petrochemical plant, to demonstrate the significant effect of vibration on the structural integrity of piping. The induced vibration had led to the structural fatigue failure of the pipes connecting the filter system. A preliminary assessment suggested that the vibrations are caused by the operation of backwashing of the filter system. A process for solving the vibration problem based on the modal analysis of the filter system using the commercial finite element software for simulation is therefore proposed. The computed natural frequencies of the system and the vibration datameasured on site are assessed based on the resonance effect of the complete system including the piping connected to the filters. Several approaches are proposed to adjust the natural frequencies of the system in such a way that an optimal and a reasonable solution for solving the vibration problem is obtained. Introduction A filter system plays an important role in petrochemical plant for removing solid particles and impurities from hydrocarbon oils.In the system, the pipes of various geometrical properties are connected to pumps and filters to transport oil and oil products for treatment [1,2].The temperature of the oil and the oil products in the system can be as high as 200 ∘ C. As the flow and collision of oils in the pipes cause severe vibrations, this led to fatigue and fracture of structural pipe members and connections after being subject to a number of load cycles [3].As a result, there is a need to examine carefully the most appropriate approach to reducing the unexpected vibrations to prevent the oil leakages and structural member failure in the system. One of the earliest studies of vibration problems in pipes was conducted by Ashley and Haviland [4].They used a beam model to establish differential equations for analyzing the pipe motion.On the other hand, Niordson [5] built a shell model to derive differential equations for solving pipe vibration, where the results were found to be comparable to that of Ashley and Haviland [4].A more complicated study of pipe vibration due to the effect of fluid-structure interaction was carried out based on travelling-wave method by Païdoussis and Denise [6].Lavooij and Tusseling [7] however derived differential equations using the method of characteristics for solving pipe vibration.For the past 2 decades, the research of pipe vibration has been directed towards the nonlinear analysis, Gorman et al. [8] studied the nonlinear vibration of pipeline system based on a series of simple models comprising pipes of various geometrical properties. 
In a petrochemical plant, the filter system is used to filter solid particles and impurities from hydrocarbon oils. The solid particles will clog the sieves after some period of operation and thus affect the flow of oils in the pipes. However, it is impossible to replace the sieves or to remove the filter for cleaning while the filter system is in operation. Therefore, a backwashing process is used to clean the sieves of the filters. With particles clogging the sieves and a reduced speed of oil flow, a pressure difference between the inlet and outlet of the filter system will be generated. The backwashing process will then be activated due to the difference of these working pressures. The backwashing process uses the product oil as a backwashing liquid to flush the filter nets at high pressure so that the solid particles on the sieves can be removed. Once the backwashing liquid hits the filter nets and pipes after it passes through the filters, it causes the filter and the piping system to vibrate. In general, the magnitude of vibration is insignificant and cannot be observed. However, with the increasing production of oil products to meet demand, the petrochemical company redesigned the system with one additional filter system (Group F) installed next to the existing filters (Groups A to E), as shown in Figure 1. Although the magnitude of vibration during the backwashing process was considered small, oil leakages at pipes and connections nevertheless appeared after a few years of operation.

Figure 1: The exciting and addition filter system (exciting system: Groups A-E; new addition system: Group F).

An internal stress analysis of the pipes was thereafter performed to rectify the unexpected vibration problem that resulted in fatigue cracking of the pipes. The internal stress analysis is a conventional approach commonly adopted for analysing pipe abnormality under various load conditions based on structural mechanics [9, 10]. In recent years, Wu et al. [11] carried out a stress analysis model of tunnel pipes under various load combinations. Huang et al. [12] conducted a stress analysis model of elastic laying pipelines in mountainous areas. There is a need to note that the exciting forces in these cases could not easily be determined and measured. On the other hand, the internal forces of the pipelines were typically too small to result in ultimate structural failure. Most of the structural failures of these pipelines, occurring within the elastic limit of the materials, were actually due to fatigue under load cycles applied over a period of time. Thus, the stress analysis in this case was not suitable to assess the failure behavior of the filter system. The objective of the current study is to determine a suitable solution, with adjustment of the period of vibration, to reduce the amplitude of vibration of the system.
In the current study, the filter and the piping system were connected and modelled as a complete system.A modal analysis is conducted to determine the natural frequencies of the system.The exciting forces of the vibration include the pulses of the oil in the pipes and the interaction effect among the oil, the pipes, and the filter system.The response vibration of the complete system due to backwash was measured and presented in a form of time spectrum.The time spectrum was then transformed into the frequency domain by means of fast Fourier transformation (FFT).The FFT spectrum allowed the response frequencies that contributed to vibration to be determined.The peaks of the spectrums presented the frequencies to the natural frequencies of the complete system, where the unexpected vibration is due to the effect of resonance.With the comparison of the FFT spectrums of response vibration to the natural frequencies of the complete system, the region of resonance could be identified [13].The natural frequencies that cause the resonance and the locations of resonance vibrations could therefore be obtained.As a result, the filter and the piping system could be designed with a specific natural frequency to reduce resonance vibration.Apart from the design of the system to the required natural frequency, the construction time to integrate the new and the existing filter and piping system based on the actual site conditions could also be optimized.In this paper, a method for solving the vibration problem of the complete system with a certain amount of exciting force is summarized and presented. Modal Analysis 2.1.Modelling of Filter and the Piping System.The filter and the piping system are modelled using commercial software SolidWorks and ANSYS, as shown in Figure 2, with the modal analysis conducted based on ANSYS. There are 6 groups of filter and piping system.Each group comprises 8 filter elements.The detail of each filter system is shown in Figure 3.The oil flows through the filter from the bottom of the system from Pipe 1 (inlet) to Pipe 2 (outlet).After the sieves are clogged with solid particles, the backwash operation was activated at a pressure difference between the top and bottom of the filters at 10 N/cm 2 The backwashing oil enters the system from Pipe 3 to Pipe 4. Pipes 5 and 6 are gas pipes for gas backwashing after the filter nets could not be cleaned using the backwash liquid.These two pipes were usually empty.The 5 pipes were fixed onto a large bracket, which was connected to other brackets on the floor.In the modelling, the brackets were considered fixed on the ground.The filters were made of stainless steel.Each single filter comprised 28 filter elements with diameter 25 mm and length 814 mm.The filter elements were threaded into a common flange.The pipes are made of mild steels with geometrical properties as shown in Table 1.Apart from the filters and the pipes, the brackets representing the locations of the constraints would have a significant effect on the natural frequencies of the complete system.These brackets are made of mild steels with cross-sectional properties presented in Figure 4.The complete filter and piping system and their support brackets can be seen in Figure 5. Modal Analysis. 
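As a sketch of the measurement post-processing step just described (a measured displacement time record transformed by FFT so that response peaks can be compared with the computed natural frequencies), one might write the following; the synthetic record and sampling rate below are purely illustrative, and the crude peak picking simply sorts the spectrum bins.

    import numpy as np

    def dominant_frequencies(signal, sample_rate_hz, n_peaks=5):
        # One-sided amplitude spectrum of a measured displacement record.
        spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
        order = np.argsort(spectrum)[::-1]
        return [(freqs[i], spectrum[i]) for i in order[:n_peaks]]

    # Illustrative record: two vibration components buried in noise, sampled at 200 Hz.
    t = np.arange(0, 30.0, 1.0 / 200.0)
    x = 0.4 * np.sin(2 * np.pi * 5.5 * t) + 0.2 * np.sin(2 * np.pi * 9.5 * t)
    x += 0.05 * np.random.randn(t.size)
    print(dominant_frequencies(x, 200.0))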
The modal analysis was carried out to determine the natural frequencies of the complete system. In the analysis, the entire system was discretized into nodes, and each node had six (6) degrees of freedom: 3 translations and 3 rotations in the x-, y-, and z-axis directions [14-16]. The complete filter and piping system therefore has one natural frequency for each degree of freedom. The equations of motion can be expressed as follows [14-17]:

[M]{ü(t)} + [C]{u̇(t)} + [K]{u(t)} = {F(t)}   (1)

where [M], [C], and [K] are the mass, damping, and stiffness matrices, {u(t)} is the vector of nodal displacements, and {F(t)} is the exciting force vector. To obtain the natural frequencies of the system in the modal analysis, the exciting force vector was set to 0. The matrix representation obtained after combining (1) for the whole structure can be rewritten as follows [1,14-16]:

[M_total]{ü(t)} + [C_total]{u̇(t)} + [K_total]{u(t)} = {0}   (2)

where [M_total], [C_total], and [K_total] are the assembled system matrices, with one row and one column per degree of freedom.

On the basis of the equations of motion, the natural frequencies of the system were obtained from the FEA of the model using the commercial software ANSYS. The natural frequencies of the system below 15 Hz are listed in Table 2, with the modal orders arranged in accordance with the natural frequencies of the system. For free vibration at one of these natural frequencies, a definite relationship exists between the amplitudes of all the nodes; these relationships are known as the mode shapes. If the frequency of the exciting force is close to or at one of these natural frequencies, resonance occurs and the amplitudes of the nodes are at their maximum [18]. However, when damping is considered in the real structure, the resonance frequencies do not exactly match the natural frequencies, although the difference between the two is actually quite small [14-16], and the amplitudes of the nodes do not grow to infinity. The relationship between the amplitudes of the nodes remains close to the mode shape at that natural frequency. The mode shapes of the first 5 modal orders are shown in Figures 6-10.

Vibration Measurement Results

After the system was redesigned with one additional filter and piping system (the new system), unexpected vibrations occurred at Group F during the backwashing process. The main exciting forces of the vibrations were the pulses of the oil and the interaction effect between the oil and the pipes. The locations of the vibration measurements on the Group F system are illustrated in Figure 11. The measured time spectrums were transformed into the frequency domain by FFT. The FFT spectrums show which response frequencies carry the most power; the spectrums for Locations 1 to 3 are shown in Figures 12-14, where the peaks indicate the frequencies at which the response is strongest.
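To make these two steps concrete, the sketch below first solves the undamped free-vibration eigenproblem [K]{φ} = ω²[M]{φ} for the natural frequencies on a tiny lumped stand-in for the assembled ANSYS model, then FFTs a simulated displacement record and flags response peaks that fall near a computed natural frequency. The matrices, sampling rate, and signal are illustrative assumptions, not the measured data or the authors' model.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import find_peaks

# --- Modal analysis on a toy 3-DOF stand-in for [M_total] and [K_total] ---
M = np.diag([120.0, 95.0, 60.0])              # kg, lumped masses (illustrative)
K = np.array([[ 4.0e5, -2.0e5,  0.0],
              [-2.0e5,  5.0e5, -3.0e5],
              [ 0.0,   -3.0e5,  3.0e5]])      # N/m, assembled stiffness (illustrative)

# Free vibration ({F} = 0, damping neglected): [K]{phi} = w^2 [M]{phi}
w2, modes = eigh(K, M)                        # generalized symmetric eigenproblem
natural_hz = np.sqrt(w2) / (2.0 * np.pi)      # natural frequencies in Hz
print("natural frequencies (Hz):", np.round(natural_hz, 2))
print("first mode shape:", np.round(modes[:, 0] / np.abs(modes[:, 0]).max(), 3))

# --- FFT of a measured (here: simulated) displacement record ---
fs = 200.0                                    # assumed sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
x = 1e-3 * np.sin(2 * np.pi * natural_hz[0] * t) + 1e-4 * np.random.randn(t.size)

amp = np.abs(np.fft.rfft(x)) * 2.0 / x.size   # single-sided amplitude spectrum
freq = np.fft.rfftfreq(x.size, d=1.0 / fs)
peaks, _ = find_peaks(amp, height=0.2 * amp.max())

# Flag response peaks lying close to a computed natural frequency (resonance region)
for p in peaks:
    near = natural_hz[np.isclose(natural_hz, freq[p], atol=0.3)]
    if near.size:
        print(f"peak at {freq[p]:.2f} Hz is close to mode at {near[0]:.2f} Hz -> resonance suspected")
```

In the full model, ANSYS assembles and solves the same eigenproblem for every degree of freedom of the meshed filter and piping structure; the comparison with the measured spectrums follows the same pattern as the last loop above.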
From the displacement spectrums, it can be seen that the peaks are located at frequencies between 5 Hz and 10 Hz. Comparison with the results of the modal analysis shows that several natural frequencies are also smaller than 10 Hz, from which it can be deduced that the system was operating in the region of resonance. Moreover, it can be seen from Figure 15 that the first 5 modes of vibration match well, with respective natural frequencies between 5 Hz and 10 Hz of 5.4101, 5.503, 5.6867, 7.7515, and 9.5334 Hz. The data therefore suggest that resonance of the system was the main cause of the unexpected vibration. However, in order to maintain the efficiency of the backwashing process, the exciting force could not be changed. Hence, the solution to the vibration problem in this case was to propose a suitable modification based on the natural frequencies of the system.

Solutions

From the modal analysis and the vibration measurements, the natural frequencies of the system should be increased above 10 Hz to avoid the effect of resonance. From observation of the system performance on site, it can be seen that the vibration actually occurred on long free-span pipes without any supports, which were located at the critical vibration locations of the system. Thus, adding adequate constraints to stiffen these pipes was the first approach to solving the vibration problem.

Solution 1. The first approach is to add adequate constraints by connecting the pipes together. This was carried out by connecting the long pipes (Location 4) using 50 mm diameter pipes. After the pipes are connected, the natural frequencies of the system increase slightly, as shown in Table 3. However, the vibration of the modal orders with natural frequencies below 10 Hz no longer occurred at Location 4. As shown in Figures 16 and 17, the first 2 modes of vibration show that the vibration locations of the long pipes in the first few modal orders had been shifted. The second approach is to connect the long pipes above the passage with 2 additional supports at both sides, as shown in Figure 18. With this arrangement, the natural frequencies of the system increase, as shown in Table 4, and the vibration locations of the first several modal orders are located along the long pipes at Location 5. The 3rd approach is to add additional constraints at Location 1 by connecting the long pipes with 50 mm diameter pipes, as shown in Figure 19. With these additional constraints, all the natural frequencies of the system are higher than 10 Hz, which is outside the range of resonance (Table 5).

Solution 2. The process of Solution 2 is identical to that of Solution 1, and the locations of the additional constraints are similar to those in Solution 1, as shown in Figure 20. The difference between the 2 solutions is that the constraints of the long pipes in Solution 2 are supported on the floor and on other parts of the entire structural system. The natural frequencies of Solution 2 are higher than 10 Hz (Table 6). The purpose of proposing Solution 2 is to provide an alternative that takes into consideration the construction process of the existing and additional filter and piping systems.

Solution 3.
For Solution 3, the pipes were connected using spring bumpers to limit the increase in stress of the structural system while reducing vibration; the bumpers also absorb part of the vibration energy. However, the spring bumpers did not increase the natural frequencies of the system effectively. The natural frequencies of the new structure with the additional filter and piping system are listed in Table 7. Furthermore, an elastic restraint such as a spring bumper is only effective in one direction, which requires a more complex construction and a longer construction time. Based on the simulation results, Solution 1 or 2 should be chosen as the optimal solution, with the final choice depending on the kind of constraints and the requirements on site.

Structural Modification. The structural modification combined Solutions 1 and 2 based on the site conditions. The long pipes at Location 4 in Solution 1 were connected with each other, and the pipes above were supported and fixed onto a large bracket. As the pipes at Location 5 were very close to the foundation of the system, they were fixed to the short brackets on the floor (Figure 21). A vibration measurement at Location 2 was carried out to inspect the effect of the modification of the system. The FFT spectrums are plotted in Figure 22. The peaks of the spectrums no longer exist and the displacement of the vibrations was reduced. Thus, by changing the natural frequencies, the system could be brought outside the region of resonance.

Conclusion

The current study provides three (3) solutions to reduce the unexpected vibration of a backwashing system in a petrochemical plant. The comparison between the site measurements and the modal analysis suggests that the cause of the vibration of the system is resonance. All 3 proposed solutions result in an increase of the natural frequencies of the system. However, the simulation shows that a fixed constraint is more effective than an elastic restraint in changing the natural frequencies and thereby reducing the vibration of the structural system. The proposed Solution 3, with spring bumpers to constrain the pipes, has a longer construction time and is less effective in increasing the natural frequencies. Considering the analysis results and the site conditions, Solutions 1 and 2 are the optimal solutions for solving the resonance problem of the backwashing system, and the combination of Solutions 1 and 2 provides the best solution for the actual site situation.

Figure 2: A filter and piping system model.
Figure 6: Vibration mode of the 1st modal order.
Figure 22: The FFT spectrums of the original and new structure at Location 2.
Table 1: Outer diameter and thickness of pipes.
Table 2: Natural frequencies of the existing system.
Table 3: Natural frequencies of the system (1st approach).
Table 4: Natural frequencies of the system (2nd approach).
Table 5: Natural frequencies of the system (3rd approach).
Table 6: Natural frequencies of the new system.
Table 7: Natural frequencies of the new structure.
Table 8: Natural frequencies of the original and new system.

As summarized in Table 8, unlike Solution 3, both Solutions 1 and 2 increase the natural frequencies to a level higher than 10 Hz; Solution 3 could not achieve the same effect even with additional constraints. It can therefore be concluded that a fixed constraint performs better in changing the natural frequencies than an elastic restraint.
4,174.8
2017-08-22T00:00:00.000
[ "Engineering" ]
Proactive Content Delivery with Service-Tier Awareness and User Demand Prediction: Cost-effective delivery of massive data content is a pressing challenge facing modern mobile communication networks. In the literature, two primary approaches to tackle this challenge are service-tier differentiation and personalized proactive content caching. However, these two approaches have not been integrated and studied in a unified framework. This paper proposes an integrated proactive content delivery scheme that jointly exploits the availability of multiple service tiers and multi-user behavior prediction. Three optimal algorithms and one heuristic algorithm are introduced to solve the cost-minimization problems of multi-user proactive content delivery under different modelling assumptions. The performance of the proposed scheme is systematically investigated to reveal the impacts of proactive window size, service-tier price ratio, and traffic cost model on the system performance.

Introduction

The rapid proliferation of smart phones and the mobile Internet has driven an explosive growth of mobile data traffic demand. According to Cisco's report [1], global mobile data traffic will reach 49 exabytes per month by 2021. Among the various types of mobile applications, content delivery (e.g., web browsing, video streaming) consumes the majority of the mobile data traffic; a Cisco report [1] estimated that video content will account for 78% of the world's total mobile traffic in 2021. However, the high price of mobile data plans (e.g., cost per Mbyte) is still one of the main factors prohibiting the ubiquitous adoption of mobile video applications. Therefore, significant research interest has been attracted to designing a mobile content delivery network that is cost-friendly to massive content delivery services.

In contrast to the high price of mobile data plans, the overall utilization of the mobile communication network's capacity is relatively low. This is because the mobile traffic demand varies significantly across space and time [2-5], while the network is typically built to accommodate the peak traffic demand. Consequently, a large amount of "redundant capacity" (i.e., the difference between the actual traffic load and the network capacity) is not used during off-peak hours [6], resulting in a low overall utilization of the network. It is widely anticipated that improving the network utilization can help to reduce the cost per bit for mobile operators and ultimately the price per bit for mobile users.
Previous studies on mobile content delivery have either taken an ISP-centric perspective or a CP-centric perspective. To the best of our knowledge, studies that unify both perspectives are still rare. In this paper, we propose a content delivery scheme that integrates both perspectives. Our scheme can simultaneously exploit the availability of differentiated service tiers and the predictability of user behavior. The main contributions of our paper are as follows. First, we propose a proactive content delivery scheme with service-tier awareness and user behavior prediction for the purpose of cost reduction. Second, considering a baseline scheme of proactive content delivery with one time-slot, we derive the optimal content delivery policy that can minimize the long-term cost. Third, considering a generalized scheme of multi-time-slot proactive content delivery, we propose a near-optimal heuristic algorithm for cost reduction. The performances of the proposed schemes are systematically evaluated to reveal key insights into the impacts of various system parameters on the cost.

The remainder of this paper is organized as follows. Section 2 describes the system model. Sections 3 and 4 formulate and analyze the problems of proactive content delivery in the single-time-slot and multi-time-slot cases, respectively. Numerical results are presented in Section 5. Finally, conclusions are drawn in Section 6.

Model of Communication Service Tiers

We consider a system consisting of a CP, an ISP, and N users. The content data is delivered from the CP to users via the ISP, as shown in Figure 1. For simplicity, we assume that the ISP offers two service tiers: a primary traffic (PT) service and a secondary traffic (ST) service. For concreteness, we further assume that the ST only utilizes the redundant capacity of the network [6]. This assumption has two implications. First, ST has a strictly lower priority than PT, and therefore the unit cost of ST (e.g., dollars per kilobyte) is also cheaper than PT. The ratio of ST cost over PT cost is denoted as β, where 0 ≤ β ≤ 1. Second, the capacity of ST is upper bounded by the redundant capacity of the network. The total system capacity depends on the infrastructure deployment and network planning of the ISP, and once a network is rolled out, the system capacity is relatively stable. Redundant capacity is given by the difference between the system capacity and the primary traffic volume. Because the primary traffic volume fluctuates over time, the redundant capacity also changes dynamically. In practice, redundant capacity can be estimated by subtracting the primary traffic load, which can be measured in real time, from the pre-defined system capacity. We note that our paper focuses on the problem of proactive content delivery, which has a time-scale of seconds or minutes. Within such a time scale, the volume of redundant capacity can be treated as fixed. Therefore, our model captures the daily traffic fluctuation by a single parameter Cr_t, which indicates the currently available redundant capacity, i.e., the upper limit for ST at time t.
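Since the redundant capacity is the quantity that ultimately bounds the ST tier, a minimal sketch of how Cr_t could be estimated from a measured primary load is given below; the capacity value and load samples are illustrative assumptions, not figures from the paper.

```python
from typing import List

def redundant_capacity(system_capacity: float, primary_load: List[float]) -> List[float]:
    """Cr_t = max(0, system capacity - measured PT load) for each time slot t."""
    return [max(0.0, system_capacity - load) for load in primary_load]

# Illustrative numbers only: capacity and per-slot PT load in MB per slot.
capacity = 1000.0
pt_load = [950.0, 700.0, 400.0, 820.0]
print(redundant_capacity(capacity, pt_load))   # [50.0, 300.0, 600.0, 180.0]
```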
Within each service tier, the total traffic cost C(L) is a function of the traffic load L. The cost is interpreted as the cost to the ISP for secondary service provision (i.e., transmitting more data using redundant capacity). It is assumed that such a cost of the ISP is proportional to the cost of the CP to access communication services provided by the ISP. Two cost models are considered in our paper. One is the simple case of volume-based or linear cost, which means the cost per unit traffic remains unchanged regardless of the traffic load L.
In this case, we have C_l(L) = k_l L, where the cost is linearly proportional to the traffic load. The other case is a quadratic cost, where C_q(L) = k_q L². This is a commonly used model in the literature [18] to reflect the fact that the cost to the ISP of supporting higher data rates scales non-linearly with the data rate. Such a nonlinear scaling is rooted in Shannon's capacity formula: once the physical bandwidth is fixed, the data rate can be improved by increasing the transmit power, but with diminishing returns. In the literature, the cost-traffic volume function is commonly approximated by a quadratic function for analytical convenience [18].

Model of User Behavior

We assume that time is slotted into unit intervals indexed by t. It is assumed that the CP is able to make probabilistic predictions about the users' content request behavior based on historical traces. The prediction tells that user n (n ∈ {1, 2, . . . , N}) will consume a total of ξ_{n,t} amount of data at time slot t with probability p_{n,t}, where 0 ≤ ξ_{n,t} < ∞ and 0 ≤ p_{n,t} ≤ 1. A random binary variable I_{n,t} is used to indicate whether the nth user's request actually occurs at time t, i.e., I_{n,t} = 1 with probability p_{n,t} and I_{n,t} = 0 otherwise. It is assumed that the arrival and content consumption behaviors of different users are independent of each other. Furthermore, user demands are assumed to be cyclic-stationary. This assumption is supported by various measurements showing that the user demand fluctuates in a periodic pattern [40,43] (e.g., on a daily basis). As a result, we can group multiple time slots into a cyclic period. The number of time slots in a period is denoted as T. It follows that ξ_{n,t+T} = ξ_{n,t} and p_{n,t+T} = p_{n,t}.

Protocols of Proactive Content Delivery

We propose a protocol that is simultaneously aware of the service tiers and of user behavior predictions. This requires certain degrees of collaboration and information sharing between the ISPs and CPs. At time slot t, the protocol uses the PT service tier to satisfy the users' instantaneous content demand in the current slot. This is called reactive content delivery (RCD). Meanwhile, if redundant capacity is available, the protocol proactively pushes a portion of the forecasted content demand of the upcoming several time slots using the ST service tier. This is called proactive content delivery (PCD). As the process iterates, the content demand at time t will be partly delivered by RCD via the PT service tier and partly by PCD via the ST service tier. Unlike traditional proactive caching schemes, the main difference here is that RCD and PCD are associated with the PT service tier and the ST service tier, respectively. Suppose that PCD is conducted over a window of W consecutive time-slots,
where 1 ≤ W ≤ T and τ ∈ {1, 2, . . . , W}. When W = 0, the content delivery mechanism is purely reactive, which serves as our baseline case. The case of W = 1 is called single-slot PCD (SPCD), while the more general case of 1 < W ≤ T is called multi-slot PCD (MPCD). We use x_{n,t}(τ) to denote the portion of data expected for user n at time t + τ that is proactively pushed to the user at time-slot t. Here τ denotes how many time slots ahead the data is proactively cached. The main parameters of this paper are summarized in Table 1.

Problem Formulation

This section considers the case of proactive content delivery with a single time-slot, where forecasted user demands can be sent one time-slot ahead using the ST service tier. At a given time-slot t, the cost is composed of two parts: one is the cost generated by RCD through the PT service tier, and the other is the cost generated by PCD through the ST service tier. The time-average expected cost can be written as

(1/T) Σ_{t=1}^{T} E[ C( Σ_{n=1}^{N} (ξ_{n,t} − x_{n,t}) I_{n,t} ) + β C( Σ_{n=1}^{N} x_{n,t+1} ) ]   (3)

where we define an N × T matrix x, the elements of which are x_{n,t}, ∀n, t. In Equation (3), x_{n,t+1} represents the portion of proactively pushed data for the next time slot t + 1, and the expectation is taken over the random variables I_{n,t}. The received data for each user should not exceed the user's demand at time t, i.e.,

0 ≤ x_{n,t} ≤ ξ_{n,t},  ∀n, t   (4)

and the total amount of proactively pushed data cannot exceed the upper limit of the redundant capacity at the current time-slot t, i.e.,

Σ_{n=1}^{N} x_{n,t+1} ≤ Cr_t,  ∀t   (5)

The main objective is to minimize the total cost over the feasible space of x. The optimization problem can be formulated as

min_x (3)  subject to (4) and (5)   (6)

For comparison purposes, we also consider the baseline case of pure RCD. The time-average expected cost in this case is given by

(1/T) Σ_{t=1}^{T} E[ C( Σ_{n=1}^{N} ξ_{n,t} I_{n,t} ) ]   (7)

where Σ_{n=1}^{N} ξ_{n,t} I_{n,t} is the actual traffic load requested at time t. In this case, the system is purely reactive to the users' requests and there is no decision variable to be optimized.

Linear Cost Model

Assuming the linear cost model, we can substitute C_l(L) into Equation (3) to yield

(k_l/T) Σ_{t=1}^{T} Σ_{n=1}^{N} [ p_{n,t} ξ_{n,t} − (p_{n,t} − β) x_{n,t} ]   (8)

We note that the property of cyclic-stationary user demand (i.e., x_{n,t} = x_{n,t+T}) is used in Equation (8) to give Σ_{t=1}^{T} x_{n,t+1} = Σ_{t=1}^{T} x_{n,t}. From Equation (8), we can see that the optimization problem in Equation (6) becomes a linear programming problem, so it can easily be solved by classic methods such as the dual interior point method. A closer look at Equation (8) reveals a key insight: both the cost and the PCD decision variable x are determined by the relative difference between the cost ratio β and the users' arrival probabilities p_{n,t}. When p_{n,t} > β, PCD for the nth user is beneficial for cost reduction; when p_{n,t} < β, PCD for the nth user becomes harmful because there is a higher likelihood that the pushed data will not actually be consumed by the user, so that the resource used for PCD is wasted. When p_{n,t} = β, PCD for the nth user makes no difference.
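Under the linear model, each slot decouples into a fractional-knapsack-style allocation, so the threshold insight can be turned into a simple greedy rule. The sketch below is a hedged illustration of that rule under the objective stated above; it is not the dual interior point LP solver the paper refers to, and the numbers in the example are made up.

```python
def spcd_push_linear(xi_next, p_next, beta, cr_t):
    """Greedy single-slot PCD allocation under a linear cost model.

    xi_next[n]: predicted demand of user n for the next slot (MB)
    p_next[n]:  predicted arrival probability of user n for the next slot
    beta:       ST/PT cost ratio
    cr_t:       redundant capacity available for pushing in the current slot (MB)
    Returns a list x with the amount proactively pushed to each user.
    """
    x = [0.0] * len(xi_next)
    # Only users with p > beta yield expected savings; serve the largest margins first.
    order = sorted(range(len(xi_next)), key=lambda n: p_next[n] - beta, reverse=True)
    remaining = cr_t
    for n in order:
        if p_next[n] <= beta or remaining <= 0.0:
            break
        x[n] = min(xi_next[n], remaining)
        remaining -= x[n]
    return x

# Example with made-up numbers: beta = 0.4, 300 MB of redundant capacity.
print(spcd_push_linear([200.0, 150.0, 120.0], [0.9, 0.3, 0.6], 0.4, 300.0))
# -> [200.0, 0.0, 100.0]
```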
Quadratic Cost Model

When the cost is a quadratic function of the traffic load, the cost increases rapidly as the load increases. In this case, PCD becomes more useful because it helps to smooth the traffic load and reduce fluctuations over time. Substituting C_q(L) into (3) yields Equation (9), which contains cross terms of the form x_{n,t+1} x_{m,t+1}. We can see that in this case we no longer have a simple intuitive solution for x. However, it can be proved that the problem in Equation (9) is a convex optimization problem (see Appendix A). Hence, the optimal solution can readily be obtained using standard convex optimization techniques.

Problem Formulation

As a generalization of the single-time-slot case, portions of a user's predicted demand can be pushed to the user multiple time-slots ahead through the ST service tier. The time-average expected cost in this case is given by Equation (10), where the decision variable x is an N × T × W matrix whose elements are given by x_{n,t}(τ), ∀n, t, τ. What differs from the single-time-slot case is that user n's cached data at time t is the accumulated data pushed during the previous W time-slots, i.e., Σ_{τ=1}^{W} x_{n,t−τ}(τ). PCD for each user is constrained by the individual user demand, i.e.,

x_{n,t−τ}(τ) ≥ 0 and Σ_{τ=1}^{W} x_{n,t−τ}(τ) ≤ ξ_{n,t},  ∀n, t   (11)

In addition, the total amount of PCD data of all users at any time-slot t cannot exceed the current redundant capacity, i.e.,

Σ_{n=1}^{N} Σ_{τ=1}^{W} x_{n,t}(τ) ≤ Cr_t,  ∀t   (12)

The optimization problem can then be formulated as minimizing the cost in Equation (10) over x subject to constraints (11) and (12).   (13)

Linear Cost Model

Substituting the linear cost function C_l(L) into Equation (10) gives Equation (14), in which the equality (b) follows from the cyclic-stationarity of x_{n,t}(τ). We can see that the optimization problem reduces to a linear programming problem. Similar to the single-time-slot case, the effectiveness of PCD still depends on the relative difference between the traffic cost ratio β and user n's arrival probability p_{n,t}. However, the proactive data user n receives from the different time-slots, i.e., x_{n,t−τ}(τ), depends on the redundant capacity of the previous W time-slots. This requires proper monitoring of the real-time redundant capacity over multiple time slots.

Quadratic Cost Model

Substituting the quadratic cost function C_q(L) into Equation (10) yields Equation (15). This is a complicated non-linear optimization problem and there is no straightforward proof of its convexity. However, because the utility function can easily be evaluated in closed form, general-purpose heuristic search algorithms such as pattern search [44] can be used to solve the problem effectively.

Simulation Results

This section presents numerical results for our previous analysis. For illustration purposes, we set T = 10 and N = 3. User n's demand at time t is drawn from a uniform distribution on [0, 500]; the arrival probability of user n at time t follows a uniform distribution on [0, 1]. The scaling constants in the linear and quadratic cost models are k_l = 2 and k_q = 0.005, respectively. The case of pure RCD, where there is no proactive caching, is also presented as a performance benchmark.
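As a rough illustration of how the single-slot quadratic case can be handled with an off-the-shelf convex solver, the sketch below sets up a random instance with the parameters just listed and solves a simplified certainty-equivalent version of the objective (random arrivals replaced by their probabilities). It is an assumption-laden stand-in, not the exact expectation of Equation (9) and not the authors' code.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T, N = 10, 3
k_q, beta, Cr = 0.005, 0.5, 400.0
xi = rng.uniform(0.0, 500.0, size=(N, T))   # predicted demand (MB)
p = rng.uniform(0.0, 1.0, size=(N, T))      # arrival probabilities

# x[n, t]: data pre-pushed for consumption at slot t (single-slot look-ahead).
# With a constant Cr, the index of the slot in which the push happens does not matter here.
x = cp.Variable((N, T), nonneg=True)

cost = 0
for t in range(T):
    reactive = cp.sum(cp.multiply(p[:, t], xi[:, t] - x[:, t]))   # expected PT load at slot t
    proactive = cp.sum(x[:, t])                                   # ST load pushed for slot t
    cost += k_q * cp.square(reactive) + beta * k_q * cp.square(proactive)

constraints = [x <= xi] + [cp.sum(x[:, t]) <= Cr for t in range(T)]
problem = cp.Problem(cp.Minimize(cost / T), constraints)
problem.solve()

print("certainty-equivalent cost with PCD:", round(problem.value, 2))
print("pure RCD counterpart:",
      round(sum(k_q * float(np.dot(p[:, t], xi[:, t])) ** 2 for t in range(T)) / T, 2))
```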
Case of Single Time-Slot

Using the linear cost model, Figure 2 shows how the time-average expected cost and the redundant capacity utilization change as a function of the ST/PT cost ratio β. The results are obtained by solving the linear optimization problem defined in Section 3.2 and averaging over 100 realizations. It is observed that a smaller value of β leads to a lower cost and a higher utilization of the redundant capacity. This is expected because a smaller value of β better encourages the use of PCD via the ST service tier. When β = 1, which means the two service tiers have the same cost, there is no performance gain from using PCD at all. Moreover, we can see that a larger amount of redundant capacity helps to reduce the cost because more user demand can be accommodated via the ST service tier.

Using the quadratic cost model, Figure 3 shows how the time-average expected cost and the redundant capacity utilization change as a function of the ST/PT cost ratio β. The results are obtained by solving the convex optimization problem defined in Section 3.3. The general trend observed in Figure 3 is similar to that in Figure 2, i.e., a smaller value of β leads to a lower cost and a higher utilization of the redundant capacity. However, a key difference to Figure 2 occurs when β approaches 1, where PCD is shown to be useful for cost reduction even when the costs of ST and PT are the same. For example, at Cr_t = 400 and β = 1, the time-average cost can be reduced by nearly 32% (as opposed to 0% in Figure 2) and the redundant traffic utilization is about 43% (as opposed to 0% in Figure 2). This is because PCD can help to smooth the user demand in time, while a more balanced user demand yields a lower cost under the quadratic cost model.
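The load-smoothing argument can be seen with two numbers: under a quadratic cost, a balanced load is strictly cheaper than a peaky load carrying the same total volume. The sketch below uses the k_q value from the simulation setup; the load figures are made up.

```python
k_q = 0.005
peaky = [800.0, 200.0]        # MB in two slots
balanced = [500.0, 500.0]     # same total volume, evenly spread
cost = lambda loads: sum(k_q * L ** 2 for L in loads)
print(cost(peaky), cost(balanced))   # 3400.0 vs 2500.0
```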
Case of Multiple Time-Slots

Figure 4 shows three plots related to the performance of multi-time-slot PCD under the linear cost model. The results are obtained by solving the linear optimization problem defined in Section 4.2. The general conclusions drawn from Figure 4 are the same as those from Figure 2, i.e., PCD is not useful when there is no cost difference between the ST and PT service tiers. Apart from this, Figure 4a-c further reveal the impact of the proactive window size on the performance. It is observed that increasing the window size does help to further reduce the cost, but the improvement is limited and becomes insignificant when W is greater than five. In Figure 4c, we can see that as the value of β increases, the effectiveness of cost reduction by increasing W decreases. This suggests that when the costs of the two service tiers are comparable, increasing the proactive window size W becomes less effective for cost saving.

Finally, Figure 5 shows three plots related to the performance of multi-time-slot PCD under the quadratic cost model. The results are obtained by solving the non-linear optimization problem defined in Section 4.3 using pattern search. Compared with Figure 4, Figure 5 shows that using PCD is always useful for cost reduction regardless of the value of β. Even when β = 1, the cost can still be reduced by 53% thanks to the load smoothing effect. Moreover, increasing the window size also helps with load smoothing, and is hence beneficial for all values of β. Table 2 further demonstrates the smoothing effect of multi-time-slot PCD on the network traffic load. Given Cr_t = 400 and β = 0.5, the variance of the actual traffic across different time slots is shown as a function of the window size. We can see that increasing W helps to reduce the variance of the traffic load, but has diminishing returns, especially when W becomes greater than five.

The above simulation results show that both single-time-slot and multi-time-slot PCD can bring good performance gains for the CP. The performance gain increases with a lower cost ratio β and a larger window size W. However, the performance gain is fundamentally constrained by the volume of redundant capacity. In practice, this means close
cooperation must be established between the CP and the ISP so that the volume of redundant capacity in the current network can be measured and shared in real time. For the ISP, our model helps to improve the overall utilization of the network infrastructure and generate additional revenue. For the CP, our model helps to attract users and promote content consumption by reducing the cost of content delivery per bit. In summary, our model can offer a win-win situation for the ISP and the CP.

Conclusions

This paper proposes a personalized PCD scheme that aims to minimize the total cost of content delivery by means of multiple service-tier transmission and multi-user behavior prediction.

Figure 1. Illustration of the system model.
Figure 2. (a) The time-average expected cost as a function of the ST/PT cost ratio β; (b) the redundant capacity utilization as a function of the ST/PT cost ratio β (linear cost model, varying redundant capacity Cr_t).
Figure 3. (a) The time-average expected cost as a function of the ST/PT cost ratio β; (b) the redundant capacity utilization as a function of the ST/PT cost ratio β (quadratic cost model, varying redundant capacity Cr_t).
Figure 4. (a) The time-average expected cost as a function of the ST/PT cost ratio β; (b) the redundant capacity utilization as a function of the ST/PT cost ratio β; (c) the time-average expected cost as a function of the proactive window size W (linear cost model, Cr_t = 400, varying window size W).
Figure 5. (a) The time-average expected cost as a function of the ST/PT cost ratio β; (b) the redundant capacity utilization as a function of the ST/PT cost ratio β; (c) the time-average expected cost as a function of the proactive window size W (quadratic cost model, Cr_t = 400, varying window size W).
Table 1. Main parameters used in our model.
Table 2. The variance of traffic demand.
6,774.6
2019-01-02T00:00:00.000
[ "Computer Science" ]
“Here Are the Rules: Ignore All Rules”: Automatic Contradiction Detection in Spanish : This paper tackles automatic detection of contradictions in Spanish within the news domain. Two pieces of information are classified as compatible, contradictory, or unrelated information. To deal with the task, the ES-Contradiction dataset was created. This dataset contains a balanced number of each of the three types of information. The novelty of the research is the fine-grained annotation of the different types of contradictions in the dataset. Presently, four different types of contradictions are covered in the contradiction examples: negation, antonyms, numerical, and structural. However, future work will extend the dataset with all possible types of contradictions. In order to validate the effectiveness of the dataset, a pretrained model is used (BETO), and after performing different experiments, the system is able to detect contradiction with a F 1 m of 92.47%. Regarding the type of contradictions, the best results are obtained with negation contradiction ( F 1 m = 98%), whereas structural contradictions obtain the lowest results ( F 1 m = 69%) because of the smaller number of structural examples, due to the complexity of generating them. When dealing with a more generalistic dataset such as XNLI, our dataset fails to detect most of the contradictions properly, as the size of both datasets are very different and our dataset only covers four types of contradiction. However, using the classification of the contradictions leads us to conclude that there are highly complex contradictions that will need external knowledge in order to be properly detected and this will avoid the need for them to be previously exposed to the system. Introduction One of the worst problems in the current information society is disinformation. It is a wide-ranging problem that alludes to the inaccuracy and lack of veracity of certain information that seeks to deliberately deceive or misdirect [1]. This phenomenon spreads on a viral scale and can therefore result in massive confusion about the real facts. Disinformation often involves a set of contradictory information that misleads users. Being able to automatically detect contradictory information becomes essential when the amount of information is so large that it becomes unmanageable and therefore confusing [2]. Contradiction, as described in [3], occurs between two sentences A and B when there exists no situation whatsoever in which A and B are both true. Therefore, in natural language processing (NLP), the task of contradiction identification implies detecting natural language statements conveying information about events or actions that cannot simultaneously hold [4]. In the current context, the automatic detection of contradictions would contribute to detect unreliable information, as finding contradictions between two pieces of information dealing with the same factual event would be a hint that at least one of the two pieces of news is false. A definition of different types of contradictions were presented in [3], where the authors defined a typology for English contradiction, finding two main categories: (1) those occurring via antonymy, negation, and date/number mismatch, which are relatively simple to detect, and (2) contradictions arising from the use of factive or modal words, structural and subtle lexical contrasts, as well as world knowledge (WK). 
The task of automatic detection of contradictory information is tackled as a classification problem [5] in which two pieces of text talking about the same fact, within the same temporal frame, are compared. If we define a statement as s = (i, f, t), where i refers to the information provided about fact f occurring at time t, we classify two pieces of text as follows:

• Compatible information: two pieces of text, s1 and s2, are considered compatible if, given s1 = (i1, f1, t1) and s2 = (i2, f2, t2), the following holds: f1 = f2, t1 = t2, and the information i1 is consistent with i2.
• Contradictory information: two pieces of text, s1 and s2, are considered contradictory if, given s1 = (i1, f1, t1) and s2 = (i2, f2, t2), the following holds: f1 = f2, t1 = t2, and the information i1 is incongruent with i2.
• Unrelated information: two pieces of text, s1 and s2, are considered unrelated if, given s1 = (i1, f1, t1) and s2 = (i2, f2, t2), the following holds: f1 ≠ f2 or t1 ≠ t2.

Thus, a news item is classified as contradictory when, given the same fact (it is considered that the same fact in two different news items could be expressed with different event mentions) within the same time frame, the fact-related information is incongruent in the two news items being considered. Nowadays, the coronavirus crisis has heightened the need for reliable, non-contradictory information. However, it is frequent to find different information about the same fact in different media, sometimes biased by a certain political spectrum. For example, here is a real case of contradiction in two different Spanish media outlets about the same information. The date of publication for the two news items, taken from OkDiario and El Pais, is 19 March 2021: Source "El Pais" (https://elpais.com/opinion/2021-03-19/confianza-en-las-vacunas.html, accessed on 22 March 2021): ". . . La Agencia Europea del Medicamento ha ratificado que la vacuna de AstraZeneca es segura y eficaz y que los beneficios que aporta superan claramente a los posibles riesgos. Despeja así las dudas surgidas ante la notificación de una treintena de casos de trombosis. . . " (". . . The European Medicines Agency has confirmed that AstraZeneca's vaccine is safe and effective and that the benefits clearly outweigh the possible risks. This clears up the doubts that arose after the notification of some thirty cases of thrombosis. . . ") These two pieces of information concerning vaccination are contradictory: while the first states that episodes of thrombosis have occurred after inoculation, the second rules out that a relationship exists between the cases of thrombosis that have occurred and the vaccine. This type of disinformation, caused by contradictory information in the traditional media, is potentially dangerous, as it may cause a public health problem generated by a reluctance to take up the offer of vaccination against COVID-19. Therefore, there is a need to alert users to these contradictions. Most of the resources and systems for contradiction detection have been developed for English [6-9]. However, despite the fact that Spanish is one of the most widely spoken languages in the world, there are no powerful resources for carrying out the task of detecting contradictions from the direct perspective of this language. Currently, XNLI [10] is a cross-lingual dataset divided into three partitions: training, development, and test. The training set is in English, and the development and test sets are in 15 different languages.
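Read operationally, the three-way definition given at the start of this section reduces to a simple decision rule once a judgement of whether two pieces of information are mutually consistent is available. The sketch below encodes that rule; the `information_consistent` predicate is a hypothetical placeholder for the hard NLP step that the models discussed later are trained to approximate.

```python
from typing import Callable, Tuple

Statement = Tuple[str, str, str]  # (information i, fact f, time frame t)

def classify_pair(s1: Statement, s2: Statement,
                  information_consistent: Callable[[str, str], bool]) -> str:
    """Label a pair of statements as Compatible, Contradiction, or Unrelated."""
    i1, f1, t1 = s1
    i2, f2, t2 = s2
    if f1 != f2 or t1 != t2:            # different fact or different time frame
        return "Unrelated"
    if information_consistent(i1, i2):  # same fact and time, congruent information
        return "Compatible"
    return "Contradiction"              # same fact and time, incongruent information

# Toy usage with a trivially simple consistency check (real systems use a trained model).
same_text = lambda a, b: a.strip().lower() == b.strip().lower()
print(classify_pair(("vaccine is safe", "AZ vaccine", "2021-03-19"),
                    ("vaccine is safe", "AZ vaccine", "2021-03-19"), same_text))  # Compatible
```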
The XNLI has been used to create contradiction detection systems for training in English and predicting in other languages, obtaining good performance results. Each example in XNLI is classified as Contradiction, Entailment, or Neutral. However, to deal with contradictions it is important to consider their wide range and large variety of features [3]. Therefore, the purpose of this paper is to demonstrate that differentiating between the different types of contradictions can help to perform a more specific treatment of them, thereby enhancing capability to detect them in a broader way without having many previous examples of them. The XNLI dataset does not distinguish between different types of contradictions in its annotation, and the Spanish language is only available in the development and test sets manually translated from English. Both of these facts may affect the performance of models created from XNLI dataset for different languages. Besides, in this sense, the novelty of the proposed work is that we focus the proposal beyond covering the detection of the contradiction in Spanish, towards being able to detect what type of contradiction it is. Furthermore, the contradiction detection system can be applied to detect different types of disinformation such as incongruent headlines or news published by different media, whether traditional or social, that seek to inform about the same fact but the information provided is inconsistent, and thereby inaccurate and unreliable. The main contributions of this research are the following: • First, as there is a lack of Spanish resources created from scratch for this task, a new Spanish dataset is built with different types of compatible, contradictory, and unrelated information for the purpose of creating a language model that is capable of automatically detecting contradictions between two pieces of information in this language. The novelty of this dataset and what differentiates it from others is the fact that in addition to detecting contradictions, each contradiction is annotated with a finegrained annotation, differentiating between different types. Specifically, four of the types of contradictions defined in [3] are covered: antonymy, negation, date/number mismatch, and structural. In addition, the dataset is based on the study of incongruent headlines in traditional media, and it contains different types of contradictions between headlines and body texts in the Spanish language. • Second, a set of experiments using a pretrained model as BETO [11] has been applied to build the language model and validate its effectiveness. Note that at this stage of the research, covering only four types of contradictions is a real limitation of our dataset due to the wide spectrum of contradictions existing between texts. However, it allows the structure and design of preliminary systems for detecting contradiction in Spanish. The creation of an automatic process for classifying contradictions between texts, scaling from trivial to complex cases, could contribute to the design of hybrid systems operating in human-machine environments, providing additional information to humans about the type of contradiction encountered in an automatic system, which is the future line of our research. The rest of the paper is organized as follows. Section 2 describes the previous work and existing resources on contradiction. Section 3 presents the definition of the dataset benchmark. 
Section 4 describes the model, the evaluation setup used, and experiments conducted in this research. Section 5 presents the results and discussion. Finally, our conclusions and future work are presented in Section 6. Related Work In this section, a brief review of contradiction detection methods is presented. Besides, some research in the field for specific domains is introduced. Finally, because the most important aim of this research is the creation of a new dataset for this field, a review of the main existing resources is provided below. • Linguistic features approaches The most common approaches to contradiction detection in texts use the linguistic features extracted from texts to build a classifier by training from the annotated examples, such as the works in [3,5,12,13]. Early research on contradiction detection within the field of natural language processing was reported by the authors of [12] whose work tackled contradictions by means of three types of linguistic information: negation, antonymy, and semantic and pragmatic information associated with discourse relations. After evaluation experiments, over 62% of accuracy was obtained due to the fact that there are more types of contradictions possible in texts. Linguistic evidences such as polarity, numbers, dates and times, antonymy, structure, factivity, and modality features were used by the authors of [3] to detect contradiction. An approach for detecting three different types of contradiction (negation, antonyms, and numeric mismatch) was proposed in [5]. This approach deploys a Recurrent Neural Network (RNN) using long short-term memory (LSTM) and Global Vectors for Word Representation (GloVe) and included four linguistic features extracted from the text: (1) Jaccard Coefficient, (2) Negation, (3) IsAntonym, and (4) Overlap Coefficient. Simple text similarity metrics (cosine similarity, f1 score, and local alignment) were used as baseline in [13], obtaining good results for contradiction classification. This approach used two datasets built with examples of tweet pairs. A model to detect contradiction and the architecture that enables validation of the model was proposed by [4]. The model defined the extraction of semantic relations between a pair of sentences and verified some rules to detect contradictions. Furthermore, this author defined contradiction measures by considering the structure of relations extracted from texts and the level of uncertainty attached to them. Other authors [14] combined shallow semantic representations derived from semantic role labeling (SRL) with binary relations extracted from sentences in a rule-based framework, and the authors of [15] extended the analysis using background knowledge. A contradiction-specific word embedding (CWE) model and a large-scale corpus of contrasting pairs were proposed in [16]. This approach improved the results in contradiction detection in SemEval 2014 [17]. This research concluded that traditional word embedding learning algorithms have been highly successful in accomplishing the main NLP tasks but most of these algorithms are not powerful enough for the contradiction detection task [16]. Contradiction Detection in Specific Domains There is also some specific domain research regarding contradiction detection. In medical domain, the authors of [18] detected contradiction by comparing subject-relation-object tuples of a text pair in medical research. 
This work detected 2236 contradictions automatically, but these contradictions were checked manually and only 56 were correct. A classification system based on Support Vector Machine (SVM), with some features (negation, antonyms, and similarity measures) that help to detect contradiction in medical texts was created in [19]. This system detected antonyms and negation contradiction but not numerical contradiction. These results improved the state-of-the-art in a medical dataset. Regarding the tourism domain, other research provides an analysis of the type of contradictions present in online hotel reviews. In addition, a model for the detection of numerical contradiction is proposed for the tourism industry [20]. Contradiction Detection Resources Currently, the availability of large annotated datasets for contradiction detection are mainly present in English [21], such as SNLI [6], MultiNLI (including multiple genres) [7], or even the cross-lingual dataset XNLI [10]. These datasets have allowed the training of complex deep learning systems, which require very large corpora to obtain successful results. There are numerous studies that use these resources to create Recognizing Textual Entailment (RTE) systems. These systems usually use Transformers Learning models like BERT [22] and RoBERTa [23] to improve their predictions. BERT and RoBERTa are multi-layer bidirectional Transformer encoders that are designed to pre-train from text without labels. These pretrained models have the advantage of being able to be fine-tuned with just one additional layer of output, a feature that enables them to be used to create state-of-the-art models in various NLP tasks. In addition, there is research that merges deep learning models with external knowledge. The Wordnet relations were introduced in [8] to enrich neural network approaches in natural language inference (NLI), which is a previous step in contradiction detection. In another sense, the research developed in [9] introduces SRL information that allows the improvement of models based on Transfer Learning. To the authors' knowledge, there are few studies that address the detection of contradictions in languages other than English, such as those in [21,24]. Machine translation of SNLI from English into German was done in [21]. They built a model on the German version of SNLI and the results of the predictions are very similar to the same model trained on the original SNLI version in English. A large-scale database of contradictory event pairs in the Japanese language has been created by [24]. This database is used to generate coherent statements for a dialogue system. As for multilinguality, current research in NLI is mainly conducted in English. Concerning other languages, cross-lingual datasets were provided in [25] and XNLI in [10]; however, they relied on translation-based approaches or multilingual sentence encoders. The detection of contradictions is a very complicated task within the NLP [21]. It would be convenient to have powerful datasets in Spanish that allow the creation of specific systems to detect contradictions in Spanish. Furthermore, existing datasets do not determine the different types of contradictions, whereas considering a fine-grained annotation in the contradictions would be more effective for dealing with them. 
Given these considerations, one of the main aims of this work is the development of a Spanish dataset that contains a balanced number of compatible, contradictory, and unrelated pieces of information in a first step and, subsequently, differentiates between the different types of possible contradictions. The process followed to build the dataset is described in detail in the next section.

ES-Contradiction: A New Spanish Contradiction Dataset

Our dataset (ES-Contradiction) is focused on contradictions that are likely to appear in traditional news items written in the Spanish language. Unlike other datasets, in the dataset proposed in this work, contradictions are annotated by distinguishing the type of contradiction according to its specific characteristics. Thanks to this fine-grained classification, complex contradictions can be treated more precisely in the future. In order to create the ES-Contradiction dataset, news articles from a renowned Spanish source were automatically collected, including the headline and body text. According to the journalistic structure of a news item, the headline is the title of the news article, and it provides the main idea of the story; normally, in one sentence, it summarizes the basic and essential information about the story. The main objective of the title is to attract the reader's attention. A headline is therefore expected to be as effective as possible, without losing accuracy or becoming misleading [26]. Therefore, finding contradictions between headlines and body texts is a crucial task in the fight against the spread of disinformation. In the current state of the dataset, the news is focused on two domains, economics and politics, although the ultimate goal will be automatic cross-domain contradiction detection.

Dataset Annotation Stages

The dataset was built in four stages, outlined and detailed below: (1) extracting information from the data source, (2) modifying the news headline according to the different types of contradictions, (3) classifying the relationship between headline and body text (Compatible or Contradiction), and (4) randomly mixing headlines and body texts (Unrelated). In stage 2, headlines are altered according to the contradiction type; for example, for the contradiction types based on antonymy and on numerical mismatch:

• ANTONYM: This amendment consists of replacing a word of the headline with an antonym.
(a) Original headline: "El Gobierno se compromete a subir los salarios a los empleados públicos tras los comicios" ("The Government pledges to raise public employees' salaries after the elections")
(b) Modified headline: "El Gobierno se compromete a bajar los salarios a los empleados públicos tras los comicios" ("Government pledges to cut public employees' salaries after the elections")
• NUMERIC (Con_Num): This amendment consists of changing numbers, dates, or times appearing in the headline.
(a) Original headline: "La economía británica ha crecido un 3% menos por el brexit, según S&P" ("UK economy has grown by 3% less due to Brexit, says S&P")
(b) Modified headline: "La economía británica ha crecido un 5% menos por el brexit, según S&P" ("UK economy has grown by 5% less due to Brexit, says S&P")

These alterations change the semantic content of the sentence, making it contradictory to the previous headline and body text. The annotation process was carried out by two independent annotators who were trained by an expert annotator.

3. Classifying the relationship between the headline and the body text: The semantic relationship between the headline and the body text was annotated in two phases. The first phase consisted of classifying the information as Compatible (compatible information) or Contradiction (contradictory information).
In the second phase, in the case of Contradiction, the type of contradiction was also annotated (Negation, Antonym, Numeric, Structure). This stage involved four annotators who were trained to detect semantic relationships between pairs of texts. 4. Randomly mixing headlines and body texts: The news items reserved in the first stage were used to generate unrelated examples (Unrelated). The headlines were separated from the corresponding body texts and all the headlines were randomly mixed with the body texts. In the mixing process, it was verified that a headline was never paired with its own body text. This step was done automatically without the intervention of the annotators. Dataset Description The dataset consists of 7403 news items, of which 2431 are Compatible headline-body news items, 2473 are Contradictory headline-body news items, and 2499 are Unrelated headline-body news items. This represents a balanced dataset across the three main classes. The dataset split sizes for each annotated class are presented in Table 1. We partitioned the annotated news items into training and test sets.
Table 1. Dataset split sizes for each annotated class.
              Compatible  Contradiction  Unrelated
Training            1703           1733       1755
Test                 728            740        744
Total items         2431           2473       2499
As can be seen in Table 2, our dataset contains examples of each type of contradiction. However, it is important to clarify that there are few examples of structure contradiction, given the complexity of finding sentences that allow for this type of modification. Dataset Validation Due to the particularities of the dataset annotation process, it was necessary to validate the second and third stages of the process. For the second stage, a super-annotator validation was conducted, while for the third stage, an inter-annotator agreement was carried out. We randomly selected 4% of the Compatible and Contradiction pairs (n = 200) to carry out the dataset validations. Super-Annotator Validation For the second stage, it was not possible to compute an inter-annotator agreement because this stage consists of headline modifications and the possible variations are infinite. In this case, a manual review of the modified headlines was performed by the super-annotator to detect inconsistencies with the indications in the annotation guide. Only 2% of the analyzed examples presented inconsistencies with the annotation guide, corroborating the validity of this stage. Inter-Annotator Agreement In order to measure the quality of the third-stage annotation, an inter-annotator agreement between two annotators was performed. In cases where there was no agreement, a consensus process was carried out among the annotators. Using Cohen's kappa [27], κ = 0.83 was obtained, which validates the third-stage labeling. Experiments and Evaluation Metrics A system capable of detecting contradictions is highly relevant as it would enable the improvement and support of other tasks that involve detecting contradictory pairs (fact-checking or stance detection). To test the validity of the newly created Spanish contradiction dataset in this task, a baseline was created that is based on the BETO (https://github.com/dccuchile/beto, accessed on 22 March 2021) model described in [11], which was previously pretrained on Spanish data: Wikipedia texts and all the Spanish-language sources of the OPUS Project [28] were used as its pretraining data. The model is based on the BERT [22] model, and it performs a series of optimizations similar to those performed in the RoBERTa model [23].
As with the BERT model, the input sequence to the model is the headline text concatenated with the body text. The flexibility provided by BERT-based models allows us to create competitive baselines by fine-tuning the model on the dataset to be predicted [22]. Experimental Setup The model was implemented using the Simple Transformers (https://simpletransformers.ai/ (accessed on 3 March 2021)) and PyTorch (https://pytorch.org/ (accessed on 3 March 2021)) libraries. In our experiments, the hyperparameter values of the model are a maximum sequence length of 512, a batch size of 4, a learning rate of 2e-5, and training performed for 3 epochs. These values were established after the cross-validation experiment (see Section 5.2). Experiments The main objective of the experimentation proposed in this research is to demonstrate that a model is able to learn how to automatically detect contradiction types and contradictions with high accuracy from the ES-Contradiction dataset. The BETO model was configured as indicated in Section 4.1, and the following experiments were performed: 1. Predicting all classes: this experiment uses the whole dataset to distinguish the Compatible, Contradiction, and Unrelated classes. 2. K-fold cross-validation: this experiment estimates the error and selects the hyperparameters of the model. 3. Detecting Contradiction vs. Compatible information: the Unrelated class is removed and the model distinguishes between compatible and contradictory pairs. 4. Detecting specific types of contradictions: this experiment uses the contradiction types in Table 2 to detect the types of Contradiction between pairs. The training and test sets are used for training and testing. 5. Comparison between XNLI and our dataset: This experiment trains by using the machine translation into Spanish of the XNLI dataset (https://github.com/facebookresearch/XNLI, accessed on 29 March 2021), and uses the Spanish test set of the XNLI corpus and our test set. The XNLI dataset has 3 classes (Entailment, Contradiction, and Neutral). Therefore, it was necessary to match them with our dataset. The Neutral class of the XNLI dataset and the Unrelated class of our dataset were eliminated, whereas the Entailment class was associated with our Compatible class and the Contradiction class with our Contradiction class. Evaluation Metrics In order to evaluate the experiments, both the class-wise F1 and the macro-averaged F1 (F1m), computed as the mean of the per-class F1 scores, are used; the latter also addresses the imbalance among the less represented classes. The advantage of this measure is that it is not affected by the size of the majority class. Additionally, accuracy (Acc) is also obtained. Results and Discussion This section presents the results obtained in each of the experiments described in Section 4. The values are expressed as percentages (%). Predicting All Classes This experiment is performed on the entire dataset to predict the 3 classes previously defined. The system created is capable of detecting the Unrelated class with a high level of precision and achieves notably good results in the Compatible and Contradiction classes. Table 3 presents the results. The results obtained in the Unrelated class indicate that the system is capable of detecting these types of examples with an excellent F1m, corroborating the results obtained in the literature on this type of semantic relation between texts [30]. The other two classes have room for improvement, by using, for instance, external knowledge. A future line of work would consist of including resources that detect antonyms and synonyms in line with [31] for the purpose of improving the results of the Contradiction class. Furthermore, including syntactic and semantic information could improve the detection of other more complex contradictions, such as structural ones, without the need for such large datasets.
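The fine-tuning setup described above can be sketched as follows. This is a minimal illustration assuming the simpletransformers ClassificationModel API, a hypothetical BETO checkpoint name, and a hypothetical dataframe layout; it is not the authors' exact training script.

import pandas as pd
from sklearn.metrics import accuracy_score, f1_score
from simpletransformers.classification import ClassificationModel, ClassificationArgs

# Headline/body pairs with one of three labels: 0 = Compatible, 1 = Contradiction, 2 = Unrelated.
# The files and their columns (text_a = headline, text_b = body, labels) are assumptions.
train_df = pd.read_csv("es_contradiction_train.csv")
test_df = pd.read_csv("es_contradiction_test.csv")

args = ClassificationArgs(
    max_seq_length=512,       # hyperparameters reported in the paper
    train_batch_size=4,
    learning_rate=2e-5,
    num_train_epochs=3,
    overwrite_output_dir=True,
)

model = ClassificationModel(
    "bert",
    "dccuchile/bert-base-spanish-wwm-cased",  # assumed BETO checkpoint name
    num_labels=3,
    args=args,
    use_cuda=True,
)

model.train_model(train_df)

# Predict sentence pairs and report the macro-averaged F1 (F1m) and accuracy used in the paper.
pairs = [[a, b] for a, b in zip(test_df["text_a"], test_df["text_b"])]
preds, _ = model.predict(pairs)
print("F1m:", f1_score(test_df["labels"], preds, average="macro"))
print("Acc:", accuracy_score(test_df["labels"], preds))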
K-Fold Cross-Validation A k-fold cross-validation experiment aims to estimate the error and select the hyperparameters of the model [29]. This is achieved by training and testing the model on all the data available for training. Table 4 shows the results of the cross-validation for each fold. The experiment conducted with our best fine-tuned model obtains a mean accuracy of 88.94% and a standard deviation of 1.234%. The predictions of the contradiction classification model on the test set should therefore have an accuracy close to the mean obtained in the cross-validation, because the standard deviation is very low. Furthermore, the training and test sets of ES-Contradiction are very similar, as they were formed by splitting the original dataset. Detecting Contradiction vs. Compatible Information In this experiment, the Unrelated class is removed from the ES-Contradiction dataset to measure the accuracy of the approach in terms of distinguishing between compatible and contradictory information, assuming that the information is related. The results are shown in Table 5. The approach obtains similar results in both predicted classes. This is due to the quality of the training examples and the balanced number of examples from each class in this dataset. As indicated in the discussion of the first experiment, the results for predicting classes could be improved by introducing external semantic information, similar to the introduction of SRL [9] and the use of WordNet relations [8], both of which improve the results of deep learning models. Detecting Specific Types of Contradictions This experiment aims to analyze the detection capability of the approach by contradiction type. Table 6 shows the results obtained exclusively for the detection of contradiction types. The structural contradiction class (Con_Str) is the one that obtains the lowest accuracy and F1m. This contradiction type is considered one of the most complicated to detect compared with the other contradictions [3], which is in line with our results. In addition, due to the scarcity of training examples, the Con_Str class contains the lowest number of examples in this dataset, so the model learns comparatively more from the better-represented classes. It is highly likely that contradictions such as the structure contradiction need external semantic knowledge to improve detection results. Comparison between XNLI and ES-Contradiction In order to demonstrate the generality of our proposal, a series of experiments was performed using the XNLI dataset and the ES-Contradiction dataset in different training-test configurations. The XNLI dataset is divided into training, development, and test sets. The training set is in English, while the development and test sets are available in 15 languages, including Spanish. To carry out this experiment, the machine translation of the training set into Spanish and the Spanish test set were used. Table 7 presents the results of each trained system. The best results are highlighted in italic. The models in rows 1 and 2 are trained using the XNLI training set, the difference being that the first predicts the ES-Contradiction test set and the second the XNLI test set. The prediction results are quite close for both of them, but the Contradiction class is detected with a higher accuracy and F1m.
Comparing rows 1 and 4, with our dataset (containing the four types of contradictions) as the test set, the system trained on our dataset is, as expected, substantially better than the system trained on the XNLI training set. This result indicates that the XNLI dataset does not manage to cover all the contradictions contained in our dataset, even though it is more than 40 times the size of the ES-Contradiction training set and is composed of examples from different genres. The XNLI training set is exactly the same as the MultiNLI training set. It was developed manually by starting from a sentence taken from a non-fiction source and creating three sentence variants: definitely correct, might be correct, and definitely incorrect [7]. The procedure for creating the training set of the MultiNLI dataset follows an annotation guide that is sufficiently general to avoid bias in the dataset. However, this lack of specificity may cause a shortage of examples of various types of contradiction, resulting in an imbalance of contradiction types. Table 8 shows the accuracy by type of contradiction of the model trained in row 1 of Table 7. Table 8. Accuracy obtained for detecting each specific type of contradiction with the model in row 1 of Table 7 (columns: Con-Neg, Con-Ant, Con-Num, Con-Str). In the prediction of the types of contradiction (Con_Neg, Con_Ant, and Con_Num), this model achieves notably good results; in the Con_Neg class in particular they are very good (88.50% accuracy). However, in the prediction of the Con_Str class they are very low (48.48% accuracy); this result could be due to the lack of examples of this type in the XNLI dataset. Finally, the system trained on the ES-Contradiction dataset failed to obtain good enough results to predict the XNLI test set. This system only obtains an F1m of 32.28% when predicting the contradictions of the XNLI test set. This evidences the need to include new types of contradictions in the ES-Contradiction dataset, specifically those that would allow the creation of robust real-world contradiction detection systems and enable predictions with higher accuracy on the XNLI dataset. Unlike the XNLI dataset, the ES-Contradiction dataset in its first version could not be used to create a real contradiction detection system. However, the annotation of contradiction types has enabled us to detect which contradictions are more difficult to tackle and how models may need external knowledge to improve the results. By the future inclusion of other types of contradiction in our dataset (factive, lexical, WK, and more examples of structure contradiction), we could assess what kind of knowledge is useful to include in the reference models within this task, and thereby make progress towards the creation of a powerful system for detecting contradictions. Extending the XNLI dataset with the types of contradictions contained in the ES-Contradiction dataset is not an appropriate option, as the XNLI Spanish-language training set is automatically translated, which could introduce several biases into automatic detection systems. Furthermore, the currently annotated examples do not have the fine-grained annotation of our proposal.
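The label harmonisation used for this XNLI comparison can be sketched as follows. This is a minimal illustration with pandas and scikit-learn; the column names and label strings are assumptions for illustration, not taken from the released corpora.

import pandas as pd
from sklearn.metrics import classification_report, f1_score

# Drop the classes that do not overlap (XNLI Neutral, ES-Contradiction Unrelated)
# and rename the remaining XNLI labels to the ES-Contradiction scheme.
XNLI_TO_ES = {"entailment": "Compatible", "contradiction": "Contradiction"}

def harmonise_xnli(df: pd.DataFrame) -> pd.DataFrame:
    kept = df[df["label"].isin(XNLI_TO_ES)].copy()
    kept["label"] = kept["label"].map(XNLI_TO_ES)
    return kept

def report(y_true, y_pred):
    # Per-class precision/recall/F1 plus the macro-averaged F1 (F1m) used in the paper.
    print(classification_report(y_true, y_pred, digits=4))
    print("F1m:", round(f1_score(y_true, y_pred, average="macro"), 4))

# Example usage with a hypothetical XNLI test dataframe (columns: premise, hypothesis, label):
# xnli_test = pd.read_csv("xnli_test_es.csv")
# es_style_test = harmonise_xnli(xnli_test)
# report(es_style_test["label"], predictions)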
Unlike other datasets, in the ES-Contradiction dataset, contradictions are annotated with a fine-grained annotation that distinguishes the type of contradiction according to its specific characteristics. The contradictions currently covered in the dataset are negations, antonyms, date/numerical mismatches, and structural contradictions. However, covering all the contradiction types presented in [3] is the final goal of this research. The main purpose is to create an automatic process for classifying contradictions between texts, scaling from trivial to complex cases, and giving each contradiction a precise and customized treatment. This would avoid the need for large datasets that contain a multitude of examples for each of the contradiction types. The BETO model is used to create our system; BETO is a Transfer Learning model based on BERT. Five different experiments were performed with our system, indicating that it is able to detect contradictions of the four annotated types with an F1m of 92.47% and to distinguish the specific contradiction types with an F1m of 88.06%. As for the detection of each specific type of contradiction, our system obtains the best results for negation contradictions (98% F1m), whereas the lowest results are obtained for structural contradictions (69% F1m), corroborating that the best results are obtained for the classes with the largest number of examples and the simpler contradictions. Our results leave a considerable margin for improvement, which can be tackled with the inclusion of external knowledge that improves the prediction of contradiction types. Furthermore, regarding the generalization of the system, we compared the system trained on the XNLI dataset with the system trained on the ES-Contradiction dataset. The system trained on our dataset was not able to predict the XNLI test set with high accuracy, which indicates that with this first version alone it is not yet possible to create a powerful contradiction detection system. These negative results in the generalization tests of our corpus were expected, as it only covers four of the types of contradictions existing in texts. On the other hand, the system trained on the XNLI dataset managed to detect the contradictions in our dataset with high accuracy, especially the most common types of contradictions, which are also those with the largest number of examples. However, when analyzing by contradiction type, we found that the structure contradiction is not detected correctly. With this experiment, we found that the XNLI dataset, although much larger than ours, does not cover all types of contradictions, which indicates a need to deal with more complex contradictions in a more specific manner. The results obtained show that the created Spanish contradiction dataset is a good option for generating a language model that is able to detect contradictions in the Spanish language. This language model was capable of distinguishing the specific type of contradiction detected. In order to create a powerful contradiction detection system in Spanish, it is necessary to extend our dataset with other types of contradictions and add specific features. This will enable us to detect, with greater precision, not only structural contradictions but also other more complex contradictions that are possible in a real scenario for which the system has not previously been trained.
8,136.8
2021-03-30T00:00:00.000
[ "Computer Science" ]
Criterion for dry spot development in isothermal liquid film on a horizontal substrate The paper proposes a criterion for the development of dry spots in isothermal liquid films on a horizontal substrate and formulas for the gravity and surface tension forces applied, at a given contact angle, in the plane of the substrate on the liquid rim element surrounding a dry spot. The balance of these forces determines the further evolution of an initial small dry spot – whether it will disappear or develop into a large spot. Introduction Studies of the rupture of thin liquid films on solid surfaces are important for the modeling of multiphase flows in microfluidic devices, heat exchange systems, and the mining industry, and for biomedical applications such as the dynamics of the tear film in the eye [1]. When the interface approaches the wall, the film can rupture, resulting in the formation of a three-phase contact line. The overall dynamics of many types of multiphase flows encountered in various applications depends on these local phenomena [2]. In cooling systems based on thin-film flows driven either by gravity or by shear stresses at the liquid-gas interface, the formation of dry spots results in a significant reduction of the heat flux from the heated wall, as shown experimentally in [3]. Liquid films flowing under gravity over a localized heater on vertical or inclined flat plates rupture at sufficiently high heat fluxes generated by the heater [4,5]. An important aspect of the problem of liquid film rupture is the effect of substrate wettability [6]. In most theoretical models describing the isothermal rupture of a liquid film, the equilibrium contact angle appears as a major parameter determining the critical film thickness [7]. Experiments on the rupture of the liquid film in the absence of heating qualitatively confirm the dependence of the critical thickness of the film on the contact angle [8]. The equilibrium contact angle is used as a basic parameter in many models of rupture of the heated liquid film [9]. However, this is inconsistent with some experimental works [3]. It should be noted that the submicron liquid film formed at the interface of dry spots can make a major contribution to the total heat and mass transfer due to intensive evaporation [10,11,12]. In some cases, rupture of the film is accompanied by the formation of an ultrafine residual liquid film, which exists for a very limited time [5,13]. The current research provides a theoretical analysis of the effect of the equilibrium contact angle, surface tension, and liquid weight on the critical size of dry spots in an isothermal liquid film of a given thickness on a horizontal flat substrate. Analysis of the forces acting on the liquid rim surrounding the dry spot Let the motionless liquid film rest on a horizontal substrate under the action of gravity. Assume that at some point a dry spot, having the shape of a circle, appears in the film. The dry spot is usually surrounded by a liquid rim. Figure 1 shows a general view of a dry spot surrounded by a rim, and Fig. 2 presents a cross section of the rim by a plane passing through the center of the dry spot. Each element of the rim cut off by a central angle δφ is subjected to the action of gravity and of the force caused by surface tension on the curved surface of the rim. Let us write expressions for these forces acting within the plane of the plate. We introduce the following notation: ρ is the fluid density, r0 is the radius of the dry spot, g is the acceleration of gravity, and h is the thickness of the liquid film.
Gravity causes a static pressure in the film. The total action of the pressure forces on the rim element in the plane of the substrate is directed towards the center of the rim. The capillary force N acting on the rim element in the plane of the substrate can be represented in terms of the surface tension σ of the liquid and the principal curvatures k1 and k2 of the rim surface at the points of the contour L of the rim cross section. According to Meusnier's theorem [14], the curvature associated with the axial symmetry of the rim can be written in terms of r, the distance from a given point of the contour L to the axis of symmetry of the rim. The second curvature, related to the shape of the contour L, can be expressed through the angle ϑ = π/2 − ψ between the substrate and the tangent to the contour L, where R(ψ) is the radius of the local curvature of the contour L. The contribution to the total force from the surface tension at the solid-liquid interface can be shown to be zero. It is natural to assume that the contour L is a smooth curve with a continuous change (not necessarily monotonic) of the angle ϑ from −θ0 to 0, where θ0 is the contact angle. Then the second integral in equation (6) does not depend on the shape of L and is equal to (1 − cos θ0), so that equation (6) takes the form of equation (7), where K is an integral determined by the shape of the contour L. Condition of the critical state of a dry spot in the film Under the forces acting on the rim elements, the rim begins either to extend symmetrically, absorbing the liquid film, or conversely to shrink. We neglect the liquid flow within the rim, assuming that each element of the rim moves as a whole along the corresponding radius of the rim. Then the behavior of a small dry spot in the film is determined by the sign of the total force acting on the rim element along the radius of the dry spot. The spot disappears if this force is directed toward the center of the rim, and grows in size if the force is directed away from the center. Hence, the equality of these two forces is the condition for the equilibrium state of the rim. Substituting the expressions for the summands into this equality, we obtain the equation for determining the critical parameters of a dry spot in the liquid film or, in dimensionless form, equation (10), where Bo = σ/(ρgh²) is the Bond number. In the overwhelming majority of works dealing with the modeling of dry spots and of rivulets flowing along the interfaces of dry spots, the interfaces of the cross section of the rim or rivulet are approximated by circular arcs. Let us assume that the curved interface of the rim cross section is composed of two arcs of a circle with a radius R0 (Fig. 2). Thus, we assume that the cross section of the free interface of the film in the area of a dry spot has the shape of an arc of a circle with a certain radius R0, which is adjacent to the plate substrate at the angle θ0. The most visible characteristic of the initial perturbation is the rim height H, which is related to the radius R0. Let us assume the following scenario for the initial dry spot formation. A gas (steam) bubble appears in the liquid film directly on the wall. It is separated from the atmosphere by a thin film membrane. At some point, the membrane collapses and a round dry spot without a rim appears in the film.
In this case H = h and the cross section of the rim is a half segment of a circle with a radius R0 that is adjacent to the substrate at an angle equal to θ0. Once the cross-sectional shape of the rim is known, the K integral in the expression for N (7) takes a definite value. Substituting all the necessary values in equation (10), we obtain equation (14) for determining the critical parameters of an initial perturbation. Solving equation (14) with respect to r0, we see that, for a given geometry of the free interface of the rim, the critical value of the dry spot radius is determined by the liquid film thickness, the Bond number, and the contact angle. Figure 3 shows the dependences of r0/h versus θ0 for three values of the Bond number. For clarity, Table 1 shows the film thicknesses for water at the various Bond numbers selected for the calculations. Conclusions The critical ratios of the dry spot radius to the liquid film thickness are determined by the Bond number and the contact angle. At contact angles greater than θ0 = 70-80 degrees, the effect of the Bond number on the ratio r0/h is quite small, and the values of r0/h are also small. However, at moderate and small values of θ0, the role of the Bond number is extremely important, and the values of r0/h become substantially greater than unity, i.e. for small values of θ0 even large dry spots tend to disappear. Figure 1. Schematic diagram of the liquid flow. Figure 2. The cross section of the rim surrounding the dry spot. Figure 3. The calculated critical size of a dry spot in the liquid film depending on the contact angle and the Bond number.
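The competition between the two forces analyzed above can be illustrated numerically in the simplified limit in which the rim geometry and the axisymmetric curvature contribution K are neglected. In that limit the balance reduces to comparing σ(1 − cos θ0) with ρgh²/2, i.e. Bo(1 − cos θ0) with 1/2 for Bo = σ/(ρgh²) as defined in the text. The sketch below implements only this classical flat-film estimate for orientation; it is not the paper's full equation (14).

import math

def dry_spot_grows(sigma, rho, g, h, theta0_deg):
    """Flat-film estimate: the dry spot grows when the capillary pull at the
    contact line, sigma*(1 - cos(theta0)), exceeds the hydrostatic pressure
    force per unit length of rim, rho*g*h**2/2."""
    theta0 = math.radians(theta0_deg)
    capillary = sigma * (1.0 - math.cos(theta0))
    hydrostatic = 0.5 * rho * g * h ** 2
    return capillary > hydrostatic

def critical_thickness(sigma, rho, g, theta0_deg):
    """Film thickness at which the two forces balance in the flat-film limit:
    h_c = 2*sin(theta0/2)*sqrt(sigma/(rho*g)); thinner films develop dry spots."""
    theta0 = math.radians(theta0_deg)
    return 2.0 * math.sin(theta0 / 2.0) * math.sqrt(sigma / (rho * g))

# Example: water at room temperature (sigma = 0.072 N/m, rho = 1000 kg/m^3).
sigma, rho, g = 0.072, 1000.0, 9.81
print(critical_thickness(sigma, rho, g, theta0_deg=60.0))      # about 2.7 mm
print(dry_spot_grows(sigma, rho, g, h=1e-3, theta0_deg=60.0))  # 1 mm film: True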
2,076.2
2016-01-01T00:00:00.000
[ "Physics" ]
The Development and Validation of the Programming Anxiety Scale The main goal of the current study is to develop a reliable instrument to measure programming anxiety in university students. A pool of 33 items based on an extensive literature review and experts' opinions was created by the researchers. The draft scale, comprising three factors, was applied to 392 university students from two different universities in Turkey for exploratory factor analysis. The number and character of the underlying components in the scale were determined using exploratory factor analysis. After exploratory factor analysis, confirmatory factor analysis was conducted on the draft scale using a sample of 295 university students. Confirmatory factor analysis was carried out to ensure that the data fit the retrieved factor structure. The internal consistency coefficient (Cronbach's alpha) was calculated for the full scale and for each dimension for reliability analysis. For convergent validity, the factor loadings of the indicators, the average variance extracted, composite reliability, and maximum shared variance values were calculated. Additionally, convergent validity was tested through (1) comparison of the mean values of the factors and of total programming anxiety depending on gender and (2) correlation analysis of the factors, total programming anxiety, and students' course grades. The Fornell & Larcker criterion and the Heterotrait-Monotrait (HTMT) correlation ratio were utilized to assess discriminant validity. According to the analysis results, the Programming Anxiety Scale (PAS) comprises 11 items in two factors: classmates and self-confidence. The results also revealed that the PAS has good psychometric properties and can be used to assess the programming anxiety of university students. Introduction With the development of the internet and mobile technologies in the last two decades, breakthroughs have been experienced in many fields of computer science such as big data, artificial intelligence, blockchain, bioinformatics, wearable technologies, cloud computing, 3D printers, robotics, and virtual reality. The responsibilities of computer science, such as automating processes, facilitating communication, providing better products and services, and helping the world be more productive, have made human beings more dependent on software (Santos, Tedesco, Borba, & Brito, 2020). As a result, all developed and developing countries are required to raise qualified individuals who can maintain the software in use and produce practical solutions to new problems encountered in the future (Demirer & Sak, 2016). One of the conditions for the success of this task is to provide students with programming skills, which is considered one of the requirements of being a well-educated and knowledgeable citizen (Al-Makhzoomy, 2018; Kert & Uğraş, 2009). However, according to several studies, most computer science students regard programming courses as complicated and intimidating (Bennedsen & Caspersen, 2007; Connolly, Murphy, & Moore, 2009; Jenkins, 2002; Owolabi, Olanipekun, & Iwerima, 2014; Robins, Rountree, & Rountree, 2003; Wiedenbeck, Labelle, & Kain, 2004). Moreover, studies show that programming courses have high dropout and failure rates (Bennedsen & Caspersen, 2007; Luxton-Reilly et al., 2019).
Since learners' self-belief plays a fundamental role in intellectual development (Berland & Lee, 2011; Pajares, 1992), Jiang, Zhao, Wang, and Hu (2020) believe that this trauma happens when students lose their self-efficacy in programming, which negatively affects learning outcomes. Connolly et al. (2007) propose a cognitive model to explain how programming anxiety influences students' emotional, behavioral, and physiological reactions (see Figure 1). The cognitive model asserts that students' automatic thoughts are activated in programming situations and are directly influenced by their core and intermediate beliefs. Eventually, automatic thoughts affect their emotional, behavioral, and physiological reactions. According to Connolly et al. (2007), a fear of programming may arise from core beliefs in a student sensitive to programming anxiety. Then, students' intermediate thoughts could emerge as a fear of what other students might think about their performance and ability. Finally, automatic thoughts arise in programming situations and trigger negative thoughts and reactions. In addition to the cognitive model for programming anxiety, Rogerson and Scott (2010) also depict an iceberg model to explain the factors affecting fear of programming. According to the iceberg model, the fear of programming is induced by the nature of programming itself. Rogerson and Scott (2010) note that internal factors such as motivation, attitude, self-efficacy, and attribution often play a part in building negative perceptions of programming, while peers, teaching methodology, timing, lectures, and tutors constitute external factors. The number of studies on programming anxiety has risen dramatically in the last decade. Some of these studies examined factors associated with programming anxiety, while others investigated the impact of programming anxiety on student performance and motivation. According to Sinožić and Orehovaki (2018), the absence of programming experience, fear of programming, and a misperception of programming languages as very complex are all powerful determinants of programming anxiety among novices. Similarly, unfamiliar subjects in programming courses make students avoid programming, and programming makes them feel uncomfortable (Olipas, Leona, Villegas, Cunanan & Javate, 2021). According to studies, learners' programming anxiety levels increased as they were introduced to programming concepts and principles (Campbell, 2018; Dasuki & Quaye, 2016). Several studies have also connected programming anxiety to academic performance, perceived self-efficacy, encountering errors when developing programs, gender, peers, test anxiety, mathematics, and computer anxiety. For example, Olipas et al. (2021) found a negative association between participants' academic performance and programming anxiety in a study of 348 students. Hsu and Gainsburg (2021) and Wilfong (2006) explain that self-efficacy plays a vital role in performance in programming courses and that self-efficacy has a mediating effect on the relationship between anxiety and performance. The results of a systematic review of the literature conducted by Nolan and Bergin (2016) identify correlates of programming anxiety that include programming as a subject, test anxiety, computer anxiety (volume of computer usage), and frequent use of mathematics in coding.
Additionally, students' incapacity to debug their programs increases their programming anxiety (Dasuki & Quaye, 2016; Nolan & Bergin, 2016). Some researchers mention the effects of peers on programming anxiety. According to Nolan and Bergin (2016), learning to program in a laboratory with many peers can be stressful for programming students. Falkner, Falkner, and Vivian (2013) explored how collaborative practices in programming courses can cause fear and tension in learners. They concluded that working in groups prevented students from feeling comfortable in classes. There are also studies in the programming literature on the effects of gender on programming anxiety. According to Olipas and Luciano's (2020) study, female students show more programming anxiety than male students. Chang (2005) also explored a possible association between the perceived complexity of programming tasks and programming anxiety with 307 participants. According to the findings, there was a strong association between these two variables, indicating that as the perceived complexity of programming assignments increased, so did students' perceived programming anxiety levels. Many studies in the literature state that programming anxiety is one of the factors that cause students to fail and lose interest in programming courses. It is reported that programming anxiety is critical in determining students' success in a programming course (Connolly et al., 2007; Figueroa & Amoloza, 2015; Kinnunen & Malmi, 2006; Nolan, Bergin & Mooney, 2019; Owolabi et al., 2014; Scott, 2015). Regarding self-beliefs, Kinnunen and Simon (2012) assert that learners' self-beliefs develop from the experiences students have while they engage in programming activities rather than from the resulting quality of the programs they write. As a consequence of negative experiences and self-appraisals, learners do not take the time or have no motivation to program (Kinnunen & Malmi, 2006; Scott, 2015). Similarly, Maguire et al. (2017) assert that programming anxiety causes a lack of confidence and plays a crucial role in discouraging students from carrying out programming independently. The results of the study by Özmen and Altun (2014) show that while students with a low level of programming anxiety spend extra time on programming and write better-quality programs, students with a high level of anxiety devote limited time to programming practice and avoid learning programming. Similar results have been cited by Scott (2015), concluding that programming anxiety inhibits time spent practicing programming and decreases course participation (Bergin & Reilly, 2005). Scott and Ghinea (2014) investigated the possible adverse effects of programming anxiety on students' programming practice. The participants of the study were 239 university students. The findings revealed that students are frequently concerned when undertaking debugging activities. In the light of all these studies in the literature, it is essential to measure students' programming anxiety with reliable and valid instruments in order to determine their anxiety levels and help learners overcome their anxiety and frustration in programming courses. However, despite anxiety's critical role in programming, research on anxiety scale development has been scarce. Information about the existing measurement instruments is summarized in Table 1. As presented in Table 1, three scales have been developed primarily to measure programming anxiety. All of these scales are based on self-reported data.
In addition to these scales, it was noted that computer anxiety or information technology (IT) anxiety scales were adapted for measuring programming anxiety in several studies (see Olipas & Luciano, 2020; Scott & Ghinea, 2014; and Orehovacki, Radosevic & Konecki, 2012). Furthermore, Demir (2021) recently adapted Choo and Cheung's (1991) programming anxiety scale into Turkish. Purpose of the Study Studies show that reducing anxiety can enhance academic performance and achievement (Hattie, 2008). The same is true when it comes to improving the efficiency of programming courses. It is vital to identify learners' programming anxiety and work closely with students with high anxiety in order to maximize the learning outcomes of programming courses. In this sense, there is a need for reliable measurement tools designed to measure programming anxiety so that meaningful conclusions can be drawn from the analysis. As a result, the current research aims to create a valid and reliable tool to measure programming anxiety in university students. Method The Computer Programming Anxiety Scale was developed and validated in three phases, illustrated in Figure 2. In summary, the dimensions of the draft scale were identified and the item pool was generated in the first phase. In the second phase, content and face validity were assessed. In the last stage, exploratory and confirmatory factor analysis was conducted, and construct validity was evaluated. Phase 1: Identifying dimensions & item generation Clark and Watson (1995) recommend beginning scale development by clearly conceptualizing the target construct and clarifying its breadth and scope. The researchers conducted a comprehensive literature review and content analysis to identify different dimensions of programming anxiety. In this respect, models and explanations related to programming anxiety were examined to develop a clear conceptualization. Furthermore, related constructs including computer anxiety, math anxiety, and test anxiety were investigated. The Computer Programming Anxiety Scale (Choo & Cheung, 1991) was used to identify programming anxiety dimensions. At the same time, scales developed to measure students' anxiety, such as programming anxiety, computer anxiety, test anxiety, math anxiety, and foreign language learning anxiety, were investigated. Based on the studies on programming anxiety, three dimensions were proposed: (1) classmates, (2) self-confidence, and (3) errors. The "Classmates" subscale measures students' anxiety in the presence of more proficient students. The "Programming confidence" subscale measures students' feelings of inadequacy while programming. The "Errors" subscale measures students' anxiety when confronted with errors during programming. Next, a pool of 33 items was constructed to capture negative emotions during program development and debugging. The rationale for including as many items as possible in the draft scale was that the number of items at the start should be twice as many as in the final scale (Nunnally, 1994). To obtain precise and unambiguous items that reflect the specified conceptual definitions, the item wording rules suggested by Carpenter (2018) were applied. This cyclical item development procedure yielded a total of 33 items, each rated on a 5-point scale from 1 ("never true") to 5 ("always true"). In this regard, "seldom true" was scored as 2, "sometimes true" as 3, and "often true" as 4.
Phase 2: Development of the scale 2.2.1 Content validity The content validity of the draft scale was tested by interviewing five experts, three of whom were from the field of instructional technology, one from the field of Turkish language, and one from the field of psychological counseling and guidance. An expert opinion form was created in this phase. The experts were requested to rate each scale item using this form on a four-point rating scale (1 = not relevant; 2 = item requires so much revision that it is no longer relevant; 3 = item is suitable but needs minor changes; 4 = highly relevant). The data gathered from the expert opinions were used to quantify the content validity process and calculate the Content Validity Index (I-CVI; Polit, Beck, & Owen, 2007). The I-CVI for each item was calculated by dividing the number of experts who rated the item 3 or 4 by the total number of experts (Lynn, 1986). Nine items with an I-CVI score of less than one were excluded from the draft scale using Lynn's (1986) criteria. In addition, based on the experts' recommendations, two of the retained items were revised to simplify the language. This operation yielded 24 items in the final pool (classmates: 8 items; self-confidence: 9 items; errors: 7 items). Table 2 depicts the item pool of the draft scale. Face validity The questionnaire's face validity was assessed qualitatively. To this end, nine college students enrolled in a programming course were interviewed face to face, and the participants rated the items for clarity and relevancy. Despite some minor errors, all of the interviewees concurred on the clarity and comprehensibility of all of the items. Translation of the Scale The Programming Anxiety Scale items were created and written in Turkish at first. The data were collected using this original scale. The translation of the original scale into English was carried out after the data collection process. A mixed translation strategy utilizing the back-translation method and the committee approach, distinct from that of Jones, Lee, Phillips, Zhang & Jaceldo (2001), was used in the translation process. The researchers initially translated each item in the original version into English. Next, three Turkish/English bilingual professors thoroughly inspected each translated item. With the help of this bilingual group, any necessary modifications to problematic items were made. The bilingual experts agreed on the equivalence of the translated and original versions of the scale.
Item8 I am concerned about not being able to write a program and being ridiculed by my classmates.
Self-confidence
Item9 I think I do not understand programming well.
Item10 It makes me anxious to feel that I memorize programming topics instead of learning the logic.
Item11 It makes me anxious to feel that I quickly forget what I have learned in programming lessons.
Item12 I have doubts about creating the steps (algorithm) necessary for the solution while coding the program.
Item13 I have concerns about my programming abilities.
Item14 I feel confused when the program lines become complicated.
Item15 I don't trust myself in writing programs.
Item16 I get nervous when we talk about programming.
Item17 It scares me that there are too many topics to learn in the programming lesson.
Errors
Item18 I get worried when I cannot understand error messages.
Item19 The number of errors in my program makes me worried.
Item20 I am worried about encountering errors in my programs.
Item21 I feel worried when my program fails to run.
Item22 Debugging programs is a major worry for me.
Item23 I get worried about debugging my software over and over again.
Item24 It makes me worried to think that my codes will have bugs.
The draft scale was validated primarily in two phases. Since using the same data set for exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) is not generally accepted as the correct method in the literature (Fokkema & Greiff, 2017), data were collected from two distinct sample groups of university students (first and second sample). While the data from the first sample were used to investigate the scale's underlying factor structure in the EFA phase, the data from the second sample were used in the CFA phase to cross-validate the EFA results. All subjects were recruited using a convenience sampling method. Analytical Strategy The PAS was validated following the scale development recommendations of Boateng, Neilands, Frongillo, Melgar-Quiñonez, & Young (2018). The number and character of the underlying components in the scale were determined using EFA, followed by CFA to ensure that the data fit the retrieved factor structure. Construct validity was then examined after a reliability analysis. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) and Bartlett's test were applied to assess the adequacy of the study group (Tabachnick & Fidell, 2001). The tests showed that the data were suitable for EFA. In addition, considering Hair, Black, Babin, & Anderson's (2010) suggestions on the cutoff values for factor loadings and communalities, these values were set at .50 and .30, respectively. Items that loaded on only one factor without any cross-loadings were kept. The research employed SPSS 22 software for EFA. For CFA, maximum likelihood estimation was adopted to estimate the structural parameters using AMOS 22 software. Exploratory Factor Analysis The data's appropriateness for factor analysis was assessed before conducting EFA. For this reason, Mahalanobis distance values were computed to identify probable multivariate outliers. Thirty-nine cases were eliminated from the analysis because their Mahalanobis values were above the critical chi-square value of 54.05 (df = 26, alpha = .001) (Pallant, 2007). In addition, the KMO was used to determine the adequacy of the sample size, and Bartlett's Test of Sphericity was used to determine whether the data were suitable for factor analysis. The KMO sampling adequacy metric was .951, higher than the acceptable value of .60. Bartlett's Test of Sphericity was also statistically significant (χ² = 3790.593, p = .000), demonstrating that the data were highly factorable (Pallant, 2007). In the first stage of EFA, principal axis factoring was employed for the 24 items with direct oblimin rotation. The direct oblimin rotation approach was chosen because the factors were expected to be correlated (Costello & Osborne, 2005; Gorsuch, 1983). After the EFA process, the correlation matrix was investigated for multicollinearity issues. Correlations in the .80s or .90s (Field, 2018) were examined, and six items (Item18, Item19, Item20, Item21, Item23, and Item24) in the "errors" factor with a correlation coefficient greater than .8 were excluded from the scale. Next, EFA was conducted again for the remaining 18 items. Four items (Item2 and Item6 in classmates and Item13 and Item16 in self-confidence) with high factor loadings on multiple factors were excluded (Burns & Machin, 2009; Howard, 2016). EFA was executed with the remaining 14 items.
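The EFA workflow just described (KMO and Bartlett's checks followed by principal-axis extraction with a direct oblimin rotation) can be sketched as follows. This is a minimal illustration using the open-source factor_analyzer package rather than the SPSS 22 software used in the study; the response file and column layout are hypothetical, and the "principal" extraction method is used here as an approximation of principal axis factoring.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file: one column per scale item, 5-point Likert responses per row.
items = pd.read_csv("pas_responses.csv")

# Factorability checks: Bartlett's test of sphericity and the KMO measure.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.4f}; KMO = {kmo_total:.3f}")

# Two-factor extraction with an oblique (direct oblimin) rotation.
efa = FactorAnalyzer(n_factors=2, method="principal", rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=["Classmates", "Self-confidence"])
print(loadings.round(3))
print("Proportion of variance explained:", efa.get_factor_variance()[1].round(3))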
Based on the cutoffs for the eigenvalues and inspection of the scree plot, a two-factor model was identified that explained 69.4% of the variance in programming anxiety. The two-factor model was confirmed by a parallel analysis with 5000 randomly generated data matrices through the Parallel Analysis Web Application (Patil, Surendra, Sanjay, & Donavan, 2017). These factors were labeled (1) Classmates and (2) Self-Confidence. Table 3 includes the complete list of factor loadings; for example, the item "Debugging programs is a major worry for me" loaded at .822. Internal consistency reliabilities (i.e., Cronbach's alpha coefficients) for the full scale and the subscales were .95, .90, and .94, respectively. Confirmatory Factor Analysis The 14-item Programming Anxiety Scale's two-factor model was subjected to CFA using the second sample data. Before the analysis, the data were screened for missing values and outliers. In this respect, 13 respondents were detected as unengaged with the scale, as evidenced by giving the same response to every item. Thus, 14 cases were deleted from the sample. Next, CFA was conducted with 282 cases using maximum likelihood estimation. After CFA, three items (i.e., Item1, Item9, and Item11) had standardized parameter estimates smaller than the recommended value of .50 (Hair et al., 2010). Furthermore, the AVE values calculated for the factors were below .50, indicating a lack of convergent validity. After removing these three items, the 11-item structure of the PAS was re-subjected to CFA. The diagram of the factor structure of programming anxiety with the new item codes (see Appendix) and the parameter estimates is given in Figure 2. Figure 2. Standardized Coefficients for the Two-Factor Model of the PAS. According to Hu and Bentler (1999), researchers employ numerous goodness-of-fit metrics to analyze a model. In this study, the chi-square goodness-of-fit test, the Root Mean Square Error of Approximation (RMSEA), the Standardized Root Mean Square Residual (SRMR), the Goodness of Fit Index (GFI), the Normed Fit Index (NFI), and the Comparative Fit Index (CFI) were employed to assess model fit (Brown, 2006; Browne & Cudeck, 1993; Hair et al., 2010; Kline, 1998). Table 4 presents the fit statistics for the confirmatory factor analysis. When Table 4 is analyzed, the chi-square value (χ² = 114.226, χ²/df = 2.79, p = .000) is found to be significant. The RMSEA value of .071 indicates a good fit (Brown, 2006; Browne & Cudeck, 1993). GFI, CFI, and NFI values greater than .90 indicate a good fit (Hair et al., 2010; Kline, 1998). The CFA results indicate that the structural model is a good fit. As shown in Fig. 2, each item loaded significantly on its particular dimension, and the loadings were relatively large (.50 and above). Assessment of Reliability and Validity To assess convergent validity, the standardized factor loadings, CR, and AVE were calculated (Hair et al., 2010). Cronbach's alpha values for the factors and the whole scale were calculated for reliability analysis (Taber, 2018). The criteria of factor loadings greater than .5, Cronbach's alpha values greater than .7, and AVE and CR values greater than .5 and .7, respectively, were taken into account (Taber, 2018; Hair et al., 2010). Table 5 illustrates that all constructs are reliable since they fulfill the above criteria. Furthermore, each construct's Cronbach's alpha exceeds the recommended value. The Cronbach's alpha value of the whole scale was calculated as .901, which fulfilled the reliability criterion.
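The reliability and convergent-validity quantities referenced above can be computed directly from standardized loadings and raw item scores, as in the minimal sketch below; the loading values used here are placeholders, not the study's estimates.

import numpy as np
import pandas as pd

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Placeholder standardized loadings for one factor (not the study's values).
classmates_loadings = [0.78, 0.81, 0.74, 0.80, 0.76]
print("CR :", round(composite_reliability(classmates_loadings), 3))
print("AVE:", round(average_variance_extracted(classmates_loadings), 3))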
Furthermore, convergent validity was assessed through (1) a comparison of the mean values of the factors and of total programming anxiety depending on gender (Table 6) and (2) a correlation analysis of the factors, total programming anxiety, and students' course grades (Table 7). As shown in Table 6, female students had more programming anxiety than male respondents. Gender differences in programming anxiety were also investigated using a t-test. The test results showed that while there was no significant difference between male and female students in the self-confidence factor (t = .767, p > .05), there was a significant difference in the classmates factor (t = 4.058, p = .000) and in total programming anxiety (t = 2.518, p = .012). The correlation analysis results (Table 7) revealed that the classmates (r = -.194) and self-confidence (r = -.315) factors, as well as total programming anxiety (r = -.284), were negatively associated with course grade (p < .01). All of these correlations were weak (Schober, Boer & Schwarte, 2018; Senthilnathan, 2019). The Fornell & Larcker criterion and the HTMT criterion were used to test the scale's discriminant validity. Table 8 summarizes the results for the Fornell & Larcker criterion. In Table 8, the square root of each AVE is given on the diagonal, and the correlation coefficients (off-diagonal) for each construct are displayed in the corresponding rows and columns. Fornell and Larcker (1981) state that the square root of the AVE values should be higher than the correlations between the components included in the analysis. As shown in Table 8, this condition was satisfied, and the model met the Fornell & Larcker criterion for discriminant validity. In addition to the Fornell & Larcker criterion, discriminant validity was assessed through the HTMT coefficient. The HTMT coefficient was calculated as .705 in this model. According to Henseler, Ringle, and Sarstedt (2015), the HTMT coefficient should be less than .90 if the components to be evaluated are hypothetically close to one another. The HTMT coefficient was found to be below this threshold. Discussion and Conclusion In computing education research, accurate measurement is critical (Scott, 2015). On the other hand, few measurement tools are available to computing education researchers (Scott & Ghinea, 2014). The current research fills a gap in the literature by developing and validating a measurement tool to assess the computer programming anxiety of university students. The development and validation of the programming anxiety scale in the current study were carried out in harmony with scale studies recently published in several fields (Nasir, Adil, & Kumar, 2021; Rosario-Hernández, Rovira-Millán, & Blanco-Rovira, 2022; Sun et al., 2022; Zarouali, Boerman, & de Vreese, 2021). EFA, CFA, reliability, and validity analyses resulted in a scale including 11 items, five items for classmates and six for self-confidence. The minimum obtainable score on the Programming Anxiety Scale is 11, while the maximum score is 55. As the score obtained from the scale increases, programming anxiety also increases. Choo and Cheung's (1991) Computer Programming Anxiety Scale served as a guide to develop the present scale. In the current study, no factor related to errors was found, while there was a factor for error anxiety in the study of Choo & Cheung (1991) and in Demir (2021), in which Choo & Cheung's (1991) scale was adapted into Turkish. Although seven items were included in the draft scale for this factor, six of these items showed high multicollinearity.
They were removed from the draft scale as a result of the exploratory factor analysis, and one item was kept. Although the inclusion of an item about error anxiety shows that debugging is a source of anxiety for students (Dasuki & Quaye, 2016; Nolan & Bergin, 2016), it is worth examining why errors did not emerge as a factor. Not having an "errors" factor may indicate that encountering errors is a source of anxiety regardless of the number of errors and the time spent on debugging, which were used as parameters in Choo & Cheung (1991) and Demir (2021). In other words, encountering errors in programs may exist as a single source of anxiety, regardless of the frequency of encountering errors, the number of errors, or the time it takes to debug. The fact that Demir's (2021) study does not contain any information about CFA makes it impossible to make inferences about whether the three-factor model shows a good fit in the Turkish version of the scale and to make comparisons about the error factor. Within the scope of the current study, the only item related to error anxiety was included in the self-confidence factor. One of the reasons for this result may be differences between novices and relatively more experienced individuals in how debugging is perceived as a process within programming activities. From this point of view, while debugging may be perceived as an independent process by novice programmers, it may be perceived as an integral part of programming and an element of self-efficacy perception by more experienced programmers. Considering the high degree of relationship between self-confidence and self-efficacy perception (Blanco et al., 2020; Malureanu, Panisoara & Lazar, 2021; Tsai, 2019), it is not surprising that an item from the error factor is included in the self-confidence factor. It is evident that enhanced debugging skills develop a programmer's confidence, and fear of making mistakes may be related to programming skills (Ahmadzadeh, Elliman, & Higgins, 2005; Connolly et al., 2009; Nolan & Bergin, 2016). From this point of view, the programming experience of the participant group in Choo & Cheung's (1991) study was not explained in detail; the participant group was specified only as being at the grade 12 level. However, the participants in both the EFA and CFA stages of the current study were relatively more experienced with programming than the participants in Choo & Cheung's (1991) study. They had developed at least one project and were familiar with the debugging process. This result may indicate that perceptions of debugging are related to programming experience. Another reason may be the attitudes of the participants towards programming. Choo & Cheung's (1991) participant group consisted of grade 12 junior high school students. In contrast, the current study participants comprised university students who perceived programming as a profession. The students' awareness that the programming profession will include debugging may have caused them to perceive debugging as a personal competence and an aspect of their career. The last reason for this result may be the software development environments (Integrated Development Environments, IDEs) and the resources and materials available for debugging. Scratch was used in Demir (2021) as the program development environment. In contrast, Visual Studio was used in the current study. The nature of the errors encountered in Scratch and Visual Studio differs. The IDE used by Choo & Cheung (1991) was not specified.
However, the debugging resources available in the 1990s and those available today are very different. Today, the internet is used to interpret error messages, and other people's previous experiences are drawn upon for solutions. Nowadays, there are even IDEs that translate error messages into the user's native language. Therefore, the characteristics of IDEs and the available resources may alter the perception of debugging anxiety. The validity of the developed scale was further examined by comparing our results with those of studies in the literature examining the relationship between programming anxiety and theoretically related variables. In the current study, female students had more programming anxiety than male respondents. The t-test results also revealed that while there was no significant difference between male and female students in self-confidence, there was a significant difference in the classmates factor and in total programming anxiety. This result is consistent with Olipas and Luciano's (2020) study. Similarly, consistent with former findings (Connolly et al., 2007; Figueroa & Amoloza, 2015; Kinnunen & Malmi, 2006; Nolan et al., 2019; Owolabi et al., 2014; Scott, 2015), the factors of the PAS and total programming anxiety were correlated with students' course grades. These results indicate that the developed scale is valid and reliable. Another issue relates to the fact that this scale was developed specifically for programming anxiety. Since it was designed specifically to measure programming anxiety, it differs from scales adapted to programming anxiety from related constructs, such as computer anxiety, computer attitude, and IT anxiety (Chang, 2005; Olipas & Luciano, 2020; Owolabi et al., 2014). There were few instruments for assessing programming anxiety in the literature, and the current study offers a psychometrically sound scale. With the help of the present scale, students' programming anxiety levels can be measured, methods and techniques that can reduce students' anxiety can be developed, and special attention can be paid to students with high programming anxiety. In addition, situations that increase programming anxiety in students can be investigated. The PAS developed in the current study is recommended for study groups that have some experience in creating, coding, and debugging programming projects. Future research may concentrate on different groups of college students from various cultures and countries.
7,053.2
2022-01-01T00:00:00.000
[ "Psychology", "Computer Science" ]
College English Teaching Viewed from the Perspective of Intercultural Communication Foreign language teaching and learning is unlikely to be performed effectively without an appropriate understanding of its specific culture, and the loss of intercultural awareness will lead to a negative influence on FLT. This article proposes the importance of cultural input in FLT, analyzes the factors behind failure in the cultivation of students' capacity for intercultural communication, and finally puts forward some methods and strategies to resolve these issues. Language is the most important communication tool for human beings, and is an important component of culture. The application of a language is influenced by its cultural background to a certain degree. When people communicate with a language, it always involves other cultural factors beyond the language itself, including social system, customs and habits, values, life style and norms of behavior, etc. Hence, the learning of a language cannot go without the understanding of its culture, and only when the teacher goes deep into the cultural connotation implied by a language, that is, "the deep structure of culture" (covert culture) (Gu Jiazu, 2000), can he better guide students to generate a desirable communication schema, understand the cultural information contained in the text in the process of learning and ultimately improve their intercultural communication competence. Differences between Chinese and western culture Similar to language, culture cannot be innately acquired, but has to be learned; however, lack of understanding of English culture and cultural differences is the common weakness of Chinese students. Let's have a look at the following conversations between Chinese students and foreign teachers: Example 1: Chinese student: Hi, Jack, where are you going? Foreign teacher (feeling unhappy): I wish he weren't so curious about where I am going. Example 2: Chinese student: Let me help you with this bag. You're old. You must be tired. Foreign teacher: It's OK. I can manage. I'm not that old. Example 3: Foreign teacher: Your English is very good! Chinese student: No, no, my English is very poor. Foreign teacher (thinking): I am an English teacher. I should be able to make the right judgment. From these conversations, we can see that although the language employed by the student is absolutely correct, his communication with the foreign teacher is obstructed as a result of his lack of understanding of cultural differences, his mechanical copying of Chinese linguistic habits and his inappropriate use of the language. Therefore, language teaching is by no means confined to grammar teaching. Even if a sentence is correct in terms of grammar, it may still cause awkwardness or misunderstanding if one does not understand its cultural connotation and its implied cultural background, and may even lead to difficulties in communication caused by non-linguistic factors. Therefore, in English teaching, a teacher should duly introduce to students the cultural background knowledge and awaken their intercultural awareness. For instance, in China, we call elderly people "lao" (old), which is an expression of respect, whereas in western cultures, "old" means the last days of one's life. Thus, "old" is a taboo for western people, who do not yield to age. For this reason, in British and American society, "senior citizens" is usually used as a euphemism to refer to the elderly.
It is true that people with different cultural backgrounds have points of coincidence in terms of world outlook, values, behavior and ways of thinking, but it is equally true that they have more differences. For example, the attached English and Chinese cultural meanings are basically the same when referring to the braveness of the lion, the docility of the sheep, the slyness of the fox, the slowness of the tortoise and the greediness of the swine. However, in understanding the habitual nature of the mouse, the mule and the night owl, the attached English and Chinese cultural meanings are widely divergent. In Chinese, we have such idioms as "look like a mouse" and "as shortsighted as a mouse", etc., which are used to describe furtiveness and a narrow vision, whereas in English, the "mouse" can refer to women and bashful people. In Chinese, the night owl can stand for a portent of bad things to come, whereas in English, the night owl is a bird of intelligence. To realize the similarities and the differences in these aspects is of special importance to language learning. For example, the commercial economy in western society was separated from the patriarchal society of the clan family at an earlier stage, and was transformed into an indenture-based society founded on property relations, characterized by a respect for personality, independence, equality, competition, freedom, personal privacy, and individualism. Thus, in English, "I" is always capitalized wherever it is placed. Furthermore, the traditional way of thinking in western culture is based on logical analysis and inference, and pursues accurate cognition, whereas the Chinese traditional way of thinking is based on intuition and synthesis, focuses on a grasp of targets from an overall perspective, and lays particular stress on comprehensive thinking and thinking in images. All these differences lead to differences between Chinese and English in terms of discourse and text structure. That is, western people are inclined to use linear thinking when they speak or write, tend to get straight to the point from the very beginning, and then give examples to demonstrate their point. By contrast, Chinese tend to state reasons, conditions and background, and then come to the point. In view of these factors, in English learning, Chinese students should pay attention to the differences between the two cultures as well as the differences in their modes of discourse so as to avoid communication failure.
Misunderstanding in the connotative meanings of vocabulary In Chinese and western culture, there is quite a lot of vocabulary with similar conceptual meanings but widely divergent connotative meanings. Such vocabulary is established by usage in the life of people with different cultural backgrounds, and has fixed associative meanings which reflect the national character or cultural color. For example, (1) vocabulary concerning animals. In Chinese culture, the "crane" is the symbol of longevity. In traditional Chinese painting, in most cases, the "crane" appears simultaneously with the pine, which is termed "crane and pine for longevity". However, for British and American people, the crane cannot arouse a similar association, and even if there is any association, it is probably with the "crane" (a large machine that moves heavy things by lifting them in the air), although in English, "crane" (the machine) and "crane" (a kind of large bird with a long neck and long legs) are homonyms. (2) Vocabulary concerning color. Take the word "blue" as an example. In English, blue is usually connected with the meaning of being sad or depressed, and the "blues" is a kind of music that is slow and sad, while "in a blue mood" means "in a sad or depressed mood". In addition, the word "blue" can also lead people to associate it with aristocracy, because "blue-blooded" means "of aristocratic birth". Neither of these connotative meanings of the word "blue" exists in the Chinese expression for "blue". Besides, the connotative meaning refers to the meaning beyond the conceptual meaning, and is usually linked with the natural instincts and characteristics of objective things. For instance, "idealism" has two meanings in English: one is a philosophical term without any commendatory or derogatory meaning, whereas in Chinese, "idealism" has a derogatory sense. The other meaning of "idealism" in English is "the behavior and beliefs of someone who has ideals", which can have either positive or negative connotations, whereas in Chinese, "idealism" usually has the meaning of being divorced from reality, with a derogatory sense. In intercultural communication, because the two parties participating in communication are short of understanding of the social and cultural tradition of the other party, and because people with different cultural backgrounds have their particular conversational norms or rules, people tend to employ automatically and unconsciously their own way of speech, which may cause misunderstanding by the other party, and even lead to conflicts. This is termed pragmatic failure by linguists. It is quite likely that westerners are tolerant of some grammatical mistakes we make in the process of communication, but the outcome of pragmatic failure is more serious. One who speaks fluent English is likely to be mistakenly believed to be familiar with the cultural background and value concepts of this language, and his pragmatic failure is sometimes interpreted as an intentional speech act. On the other hand, "if westerners are aware of the secret about the deep structure of national culture hidden behind such a sort of problems, it is believed that they will not be that averse to such questions by Chinese, since mutual friendly expressions are welcomed at any time" (Gu Jiazu, 2000).
To activate cultural schema related to the text The cultural schema discussed in this article can be understood as cultural knowledge beyond the text, including the knowledge structure constructed by cultural background, local customs and practices and concepts of value, etc., and it plays a crucial role in language understanding and text reading. Therefore, teachers should activate students' cultural schema according to the subject matter and content of teaching materials so as to better understand the content of the text and grasp the language skills. Teachers should take the following aspects into consideration. To activate background knowledge The present College English teaching materials published in China are mostly original, and they also contain many intercultural elements, which create favorable circumstances for intercultural education. For example, when teaching the text "Longing for a New Welfare System" (Unit 3, Book 4, New Horizon College English), if a teacher gives his students a brief introduction to the background knowledge of the social welfare system in the U.S.A. before the illustration of the text, students may have a better understanding of the text. Similarly, the text "A Brush with the Law" (Unit 1, Book 3, College English) describes "I", a member of the Counter-Culture of the 1960s who, while waiting for entrance into the university, wandered about, was suspected by the police of committing a crime, and was arrested and put on trial, but in the end was set free by the court. As a lead-in technique, teachers can introduce relevant laws in Britain by introducing the Counter-Culture and its characteristics, and help students make clear the cause and effect of the conflict; therefore, students might gain a further understanding of the way of thinking and the value orientation of British people. To explore the connotative meanings of vocabulary Abundant cultural information is contained in vocabulary. The cultural awareness generated in the process of cultural development may affect the meanings of vocabulary, and endow the vocabulary with different connotative meanings in different languages. Here are some examples. In the text "What is Intelligence, Anyway?", the renowned American writer Isaac Asimov had a discussion on the relativity of intelligence. He said that when anything went wrong with his car, he always hastened to his car-repairer, watched him anxiously as he explored its vitals, and "listened to his pronouncements as though they were divine oracles". To British and American people, oracles often bring such an association: the oracle of Apollo at Delphi, which refers to the oracle released by Apollo at Delphi in the Greek myths, meaning a quite smart and reliable forecast, or an obscure and ambiguous inference. In contrast, although the Chinese expression "receiving an imperial edict" also expresses a respectful state of mind, it conveys less devotion and more reverence and awe.
Besides, there are some figures of speech that make non-native speakers confused, although they reflect the culture of a nation from different perspectives. For example, in the textbook of College English, there are expressions such as "have butterflies in one's stomach" and "the last straw that breaks the camel's back", and other idioms, phrases, and proverbs, which all have their origins. In the process of teaching, teachers should explore the connotative meanings of vocabulary through the explanation of such expressions, so as to enable students to experience the elegance of foreign culture and to strengthen their interest in learning. To provide students with practical skills Practical skills, such as writing skills, are what students need to acquire for use after graduation, and are also what they relatively lack. Before writing practice, teachers may introduce such skills by referring to the language used in the text and the culture reflected in the text, so as to meet students' needs. For instance, when teaching the text "My First Job", teachers may introduce some skills about how to write advertisements, application letters and resumes, etc. Due to the differences in culture, the writing of advertisements and application letters is not simply a matter of word-for-word translation between languages. Practical texts in Chinese and English are distinguished in terms of content and ways of expression. For example, Chinese advertisements are used to narrating the details and heightening the atmosphere, whereas English advertisements tend to go straight to the subject matter in brief language. What is more, "blowing one's own trumpet" is something that should be avoided in Chinese culture, whereas it becomes necessary for English application letters, which are meant to show that one is full of confidence, rich in experience and a qualified candidate. Apart from the instruction in language points, teachers may demonstrate several example texts and ask students to design corresponding advertisements, application letters and resumes according to different text contents, and create situations for conversations or interviews. They may also ask students to translate authentic advertisements in newspapers from Chinese to English or from English to Chinese, and write application letters that conform to the cultural norms of the target language, to practice their practical skills and deepen their understanding of foreign culture. To improve the comprehensive quality of teachers in an all-round way Teachers play an irreplaceable role in the cultivation of intercultural communication competence. Some scholars once vividly proposed the formula: language + culture + teacher (catalyst) = communication competence of language (organic compound). Just as the Chinese saying goes, "An accomplished apprentice owes his accomplishment to his great master". An English teacher should never settle for being familiar with grammar rules and having a command of a vast amount of vocabulary. Only if the teacher is equipped with strong intercultural communication competence and a solid foundation in Chinese and western culture can he be a qualified teacher. To reform teaching methods and cultivate students' communication competence What students should learn in an English class is not merely language forms, but also the vivid language itself as well as the covert culture in the language. In order to meet this requirement, reform and innovation should be carried out in such aspects as the concept of teaching, ways of teaching, etc.
Teachers should try every means to set up various atmospheres for language practice, such as role play, pair work, group discussion and seminars, etc., and encourage students to place themselves in such simulated situations as frequently as they can. In such a way, students may increasingly come to know the rules about what a person should say under specific circumstances, so that they may make better use of the English way of thinking and experience the cultural connotations expressed by the target language. To facilitate teaching by means of CAAL The development of computer technology opens to us a window on the world, and also provides us with more effective teaching methods and means. At present, an overwhelming majority of universities and colleges in China are equipped with multi-media facilities, and the texts, narrations, photos, diagrams, video clips and music provided by the Internet feature figurativeness, diversity, novelty, richness and interest, which can bring students' intercultural awareness and interest in learning into full play. Meanwhile, teachers should encourage students to read more British and American literary works and English newspapers after class, accumulate relevant cultural knowledge, and watch more films in the original, which will surely bring much benefit to students.
3,600.8
2010-08-17T00:00:00.000
[ "Education", "Linguistics" ]
Densification of Magnesium Aluminate Spinel Using Manganese and Cobalt Fluoride as Sintering Aids Highly dense magnesium aluminate spinel bodies are usually fabricated using pressure-assisted methods, such as spark plasma sintering (SPS), in the presence of lithium fluoride as a sintering aid. The present work investigates whether the addition of transition metal fluorides promotes the sintering of MgAl2O4 bodies during SPS. At the same time, such fluorides can act as a source of optically active dopants. A commercial MgAl2O4 was mixed with 0.5 wt% of LiF, MnF2, and CoF2 and, afterwards, consolidated using SPS at 1400 °C. Although MnF2 and CoF2 promote the densification as effectively as LiF, they cause significant grain growth. Introduction Magnesium aluminate spinel is a material of interest for optical applications due to its excellent mechanical and optical properties. MgAl2O4 has low density (3.58 g cm−3), a typical fracture toughness of 1.9 MPa·m0.5, and high optical transmissivity in the visible to mid-infrared ranges [1–7]. Moreover, the spinel structure can host optically active elements, e.g., transition metal ions [8–10]. Having a symmetrical cubic structure, transparent MgAl2O4 ceramics with high optical homogeneity can be fabricated by removing scattering centers, such as pores and impurities [7,11–13]. Fabricating highly dense MgAl2O4 is, however, difficult because of the slow diffusion of oxygen. Therefore, spinel is usually densified by two-stage sintering, i.e., pressure-less sintering followed by hot isostatic pressing (post-HIPing). Alternatively, spinel can be produced via single-stage pressure-assisted sintering, such as hot pressing (HP) or spark plasma sintering (SPS) [2,14–17]. Using the SPS method makes it possible to fabricate highly dense spinel bodies at a significantly lower temperature and in a shorter time as compared with the other methods; this enables suppressing grain growth and producing high-quality samples. Lithium fluoride (LiF) is a conventional sintering aid in processing MgAl2O4; it promotes the densification by producing a transient liquid at low temperatures and introducing cation defects into the spinel structure. Moreover, LiF removes carbon contamination by forming volatile CFx species [18–22]. However, lithium incorporation into the MgAl2O4 structure can have a detrimental effect on optical properties, especially when spinel is doped with optically active elements [22,23]. Transition metal fluorides that melt at low temperatures are other suitable candidates to be used as additives for the sintering of magnesium aluminate spinel. Such dopants provide a double benefit: they assist densification through the formation of a transient liquid and, at the same time, introduce an optically active element into the MgAl2O4 structure. In this study, magnesium aluminate spinel bodies were fabricated by spark plasma sintering of a commercial aluminate spinel powder using LiF, MnF2, and CoF2 as sintering additives. The effect of the sintering aids on densification behavior and final microstructure was investigated. Materials and Methods A commercial magnesium aluminate spinel powder, Baikalox S30CR (Baikowski, Paris, France), was used as the starting material in this study. The powder is characterized by a BET specific surface area of 26 m2 g−1 and a median particle size (d50) of 0.2 µm according to the data provided by the supplier.
The spinel powder contains minute amounts of impurities, mainly S (600), Na (41), and Ca (15), in wt. ppm. Lithium fluoride (LiF), manganese fluoride (MnF2), and cobalt fluoride (CoF2), ACS grade, >99.0%, purchased from Sigma-Aldrich (St. Louis, MO, USA), were used as sintering aids. MgAl2O4 ceramics doped with the sintering aids (0.5 wt%) were prepared by dispersing and mixing the powders in isopropanol (ACS grade, >99.0%) using an ultrasonic homogenizer (UW2200, BANDELIN, Berlin, Germany). Then, the mixtures were transferred to a rotary evaporator and dried. Ready-to-press (RTP) powder was prepared by passing the dried mixture through a sieve with a screen mesh of 500 µm. Samples were consolidated using a spark plasma sintering machine (Dr. SINTER SPS-625, FUJI, Tokyo, Japan). The RTP powder was filled into a graphite die with an inner diameter of ca. 12 mm. The powder was separated from the die by graphite paper placed between the powder, the punches, and the die wall. The die was then wrapped in a carbon felt and placed between the moving rams of the SPS. The sintering schedule consisted of a fast increase of the temperature to 600 °C in 3 min, followed by heating of the sample at a constant heating rate of 100 °C min−1 to 1400 °C, at which the shrinkage stops; therefore, the sintering processes were carried out with no dwelling time to avoid unnecessary grain growth. The sintering was carried out under vacuum (5 to 9 Pa). A constant uniaxial pressure of 75 MPa was applied above 800 °C. The displacement of the punches and the temperature were recorded during the whole heating/cooling step. The pellets' temperature was measured constantly by using an optical pyrometer focused on a hole drilled into the die wall. The coefficient of thermal expansion (CTE) of the system (e.g., graphite die, paper, and punches) was determined separately (i.e., in a run without the specimen) throughout the temperature range of this study (600 to 1400 °C) in order to account for the instrumental error. The sintered pellets were subsequently subjected to a heat treatment at 800 °C (heating rate 2.5 °C min−1) for 60 min in air in a muffle furnace to remove residual carbon from the surfaces. The bulk density and apparent porosity of the sintered bodies were measured using Archimedes' method in deionized water according to the ASTM standard (C329-88(2016)) [24]. All provided values are the means of at least 10 independent measurements. The melting temperatures of the sintering aids and their reactions with the spinel powder were studied using thermal analysis. The measurements were performed with a simultaneous thermal analyzer (STA 449 F1 Jupiter, Netzsch, Selb, Germany) in the Differential Thermal Analysis (DTA) configuration, using alumina crucibles in flowing N2 (20 mL min−1). Thermogravimetric (TG) analysis was performed simultaneously. Data were collected on ca. 100 mg of mixtures containing 10 wt% of sintering aids upon heating at a constant rate of 20 °C min−1 to 1350 °C. The samples' microstructure was examined using a scanning electron microscope (SEM; JEOL 7600F, JEOL, Tokyo, Japan) equipped with an energy dispersive X-ray spectrometer (EDXS, Oxford Instruments, Abingdon, UK). Small fragments were collected from the fractured surfaces of the samples, fixed on aluminium sample holders using conductive adhesive tape, and coated with carbon to prevent charging.
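A minimal sketch of the Archimedes bookkeeping behind the reported bulk density and apparent porosity values is given below. The weighings and the water density are hypothetical, and the textbook formulas shown here may differ in detail from the cited ASTM procedure.

```python
# Textbook Archimedes calculation for bulk density and apparent porosity.
# m_dry: dry mass, m_sus: mass suspended in water, m_sat: water-saturated mass (all in g).
RHO_WATER = 0.9982  # g cm^-3, assumed water density near room temperature

def archimedes(m_dry: float, m_sus: float, m_sat: float, rho_th: float):
    exterior_volume = (m_sat - m_sus) / RHO_WATER                    # cm^3
    bulk_density = m_dry / exterior_volume                           # g cm^-3
    apparent_porosity = 100.0 * (m_sat - m_dry) / (m_sat - m_sus)    # %
    relative_density = 100.0 * bulk_density / rho_th                 # % of theoretical
    return bulk_density, apparent_porosity, relative_density

# Hypothetical weighings for one MgAl2O4 pellet (not measured values from the paper)
rho_b, porosity, rel_rho = archimedes(m_dry=1.432, m_sus=1.031, m_sat=1.434, rho_th=3.58)
print(f"bulk density = {rho_b:.3f} g cm^-3, porosity = {porosity:.2f}%, relative = {rel_rho:.1f}%")
```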
Results Figure 1 shows the SEM micrograph of the magnesium aluminate spinel powder; the powder consists of submicron agglomerates comprised of smaller nanoparticles with a median diameter of 90 ± 15 nm. However, the specific surface area indicates a somewhat smaller primary particle size of approximately 64 nm. Similarly, Maca et al. examined the primary particle size of the same commercial MgAl2O4 powder, and reported an average particle size of 58 nm, by assuming that the primary particles have a spherical shape. The median particle size provided by the producer is, therefore, related to the size of the agglomerates [25]. Table 1 summarizes the measured density and porosity of the samples produced from the powder mixtures containing 0.5 wt% of additives and of the additive-free sample; the theoretical density of the samples was calculated using the density of magnesium aluminate spinel (3.58 g cm−3) and the density of the respective additive following the rule of mixtures. The densities of LiF, MnF2, and CoF2 are 2.64, 3.98, and 2.70 g cm−3, respectively. The measured density of the additive-containing samples is, within the range of experimental error, comparable to the density of the additive-free samples. While the residual porosity of the additive-free samples is almost zero, the doped samples are characterized by limited amounts of closed porosity. Such behavior can be related to the evaporation of the additives at high temperatures. Table 1. Relative density and apparent porosity of additive-free and transition metal fluoride-doped samples produced by spark plasma sintering (SPS) at 1400 °C (no isothermal dwell). The numbers in parentheses represent standard errors. (Columns: Sample; Relative Density (%); Apparent Porosity (%).)
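The theoretical densities underlying Table 1 combine the spinel density with that of the 0.5 wt% additive through a rule of mixtures. The sketch below assumes the inverse, weight-fraction form of that rule; whether the authors used exactly this form is not stated in the text.

```python
# Theoretical density of a doped body by a weight-fraction rule of mixtures:
# rho_mix = 1 / sum(w_i / rho_i), with the weight fractions summing to 1.
def mixture_density(weight_fractions, densities):
    return 1.0 / sum(w / rho for w, rho in zip(weight_fractions, densities))

rho_spinel = 3.58   # g cm^-3
additives = {"LiF": 2.64, "MnF2": 3.98, "CoF2": 2.70}
for name, rho_add in additives.items():
    rho_th = mixture_density([0.995, 0.005], [rho_spinel, rho_add])
    print(f"{name}-doped spinel: theoretical density = {rho_th:.3f} g cm^-3")
```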
Figure 2 shows the temperature dependence of the shrinkage and shrinkage rates of the powder mixtures containing 0.5 wt% of the additives and of the additive-free spinel powder during SPS; the shrinkage was determined by measuring the punch displacement upon heating at the constant heating rate of 100 °C min−1, between 600 °C and 1400 °C. The shrinkage rate was calculated point-by-point, using Equation (1): d(l/l0)/dt = [d(l/l0)/dT]·Ṫ (1), where l represents the linear shrinkage measured at the temperature T, l0 is the original length of the sample, t represents time, and Ṫ stands for the heating rate. The shrinkage curves are characterized by two main regions, a rapid shrinkage of ~5% occurring around 800 °C, followed by continuous shrinkage up to ~23%, after which the curve reaches a plateau. While the latter is related to densification by sintering, the former is attributed to the rearrangement of the powder particles when pressure was applied [26]. The densification of all samples starts at around 850 °C. The sintering aids clearly decrease the temperature at which the densification is completed. The shrinkage curve of additive-free spinel reaches a plateau, indicating the end of shrinkage, at 1350 °C, whereas the shrinkage of the samples doped with LiF, CoF2, and MnF2 stops at 1170, 1195, and 1250 °C, respectively. Moreover, the shrinkage rate of the doped samples is significantly higher than that of the pure spinel, particularly at temperatures higher than 1000 °C (Figure 2b).
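Equation (1) amounts to differentiating the relative shrinkage with respect to temperature and multiplying by the constant heating rate. The snippet below applies that arithmetic to synthetic displacement data; the initial specimen height and the sigmoidal curve are assumptions made purely for illustration, not the authors' recorded data.

```python
# Point-by-point shrinkage rate from punch displacement at a constant heating rate.
import numpy as np

heating_rate = 100.0 / 60.0           # °C s^-1 (100 °C min^-1)
l0 = 3.0                              # mm, initial specimen height (hypothetical)

temperature = np.linspace(600, 1400, 801)                               # °C
displacement = 0.69 / (1 + np.exp(-(temperature - 1100) / 60))          # mm, synthetic curve

shrinkage = displacement / l0                                           # Δl / l0
# d(Δl/l0)/dt = d(Δl/l0)/dT * dT/dt, with dT/dt the constant heating rate
shrinkage_rate = np.gradient(shrinkage, temperature) * heating_rate     # s^-1

print(f"maximum shrinkage: {shrinkage.max():.2%}")
print(f"peak shrinkage rate at {temperature[np.argmax(shrinkage_rate)]:.0f} °C")
```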
Figure 3 summarizes the results of the DTA and TG analyses of powder mixtures containing 10 wt% of fluorides in the temperature interval between 600 and 1350 °C. These were carried out as reference measurements elucidating the thermal processes in the fluoride-doped powders. The DTA curve of the LiF-doped sample is characterized by a sharp endothermic peak at ~830 °C attributed to chemical reactions and melting of lithium fluoride, as discussed below (Equations (6) and (7)). In contrast, the sample containing MnF2 exhibits no clear endothermic effect at the melting temperature of MnF2 (856 °C). The behavior of the CoF2-containing sample is similar, showing no thermal effect that could be attributed to melting of CoF2 (i.e., at ca. 930 °C). However, all samples exhibit an endothermic peak at 1240 °C, attributed to the eutectic melting of magnesium fluoride, indicating chemical reactions between the sintering aids and MgAl2O4 yielding MgF2, as pointed out in the following text. The TG curves show that the weight of the LiF sample decreases rapidly above 1050 °C, while the samples containing MnF2 and CoF2 exhibit slower weight loss in two steps: a slow decline above 850 °C followed by a rapid decrease above 1050 °C. The onset of weight loss can be correlated with the small endothermic effect on the DTA curves associated with melting of MgF2 (1263 °C). The observed weight loss was therefore associated with evaporation of MgF2 from the melt. On the basis of literature data, the vapor pressure of molten MgF2 reaches ~13 Pa at 1270 °C and ~130 Pa at 1434 °C, so its loss is expected to be significant. Figure 4 shows the X-ray diffraction patterns of the pellets produced by SPS at 1400 °C; the XRD pattern of the as-received spinel powder is also shown for comparison, and the experimental data are fitted by the model patterns obtained by Rietveld refinement. According to the XRD experiments, magnesium aluminate spinel is the only crystalline phase present in the samples; the sintered samples are characterized by sharp and narrow diffraction maxima, which imply that the sintering procedure (heating up to 1400 °C at a heating rate of 100 °C min−1, with no dwell time) increases the size of the coherently diffracting domains (crystallites). The XRD patterns were analyzed further using Rietveld refinement [27,28]. The lattice parameter of the additive-free spinel is estimated to be 8.0798 ± 0.0002 Å, while the lattice parameters of the samples doped with LiF, MnF2, and CoF2 are 8.0814 ± 0.0001 Å, 8.0833 ± 0.0003 Å, and 8.0833 ± 0.0001 Å, respectively. The incorporation of dopants into the spinel structure results in a slight increase of the lattice parameter due to the size mismatch between the doping cations and Mg2+ and Al3+ in the spinel crystal lattice.
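As a rough plausibility check on the refined lattice parameters, a single Bragg reflection of a cubic phase already fixes the cell edge. The snippet below shows that one-peak estimate; it is far cruder than the Rietveld refinement used here, and the radiation and peak position are assumed values.

```python
# Single-peak estimate of a cubic lattice parameter from Bragg's law.
import math

wavelength = 1.5406      # Å, Cu Kα1 (assumed radiation)
two_theta = 36.85        # degrees, hypothetical position of the (311) spinel reflection
h, k, l = 3, 1, 1

d_hkl = wavelength / (2 * math.sin(math.radians(two_theta / 2)))   # Bragg's law
a = d_hkl * math.sqrt(h**2 + k**2 + l**2)                          # cubic: a = d * sqrt(h^2+k^2+l^2)
print(f"a ≈ {a:.4f} Å")   # compare with the ~8.08 Å values from the refinement
```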
Figure 5 shows the fracture surfaces of the additive-free sample and of the samples doped with 0.5 wt% of LiF, MnF2, and CoF2. All doped samples exhibit significant grain coarsening. The LiF-doped spinel is, interestingly, characterized by a smaller grain size as compared with the spinel doped with MnF2 and CoF2. Bright spots observed on the fracture surfaces were studied by EDX. Figure 6 shows a typical EDX spectrum collected from a bright spot in a CoF2-doped sample. The spots also contain, along with the doping ions, a significant concentration of sulphur, implying that the sulphate impurities in the spinel powder reacted with the dopant, yielding sulphate phases during sintering. However, the content of sulphates was below the detection limit of X-ray diffraction, and the size of the sulphate inclusions was too small for them to be identified by EBSD. Figure 2 shows that the onset of densification of all studied samples occurs at a lower temperature than in conventional sintering. The application of pressure during spark plasma sintering influences densification in two ways. First, powder particles rearrange under pressure. Secondly, the densification mechanism is also affected, due to grain boundary sliding [24,27]. Consequently, the maximum in the densification rate of the SPS samples is achieved at a lower temperature as compared with that reported for conventional sintering of the same powder (1350 °C, S30CR, Baikowski) [29,30]. Discussion The LiF doping results in a higher densification rate, and larger grains are formed than in the additive-free spinel.
The interaction between LiF and MgAl2O4 above the melting point of LiF, 840 °C, and the formation of a transient liquid have been described previously [31,32]. The transient liquid enhances the densification through two main mechanisms: liquid redistribution facilitates particle rearrangement, and the fluorine-rich liquid enhances mass transport. Moreover, lithium aluminate spinel can form a solid solution with magnesium aluminate spinel and introduces structural defects. The introduction of structural defects, such as oxygen vacancies, facilitates the movement of ions, particularly oxygen ions, and, in turn, promotes the densification as well as the grain growth. With a further increase of temperature, the evaporation rate of the transient liquid accelerates (above ca. 1100 °C, Figure 3b) and the liquid phase is effectively removed from the system (Equation (4)): LiF:MgF2 (l) → LiF (g) + MgF2 (g). Consequently, the densification rate decreases. Finally, gaseous MgF2 reacts with LiAlO2, forming spinel again. Considering that the densification behavior of the MnF2- and CoF2-doped samples is similar to that of the LiF-doped samples, a similar sintering mechanism can also be expected for the transition metal fluorides. Surprisingly, although CoF2 and MnF2 have higher melting temperatures than LiF, the CoF2- and MnF2-doped spinel exhibits more extensive grain coarsening than the LiF-doped one. Interestingly, the DTA records show no endothermic effect that would indicate melting of the pure transition metal fluoride additives around their expected melting temperatures. Only a small endothermic effect corresponding to the melting of MgF2 implies chemical reactions between MnF2 or CoF2 and MgAl2O4 yielding transition phases. Due to the similar ionic radii of magnesium, manganese, and cobalt (r(Mg2+) = 72 pm, r(Mn2+) = 70 pm, and r(Co2+) = 75 pm), it can be assumed that the transition metal ions replace magnesium ions in the spinel crystal lattice, producing MgF2. The formation of a solid solution with different divalent cations within the spinel structure results in strain of the spinel structure, as well as the introduction of point defects, such as oxygen vacancies, as a result of hosting divalent ions in octahedral sites [7,33]. The TG results (Figure 3b) confirm that the weight loss of the MnF2- and CoF2-doped samples begins at higher temperatures than in the LiF-doped material (1250 °C vs. 1100 °C). The transient liquid is, therefore, present at the grain boundaries for a longer time, providing a faster diffusion path for the elements and resulting not only in efficient densification but also in more pronounced grain growth. This resulted in an increase of the median grain size from 0.8 µm in the undoped ceramic to 10.3 µm, 14.0 µm, and 11.6 µm in the LiF-, MnF2-, and CoF2-doped spinel, respectively, as shown in Figure 5. Apart from the presence of a transient liquid in the spinels doped with transition metal fluorides, the finer grain size of the LiF-doped samples can also be attributed to the Zener pinning effect of LiAlO2 precipitated at the grain boundaries [34]. Transition metal fluorides act as sintering aids during the densification of magnesium aluminate spinel and produce spinel structures containing optically active ions, e.g., Mn2+ and Co2+, that can be used in applications such as white LEDs or Q-switches [35,36].
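The Zener pinning argument can be made quantitative with the classical Zener estimate of the limiting grain size, which is not given in the paper and is quoted here only as the standard relation for particles of radius r present at a volume fraction f:

```latex
% Classical Zener estimate (standard textbook form, not taken from the paper):
% D_lim is the limiting grain size pinned by second-phase particles of
% radius r occupying a volume fraction f of the microstructure.
D_{\mathrm{lim}} \approx \frac{4\,r}{3\,f}
```

On this picture, fine LiAlO2 precipitates at the grain boundaries would lower the limiting grain size, consistent with the finer microstructure of the LiF-doped samples.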
Further studies are required to evaluate whether and how the addition of transition metal fluorides affects the densification and final properties of magnesium aluminate spinel ceramics. Conclusions Highly dense magnesium aluminate spinel bodies doped with LiF, MnF2, and CoF2 were produced using spark plasma sintering. Although the contribution of CoF2 and MnF2 to the densification of MgAl2O4 is more complicated as compared with LiF, they promote the densification almost as efficiently as LiF, despite the higher melting points of the transition metal fluorides. The MnF2- and CoF2-containing samples exhibit larger grains as compared with the LiF-doped spinel spark plasma sintered under the same conditions.
5,647.6
2019-12-24T00:00:00.000
[ "Materials Science" ]
miR-21 attenuates lipopolysaccharide-induced lipid accumulation and inflammatory response: potential role in cerebrovascular disease Background Atherosclerosis constitutes the leading contributor to morbidity and mortality in cardiovascular and cerebrovascular diseases. Lipid deposition and the inflammatory response are the crucial triggers for the development of atherosclerosis. Recently, microRNAs (miRNAs) have drawn increasing attention due to their prominent functions in the inflammatory process and lipid accumulation in cardiovascular and cerebrovascular disease. Here, we investigated the involvement of miR-21 in lipopolysaccharide (LPS)-induced lipid accumulation and inflammatory response in macrophages. Methods After stimulation with the indicated times and doses of LPS, miR-21 mRNA levels were analyzed by quantitative real-time PCR. Following transfection with miR-21 mimics or anti-miR-21 inhibitor, lipid deposition and foam cell formation were detected by high-performance liquid chromatography (HPLC) and Oil red O staining. Furthermore, the inflammatory cytokines interleukin 6 (IL-6) and interleukin 10 (IL-10) were evaluated by enzyme-linked immunosorbent assay (ELISA). The underlying molecular mechanism was also investigated. Results In this study, LPS induced miR-21 expression in macrophages in a time- and dose-dependent manner. Further analysis confirmed that overexpression of miR-21 by transfection with miR-21 mimics notably attenuated lipid accumulation and lipid-laden foam cell formation in LPS-stimulated macrophages, which were conversely up-regulated when miR-21 expression was silenced via anti-miR-21 inhibitor transfection, indicating that miR-21 is a negative regulator of LPS-induced foam cell formation. Further mechanism assays suggested that miR-21 regulated lipid accumulation through the Toll-like receptor 4 (TLR4) and nuclear factor-κB (NF-κB) pathway, as pretreatment with anti-TLR4 antibody or a specific inhibitor of NF-κB (PDTC) strikingly dampened miR-21 silencing-induced lipid deposition. Additionally, overexpression of miR-21 significantly abrogated the secretion of the inflammatory cytokine IL-6 and increased IL-10 levels; the corresponding changes were also observed when miR-21 expression was silenced, and these were impeded by preconditioning with TLR4 antibody or PDTC. Conclusions Taken together, these results corroborated that miR-21 can negatively regulate LPS-induced lipid accumulation and inflammatory responses in macrophages through the TLR4-NF-κB pathway. Accordingly, our research provides insight into how miR-21 counteracts bacterial infection-induced pathological processes in atherosclerosis, indicating a promising therapeutic prospect for the prevention and treatment of atherosclerosis by miR-21 overexpression. Introduction Atherosclerosis and its complications rank as the leading cause of death, representing nearly 29% of mortalities globally [1]. The formation of large atherosclerotic plaques and their subsequent rupture is the crucial mechanism underlying the onset of acute ischemic syndromes, including cerebral infarction, stroke, myocardial infarction, and sudden death [2–4]. It is commonly accepted that lipid-laden foam cell accumulation and inflammation in vessel walls are the hallmarks of the early stage of atherosclerosis and trigger a series of atherosclerotic complications [5]. Lipid deposition is characteristic of atherosclerosis, forming the lipid core and the earliest detectable lesion, the fatty streak.
It is known that increasing macrophage foam cell formation induces the production of a large lipid-rich necrotic core, followed by the rupture of the vulnerable plaque and subsequent thrombogenesis, a key trigger for acute cardiovascular diseases [6]. Blocking lipid deposition dramatically dampens atherosclerotic coronary lesions, indicating a potential target for atherosclerosis and cardiovascular events through the decrease of lipid levels [7,8]. Macrophages are believed to possess a pivotal function in lipid-laden foam cell formation and inflammation during atherosclerosis progression and plaque destabilization [9,10]. It is well known that macrophages can be activated by lipopolysaccharide (LPS) to take up oxidized low-density lipoprotein (ox-LDL), which is a necessary step for macrophage foam cell production and the subsequent fatty streak formation. As a component of Gram-negative bacterial cell walls, LPS has gradually been demonstrated to be associated with cardiovascular disease [11–13]. When apolipoprotein E (apoE)-deficient mice are injected with the endotoxin LPS, atherosclerotic lesion size is significantly increased [12,14]. Importantly, LPS can induce a macrophage inflammatory response and the secretion of abundant pro-inflammatory cytokines, which aggravate the progression of atherosclerosis and lead to the instability of vulnerable plaques. Chronic administration of LPS in ApoE−/− mice obviously increases the production of inflammatory cytokines (such as TNF-α, IL-1β, IL-6, and MCP-1) and enhances the development of atherosclerosis [14]. Treatment with melittin dramatically ameliorates LPS-induced atherosclerotic lesions by suppressing pro-inflammatory cytokines and adhesion molecules, suggesting an important anti-atherogenic strategy [15]. MicroRNAs (miRNAs) are known to be highly conserved, small non-coding RNA molecules (approximately 18-24 nucleotides), and represent a new class of gene regulators, which can interact with the 3'-untranslated region (3'-UTR) of a target gene to negatively regulate its transcription or translation. Emerging evidence has demonstrated that miRNAs exert prominent roles in the inflammatory process and lipid accumulation in patients with coronary artery disease [16–18]. For example, miR-147 can act as a negative feedback regulator for Toll-like receptor 4 (TLR4)-induced inflammatory responses [19]. Among these members, most research has focused on miR-21 because of its significant roles in the heart, tissue injury, inflammation, and cardiovascular diseases [20–22]. Recent research has confirmed a notable upregulation of miR-21 in atherosclerotic plaques, indicating a pivotal effect on plaque destabilization [23]. However, the function of miR-21 in the progression of atherosclerosis and vulnerable plaques remains unknown. In this study, we aimed to explore the effects of miR-21 on LPS-induced lipid accumulation and inflammatory responses in macrophages. Furthermore, the underlying mechanism involved in this process was also discussed. Cell culture and treatment The mouse RAW 264.7 monocyte/macrophage-like cell line was purchased from the American Type Culture Collection (ATCC, Manassas, VA). Cells were cultured at 37°C under 5% CO2 in DMEM supplemented with 10% FCS and 100 U/mL streptomycin-penicillin. Before stimulation with the indicated doses and times of LPS, cells were pretreated with anti-TLR4 antibody (10 μg/ml) or 30 μM of the NF-κB inhibitor PDTC for 4 h, prior to incubation with ox-LDL (50 μg/ml).
Cells from the third to fifth passage were used in this experiment. Transfection To specifically induce miR-21 expression in macrophages, the miRIDIAN™ miR-21 mimics were used. The miRIDIAN™ hairpin inhibitor was used to effectively silence the endogenous mature miR-21 function. The miR-21 mimics, scrambled control microRNA, anti-miR-21 inhibitor and anti-microRNA control inhibitor were obtained from Thermo Scientific (Lafayette, CO). For transfection, 0.4 nmol of microRNA mimics or anti-microRNA inhibitors was mixed with 15 μl of Geneporter 2 Transfection Reagent (GTS, San Diego) and then transfected into 1 × 10^6 cells for 6 h. After incubation with fresh medium for 48 h, cells were used for further experiments. miR-21 overexpression and inhibition were assessed using quantitative PCR. RNA extraction and quantitative real-time PCR After stimulation with LPS, the total RNA from cells was isolated using the mirVana™ miRNA isolation kit according to the manufacturer's instructions (Roche Diagnostics, Mannheim, Germany). To quantify the expression levels of miR-21 in cultured and transfected cells, TaqMan miRNA assay kits (Applied Biosystems, Foster City, CA) were used. Briefly, the obtained RNA was reverse-transcribed to synthesize the complementary DNA with the Oligo (dT) primer (Fermentas). Then, specific primers for miR-21 and sno202 were obtained from Ambion to perform the TaqMan assays according to the manufacturer's protocol. Additionally, sno202 was used as a normalizing control. The relative expression was calculated using the 2^−ΔΔCt method. Oil red O staining After stimulation with LPS (100 ng/ml) and ox-LDL for 24 h, the cultured and transfected macrophages were washed with PBS three times, followed by fixation with 4% paraformaldehyde/PBS for 15 min. Then, cells were rinsed with ddH2O, and the neutral lipids were stained using freshly diluted 0.5% Oil red O solution (Sigma) for 10 min at 37°C. Cells were then rinsed with water, and hematoxylin was used to label the cell nuclei. The Oil red O-stained lipids in macrophage-derived foam cells were morphologically evaluated by microscopy. Lipid assay by high-performance liquid chromatography (HPLC) The cellular lipids (total cholesterol, TC; cholesterol ester, CE) were analyzed as previously described [24]. Briefly, cells were rinsed with PBS three times and then lysed with 0.9% NaOH solution, followed by homogenization in an ice bath for 10 s. The BCA kit was used to evaluate the protein concentration; an equal volume of trichloroacetic acid was then added, and the mixture was centrifuged for 10 min. A standard curve was first constructed using stigmasterol, and the extraction procedure was then repeated. Then, the samples were re-suspended in 100 μl of isopropanol-acetonitrile (v/v, 20:80) for 5 min. Ultimately, all the samples were analyzed on an Agilent 1100 series HPLC (Wilmington, DE).
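The relative expression described above follows the 2^−ΔΔCt method with sno202 as the normalizer. A minimal sketch of that arithmetic is shown below; all Ct values are invented for illustration and are not measurements from this study.

```python
# Relative expression by the 2^(-ΔΔCt) method, with sno202 as the reference gene.
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # ΔCt in the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the untreated control
    dd_ct = d_ct_sample - d_ct_control            # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values: miR-21 vs. sno202, LPS-treated vs. control
fold_change = rel_expression(ct_target=24.1, ct_ref=20.0,
                             ct_target_ctrl=26.7, ct_ref_ctrl=20.1)
print(f"miR-21 fold change vs. control: {fold_change:.1f}")
```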
Western blotting Following three rinses with PBS, the total protein extracts of RAW 264.7 cells were obtained using RIPA lysis buffer (Beyotime, Nantong, China), followed by quantitative analysis of protein concentrations via the BCA assay (Pierce, Rockford, IL). For western blotting, the obtained protein was electrophoresed by SDS-polyacrylamide gel electrophoresis, and about 100 μg of protein was then transferred onto a polyvinylidene difluoride (PVDF) membrane in a semi-dry transblot apparatus. After blocking with buffer containing 5% nonfat dry milk in Tris-buffered saline with Tween (TBST) at 4°C overnight, the PVDF membrane was incubated with anti-TLR4 and anti-p65 NF-κB antibodies for 1 h at 37°C to probe the targeted proteins. After three washes with TBST, HRP-conjugated secondary antibodies were added for 1 h. The LumiGLo reagent (KPL, Gaithersburg, MD) was used to visualize the bound antibodies. The protein expression levels were normalized to β-actin. Enzyme-linked immunosorbent assay (ELISA) To analyze the levels of interleukin 6 (IL-6) and interleukin 10 (IL-10) in the transfected macrophages stimulated with LPS, an ELISA assay was used. Briefly, about 2 × 10^5 cells were seeded into 24-well plates and incubated at 4°C overnight. The transfected cells were stimulated with LPS for 24 h, and the concentrations of IL-6 and IL-10 in supernatants were measured using ELISA DuoSet Development systems according to the manufacturer's instructions (R&D Systems). Statistical analysis All assays were performed in triplicate and numerical results were presented as mean ± SEM. SPSS 11.0 was used to analyze the data. The statistical significance of differences between groups was analyzed by Student's t-test. A p value less than 0.05 was considered statistically significant. LPS induced miR-21 expression in macrophages LPS is known to be critical for the progression of atherosclerotic plaques [13]. To assess the expression levels of miR-21 in LPS-induced macrophages, qRT-PCR was performed. As shown in Figure 1A, an obvious upregulation of miR-21 mRNA levels was observed at 8 h post LPS stimulation. Moreover, as the duration of LPS stimulation was gradually increased, the mRNA levels of miR-21 were progressively up-regulated, indicating that LPS triggered a time-dependent increase in the expression of miR-21 mRNA in macrophages. After exposure to various doses of LPS (0, 50 and 100 ng/ml), the mRNA levels of miR-21 were about 3.4-fold and 6-fold over the control in the 50 ng/ml- and 100 ng/ml-treated groups, respectively (Figure 1B). Taken together, these results confirmed that LPS could trigger the expression of miR-21 in macrophages in a dose- and time-dependent manner. MiR-21 negatively regulated LPS-induced macrophage foam cell formation Numerous reports have confirmed that LPS can enhance the ability of macrophages to become foam cells, which is a pivotal trigger for atherosclerosis [14,25]. To investigate the function of miR-21 in LPS-induced foam cell formation, we overexpressed and silenced miR-21 in macrophages. After transfection with miR-21 mimics, a dramatic increase in miR-21 mRNA was observed in macrophages compared with the control group (Figure 2A). Moreover, the anti-miR-21 inhibitor notably reduced the expression levels of miR-21 mRNA (Figure 2B). To further analyze the roles of miR-21 in LPS-induced lipid accumulation in macrophages, we performed the HPLC assay. As shown in Figure 2C, overexpression of miR-21 prominently attenuated LPS-induced lipid deposition, and the ratio of CE/TC decreased from 49.82% to 26.86%. Furthermore, silencing miR-21 through transfection with the anti-miR-21 inhibitor remarkably increased the LPS-induced CE/TC ratio. Additionally, Oil red O staining analysis suggested that overexpression of miR-21 obviously dampened LPS-triggered macrophage uptake of ox-LDL and abrogated the formation of lipid droplets (Figure 2D).
A corresponding increase in foam cell formation was also determined in anti-miR-21 inhibitor-transfected macrophages. Together, these results suggested that miR-21 overexpression inhibited LPS-induced foam cell formation, which was conversely enhanced when miR-21 was silenced in LPS-stimulated macrophages, indicating that miR-21 is a negative regulator of LPS-triggered macrophage foam cell formation. The TLR4-NF-κB pathway was responsible for miR-21-mediated lipid deposition in LPS-stimulated macrophages As a common receptor of LPS, TLR4 and its downstream signaling effector NF-κB are crucial for atherosclerotic plaque formation and coronary lesion progression [26,27]. To clarify the mechanism underlying miR-21-regulated lipid accumulation in macrophages, the TLR4-NF-κB pathway was examined. Western blotting analysis ascertained that transfection with miR-21 mimics strikingly impeded the expression levels of TLR4, as well as intranuclear NF-κB p65 levels (Figure 3A). Consistently, downregulation of miR-21 expression significantly increased the activation of TLR4 in LPS-induced macrophages, concomitant with the activation of intranuclear NF-κB p65. Together, our data indicated that miR-21 could dampen the LPS-induced activation of the TLR4-NF-κB pathway in macrophages. To further assess the correlation between miR-21-regulated TLR4 signaling and lipid accumulation in LPS-induced macrophages, we silenced the TLR4-NF-κB pathway. As shown in Figure 3B, pretreatment with a specific anti-TLR4 antibody dramatically abrogated TLR4 expression. Simultaneously, a significant inhibition of NF-κB activation was also manifested by preconditioning with the NF-κB inhibitor PDTC (Figure 3C). Further mechanism assays corroborated that anti-miR-21 inhibitor transfection significantly accelerated lipid deposition in LPS-stimulated macrophages, which was prominently attenuated by pretreatment with anti-TLR4 antibody, indicating that miR-21 silencing enhanced lipid accumulation through the LPS-activated TLR4 pathway (Figure 3D). Moreover, a similar decrease in lipid accumulation was validated in the PDTC-treated groups. Taken together, these results indicated that miR-21 mainly regulated LPS-stimulated lipid-laden macrophage foam cell formation via the TLR4-NF-κB pathway. miR-21 mediated the production of inflammatory cytokines by TLR4-NF-κB in LPS-induced macrophages During the progression of atherosclerotic plaques, the release of critical pro-inflammatory cytokines from macrophages, such as IL-6, IL-12 and TNF-α, is considered to be pivotal [5,14]. To further assess the effects of miR-21 on the LPS-induced inflammatory response in macrophages, we assessed the levels of the inflammatory cytokines IL-6 and IL-10 by ELISA. As shown in Figure 4A, LPS dramatically induced the production of IL-6, and this increase was significantly decreased by miR-21 mimics transfection in macrophages. Furthermore, the level of the anti-inflammatory cytokine IL-10 was obviously enhanced compared with the LPS-treated group. Consistently, the inhibition of miR-21 expression by anti-miR-21 inhibitor transfection notably augmented IL-6 levels (Figure 4B) and obviously decreased IL-10 levels (Figure 4C). Therefore, all of these results showed that miR-21 down-regulated the pro-inflammatory cytokine IL-6 and up-regulated the anti-inflammatory cytokine IL-10, indicating an important function in the inflammatory response of LPS-stimulated macrophages.
Activation of TLR4 induces the production of inflammatory cytokines that regulate immune responses, and the LPS/TLR4 signal transduction pathway is a prominent contributor to atherosclerotic plaque formation and instability [28,29]. To further clarify the mechanism underlying the miR-21-regulated macrophage inflammatory response, TLR4 signaling was examined. After blocking TLR4 activation with the specific antibody, IL-6 levels were strikingly attenuated compared with the control in LPS-stimulated macrophages (Figure 4D). Moreover, when the TLR4 downstream effector NF-κB was blocked with PDTC, IL-6 levels were significantly reduced in the LPS- and anti-miR-21-treated groups. IL-10 expression was correspondingly increased after preconditioning with the TLR4-specific antibody or the NF-κB inhibitor PDTC, compared with LPS stimulation plus con-Amb and DMSO. Taken together, these data suggest that miR-21 regulates the inflammatory response of LPS-stimulated macrophages by suppressing TLR4-NF-κB signaling.

Discussion. Cardiovascular disease has garnered increasing attention and has become the pre-eminent health problem worldwide as a leading cause of death and illness [1,3]. Atherosclerosis constitutes the single most important contributor to the outcome of cardiovascular diseases. Recently, numerous animal and cell experiments have focused on the miRNA profile of atherosclerotic processes, and a clear upregulation of miR-21 has been demonstrated in atherosclerotic plaques [23,30]. However, its function in the development of atherosclerotic plaques remains unclear. In this study, our results show that miR-21 levels were significantly increased and that miR-21 can negatively regulate lipid accumulation and inflammatory cytokine secretion in LPS-stimulated macrophages through TLR4-dependent signaling. LPS, one of the best-studied immunostimulatory components of bacteria, has been shown to enhance lipid deposition, lipid-driven macrophage foam cell formation, and inflammatory cytokine release, all of which contribute to atherosclerosis. In this study, overexpression of miR-21 dramatically decreased the CE/TC ratio, indicating a clear decrease in lipid accumulation in LPS-stimulated macrophages, whereas blocking miR-21 expression accelerated LPS-induced lipid deposition. Further analysis showed a notable reduction in foam cell formation when miR-21 was overexpressed in macrophages exposed to LPS, whereas, in contrast to the control group, inhibiting miR-21 expression dramatically induced lipid droplet formation in LPS-stimulated macrophages. Together, these results suggest that miR-21 negatively regulates LPS-induced lipid accumulation in macrophages.

Toll-like receptors (TLRs) play multiple roles in atherosclerosis and are highly expressed in atherosclerotic plaques. Among these receptors, TLR4 has drawn particular attention during the development of atherosclerosis. TLR4 is the receptor for LPS, and its deficiency significantly attenuates aortic atherosclerosis in ApoE−/− mice [31]. Moreover, lipid accumulation in circulating monocytes is significantly reduced in TLR4-deficient mice. Growing evidence indicates that TLR4 plays a very important role in macrophage foam cell formation, pointing to a critical role of TLR4 in atherosclerosis through the regulation of lipid deposition [32,33].
To elucidate the mechanism underlying miR-21-regulated, lipid-laden macrophage foam cell formation, the TLR4 pathway was examined. As expected, miR-21 overexpression markedly dampened the activation of TLR4 and its downstream effector NF-κB, whereas inhibition of miR-21 expression augmented TLR4-NF-κB activation. When TLR4 was blocked with its specific antibody, LPS-induced lipid accumulation was strikingly decreased in macrophages transfected with the anti-miR-21 inhibitor, and a similar reduction in lipid deposition was confirmed after pretreatment with PDTC. Hence, these results suggest that miR-21 negatively regulates lipid accumulation via the TLR4-NF-κB pathway in LPS-stimulated macrophages, implying an important role in the development of atherosclerosis.

During the past decade, the prominent role of inflammation in atherosclerosis and its implications have become appreciated. Inflammation is a major feature of atherosclerosis, and the release of abundant inflammatory molecules gives rise to abnormal foam cell formation and initiates the development of atherosclerotic lesions. Blocking macrophage inflammation by TGR5 activation attenuates atherosclerotic lesions, indicating a potential therapeutic avenue against atherosclerosis [34]. LPS is a potent inducer of the inflammatory response; therefore, we further analyzed the effect of miR-21 on LPS-triggered inflammation in macrophages. Following transfection with miR-21 mimics, the level of the pro-inflammatory cytokine IL-6 was dramatically attenuated, accompanied by an increase in the anti-inflammatory cytokine IL-10. The corresponding opposite changes in IL-6 and IL-10 were confirmed when miR-21 was silenced, indicating an important function of miR-21 in LPS-induced macrophage inflammation. As a key component of the innate immune response, TLR4 plays a pivotal role in the initiation and progression of atherosclerosis and can regulate the inflammatory response in macrophages via its downstream NF-κB signaling [29,35]. To further elucidate the mechanism underlying miR-21-mediated inflammatory cytokine secretion, we blocked the activation of TLR4 and NF-κB. After blocking TLR4, the increase in IL-6 and the decrease in IL-10 were significantly mitigated in miR-21-silenced cells, and similar changes in IL-6 and IL-10 were corroborated after preconditioning with PDTC in anti-miR-21 inhibitor-transfected macrophages. Together, these results indicate that miR-21 regulates macrophage inflammation and lipid accumulation via the TLR4-NF-κB signaling pathway. However, the mechanism underlying the inhibitory effect of miR-21 on TLR4-NF-κB remains unclear and needs to be explored in future work.

In conclusion, this study investigated a potential role of miR-21 in atherosclerosis. LPS induced the expression of miR-21 in a time- and dose-dependent manner, and further analysis showed that miR-21 negatively regulates lipid-laden foam cell formation and inflammatory responses in LPS-stimulated macrophages through the TLR4-NF-κB pathway, indicating a critical role of miR-21 in the progression of atherosclerosis. Hence, the potential clinical benefits of miR-21 overexpression in the prevention and treatment of atherosclerosis deserve further investigation.
4,615.8
2014-02-07T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Importance of Surfactant Quantity and Quality on Growth Regime of Iron Oxide Nanoparticles. This study shows the influence of selected nonstandard surfactants on the growth and properties of magnetite nanoparticles. Particles were obtained by thermal decomposition of iron(III) acetylacetonate in an organic environment. For the synthesis, three different amounts (4, 8, and 16 mmol) of the tested surfactants were used. Five long-chain carboxylic acids and five amines were selected for stabilization of the nanoparticles. The nanoparticles were characterized by X-ray diffraction, transmission electron microscopy, and infrared spectroscopy, and their magnetic properties were tested by conventional room-temperature Mössbauer spectroscopy with and without an external magnetic field. TEM images clearly showed that the application of tertiary amines causes the nanoparticles to form nanoflowers, in contrast to the other compounds, which do not show such growth. The influence of the surfactant amount on the growth regime depends on the nature of the substance. Mössbauer spectroscopy confirms differences in magnetic core composition as a result of the surfactant amount present in the synthetic procedure.

Introduction. Nowadays, surface stabilizers are widely used in many areas of human life. They are present in detergents, emulsifiers, and foaming agents, as well as in antibiotics and herbicides. They possess the ability to adsorb and can thus change the surface of the objects on which they are present [1,2]. Owing to their special properties, some of these compounds can be used as substrates in nanoparticle synthesis, where they play the role of surface stabilizers, oxidation-protecting agents, and separators against aggregation (in the case of magnetic materials) [3,4]. On the other hand, properly dispersed particles can be easily manipulated by an external magnetic field [5]. In addition, when the size of particles decreases below the submicron scale, they gain new, extraordinary properties that are not typical of the reference bulk materials, concerning their optical, electric, magnetic, and thermodynamic behavior [6,7]. A large surface-to-volume ratio underlies their unusual chemical activity, and the surface termination establishes the final properties that determine their applications [8]. The existence of a large variety of nanostructures and possible surface stabilizers creates an ongoing need for more and more detailed studies. Since the growth regime is determined by competition between surface and core precursors, the same growth scenario cannot be expected even for chemically similar substances [9]. Our main goal is to find the best-controlled growth of magnetite nanoparticles from the selected compounds and to recognize the process dependences. Iron oxide nanoparticles are now widely used in IT memory media, environmental protection, food products, medicine, and other fields [10-13]. They can also be applied for targeted drug delivery in vivo, as heating centers, contrast agents, and biosensors, as well as adsorption centers for heavy metals in pollution detectors [14-18]. Their biocompatibility and antibacterial properties allow usage in a wide range of clinical tests; however, their surface has to have specific characteristics to be useful. Magnetic nanoparticles can also be applied in other areas of human activity, such as electronics and data carriers, in industry for the production of antibacterial self-cleaning surfaces, or in environmental protection as water cleaners or catalysts [19-22].
Due to their wide range of possible applications, magnetic nanoparticles are continuously and heavily studied in order to obtain more efficient materials, which can be realized through small size and narrow size distribution, proper chemical activity, and stability in various environments [23]. Therefore, detailed studies of their composition and its relation to the final useful properties are still needed. The literature shows that by adjusting the reaction conditions (time and temperature of synthesis, type of surfactant, and precursor concentrations), control of nanocrystal size and growth regime can be achieved [24,25]. For example, a high surfactant/precursor ratio favors the creation of a large number of small nanocrystals over the growth of existing particles. In addition, nanoparticle size can be controlled by binding different kinds of surfactants to the surface of the growing grains: short-chain stabilizers allow faster growth, which results in bigger particles, whereas long-chain compounds favor a slower rate [26]. There is also growing interest in the application of nanoparticles in magnetic hyperthermia; in this area, not only the core composition and size are important but also the surface termination, which governs the distance between heating centers [27-29]. Considerable attention has also been paid to tumor treatment via thermally stimulated drug delivery based on various types of functionalized magnetic particles [30,31]. In summary, any bio-related application needs well-characterized materials whose description includes both fundamental knowledge and usage-related tests. The interplay between the ratio of inorganic core precursors and organic surfactants can give rise to several growth regimes, which yield nanoparticles of well-defined size rather than a scattered mixture. Our previous studies suggest a clear dependence of the growth regime on the selected surfactant [3]. The present study describes the influence of the amount of substrates used on the size and properties of the nanoparticles.

Material and Apparatus. In order to obtain magnetite nanoparticles (MNPs) with various surface stabilizer types and thicknesses, the following substrates were purchased from Sigma-Aldrich: Fe(acac)3, 1,2-hexadecanediol, phenyl ether, oleylamine, lauric acid, palmitic acid, stearic acid, caprylic acid, trioctylamine, hexylamine, dioctylamine, and triethylamine. Acetone and oleic acid were purchased from Polish Chemical Compounds (POCH). The purification process and subsequent separation of nanoparticles from unreacted residuals were performed by simultaneous use of acetone, a sonication bath, and a permanent magnet [32]. The crystallinity of all synthesized nanoparticles was checked by X-ray diffraction (XRD) (Agilent Technologies SuperNova diffractometer with a Mo micro-focused source, Kα2 = 0.0713067 nm). Diameter, shape, and size distribution were estimated based on transmission electron microscopy (TEM) (FEI Tecnai G2 X-TWIN 200 kV microscope, Hillsboro, OR, USA). Surface modification was verified by infrared (IR) spectrometry (Nicolet 6700 spectrometer working in reflection mode, Thermo Fisher Scientific, Waltham, MA, USA). Mössbauer spectra of all samples were obtained with a spectrometer working in constant-acceleration mode with a 57CoRh radioactive source. For each experiment, a spectrum of metallic iron foil (α-Fe) was used as a reference. All samples were measured at room temperature in transmission mode.
A hand-made permanent magnet setup was used as a source of a homogeneous 1.3 T external magnetic field at the sample position (for in-field Mössbauer spectroscopy).

Magnetite Nanoparticles Preparation Routine. The magnetite (Fe3O4) nanoparticles studied in this paper were obtained using a specific routine. First, Fe(acac)3, 1,2-hexadecanediol, and phenyl ether were mixed together in the same amounts for all trials. Then, one of three amounts (4, 8, or 16 mmol) of one of the surface stabilizers (oleic acid, lauric acid, palmitic acid, stearic acid, caprylic acid, trioctylamine, hexylamine, dioctylamine, triethylamine, or oleylamine) was added. Throughout the whole synthesis, an inert atmosphere was maintained by continuous argon flow. The temperature of the synthesis was kept around 260 °C for 30 min, corresponding to the boiling temperature of phenyl ether [33,34]. Afterwards, the nanoparticle solution was rinsed with deoxygenated acetone and dried under vacuum with an evaporator until a powder was obtained. Table 1 summarizes the notation of the prepared nanoparticles, and the scheme of the synthesis is presented in Figure 1.

Transmission Electron Microscopy. Qualitative and quantitative analysis of nanoparticle morphology (size, shape, and size distribution) was based on transmission electron microscopy. TEM images were collected in series and can be found in Figure 2 (carboxylic acids, panel (A); amines, panel (B)). Detailed analysis of the TEM images (Figure 2) shows that the application of different surfactants at various concentrations can determine the morphology, diameter, shape, and size distribution of the nanoparticles. In general, the use of amines results in more evenly shaped nanostructures. Furthermore, with an increase in surfactant concentration, there was a decrease in average particle size. Some of the obtained nanoparticles aggregate strongly (Fe3O4 + 4 mmol of OA, Fe3O4 + 16 mmol of TEA) into indistinct shapes. The most regular, homogeneous, and well-separated nanoparticles were obtained for Fe3O4 + 4 mmol of OLA, Fe3O4 + 4 mmol of HA, Fe3O4 + 16 mmol of HA, Fe3O4 + 4 mmol of DOA, and Fe3O4 + 16 mmol of DOA.
In a few cases, a self-organization ability can be observed, which is only present for well-defined nanoparticles with a narrow size distribution [35]. From the obtained images, it can also be observed that in the case of the tertiary amines (trioctylamine and triethylamine), the particles grow as nanoflowers, regardless of the amount of surfactant used. Nanoparticles obtained in the presence of organic acids have a wider size distribution and are less regular in shape than nanoparticles synthesized with amines. Here, a clear conclusion cannot be drawn because the growth changes case by case: the size of the particles increases with the concentration of LA, whereas the presence of CA causes the growth of rectangular or triangular particles. Quantitative analysis of the nanoparticles gives the size of each type of nanoparticle; the obtained values for each series are presented in Table 2.

X-ray Diffraction. With the use of a micro-focused X-ray diffractometer, the crystal structure and composition of the iron oxide nanoparticles grown in the presence of the different surfactants were measured. The obtained XRD patterns, grouped in series by concentration, are depicted in Figure 3, along with a summary of the calculated average grain sizes, lattice constants, and strains (Table 2). All diffractograms presented in Figure 3A show the typical inverse spinel set of patterns, indicating growth of a magnetite/maghemite structure. According to Miller nomenclature, the present peaks can be indexed as (220), (311), (400), (422), (511), and (440) [36]. The positions on the 2θ axis and the relative intensities of the observed signals prove the presence of magnetite/maghemite phases. No other iron oxides are observed, which confirms the universality of the chosen fabrication procedure.
Zooming in on the (311) peak (Figure 3B) allows the TEM images to be correlated with the quality and eventual transformation of the particles' crystal structure, which is reflected as a small shift of the peak maximum on the 2θ axis and changes in its linewidth. The obtained values for each type of particle (lattice constant, strain, and grain size) are presented in Table 2. Lattice constant values vary between 8.35 and 8.40 (± 0.02) Å, which is in agreement with the literature values of the bulk magnetite and maghemite lattice constants, 8.39 Å [37] and 8.33 Å [38], respectively. However, in most cases, the obtained numbers are closer to the magnetite value. Therefore, it can be concluded that the amount of the maghemite phase is very small and probably present only at the nanoparticle surface. Nanoparticle grain size and strain, which can indicate crystal imperfections, were calculated with the Williamson-Hall equation (1) [39]:

β cos θ = λ/D + 4ε sin θ, (1)

where β is the full width at half maximum (rad), θ the diffraction angle (rad), λ the wavelength (Å), D the grain size (Å), and ε the dimensionless strain. From the obtained values (Table 2), it can be seen that the grain size of the nanoparticles strongly depends on the type and amount of the surface stabilizers used.
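To make the Williamson-Hall analysis above concrete, the following Python sketch fits grain size and strain from a set of reflections; the peak positions and widths in it are invented placeholders, not values from Table 2 or this study.

```python
# Minimal Williamson-Hall sketch: beta*cos(theta) = lambda/D + 4*eps*sin(theta).
# Peak positions/widths below are made-up placeholders, not values from this study.
import numpy as np

wavelength = 0.711  # approximate Mo K-alpha wavelength in angstroms

# Hypothetical 2-theta peak centers (deg) and FWHM values (deg) for a few reflections.
two_theta_deg = np.array([13.6, 16.0, 19.7, 24.2, 26.5])
fwhm_deg      = np.array([0.55, 0.60, 0.65, 0.72, 0.78])

theta = np.deg2rad(two_theta_deg / 2.0)
beta  = np.deg2rad(fwhm_deg)          # FWHM converted to radians

x = 4.0 * np.sin(theta)               # abscissa of the Williamson-Hall plot
y = beta * np.cos(theta)              # ordinate of the Williamson-Hall plot

# Linear fit: slope = strain (dimensionless), intercept = wavelength / D.
strain, intercept = np.polyfit(x, y, 1)
grain_size = wavelength / intercept   # grain size D in angstroms

print(f"strain ~ {strain:.2e}, grain size ~ {grain_size / 10:.1f} nm")
```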
Therefore, not only the type of surfactant in the synthesis environment but also the amount of organic ingredients used in the procedure is vital in the production of nanoparticles. Both factors influence the nanoparticle growth process; however, their relative contributions cannot be summarized in a general rule. Strain decreases with larger amounts of surfactant, but not in all cases (LA, SA, TOA, and TEA are exceptions). Values of the average grain size vary between 6 and 14 (± 2) nm. It is observed that in the case of the amines, the crystallite diameter clearly decreases with concentration (oleylamine, hexylamine, dioctylamine), which is in line with the TEM observations. Some values disagree with the TEM data; nevertheless, the general trends are well reproduced. The biggest disagreement is seen in the TOA and TEA data, where the average diffracting zone is not well correlated with particle size owing to the internal structure of the particles. Such a discrepancy is acceptable because the comparison considers the results of two methods in which the sampling and the fitting of results differ; consequently, the amount of material probed is not the same for both, which is a source of inconsistency in the obtained values. It should also be noted that the fitted values are affected by uncertainty connected with the maghemite and magnetite crystal parameters, the composition distribution, crystal surface distortion, and the greater influence of strain in smaller particles [40]. Strain values were evaluated for all samples as well; the smallest particles and those with irregular morphology show strain values higher than 3.5 × 10⁻³.

IR Spectroscopy. The magnetic nanoparticles obtained in this study were analyzed by IR spectroscopy, which was used to detect changes induced on the nanoparticle surface by the use of the various surfactants. Spectra were collected for all samples, but owing to their similarity, only one set of each kind (acid or amine) is presented in this paper (Figure 4). The specific bands for nanoparticles coated with palmitic acid or hexylamine were clearly found, which proves successful surface functionalization. For clarity, the middle part of the spectra was omitted.
The selected IR spectra depicted in Figure 4 are typical for iron oxide nanoparticles coated with long-chain surfactants. The wide bands observed at 3500 cm⁻¹ are typical for O-H bonds, which can be present due to adsorbed water vapor. Signals at 2900-2800 cm⁻¹ are connected with the presence of C-H bonds in the carbon chains of all tested organic compounds. Strong signals in the spectral range 1420-1530 cm⁻¹ are typical for COO⁻ bonds present in the acetylacetonate groups (the iron oxide nanoparticle precursor). For the amine surfactants, signals at 1015-1020 cm⁻¹ can be seen, which indicate the presence of N-H bonds at the nanoparticle surface [41]. Every spectrum shows intense signals at around 580 cm⁻¹, which clearly proves the presence of Fe-O bonds in magnetite [42,43]. In most cases, a reasonable ratio of the Fe-O signals to those typical for the surfactant carbon chain is observed; it changes with the amount of surfactant used and/or the particle size.

Mössbauer Spectroscopy. Studies of the effect of the coating on the magnetic properties of the magnetite nanoparticles were conducted by room-temperature (RT) Mössbauer spectroscopy. Figure 5 shows the measured spectra, which are grouped in series depending on the surfactant used and its concentration. The Mössbauer spectra presented in Figure 6 were measured at RT in an external magnetic field of 1.3 T parallel to the gamma-beam direction. As seen from the Mössbauer spectra presented in Figure 5, the magnetic response of each type of nanoparticle varies with nanoparticle size and surfactant concentration, which is directly correlated with particle-particle interactions through the surfactant layers.
Surface stabilizers change the growth regime in relation to their concentration in the reacting volume. In addition, stabilizers can prevent oxidation during the preparation of the nanoparticles. The shapes of the spectra measured for nanoparticles coated with OA, LA, PA, SA, and CA change from a broad sextet, typical for superparamagnetic fluctuation of the Fe magnetic moments, at low surfactant concentration (4 or 8 mmol), to spectra typical for bulk magnetite, which can be described as a superposition of two sharp subspectra with distinct hyperfine parameters, at 16 mmol. The latter arise from Fe atoms located in the [A] and [B] positions of the magnetite inverse spinel structure [44]. Such a conversion of the spectral shapes means that the increase in surfactant concentration leads to suppression of the superparamagnetic fluctuation of the Fe magnetic moments. For OLA, TOA, HA, DOA, and TEA, the influence of surfactant concentration on the shape of the spectra is rather small. It was expected that increasing the surfactant concentration should enhance the superparamagnetic fluctuation because of the weakening of the dipole-dipole interaction between nanoparticles, which is related to the increase in the distance separating the objects; such a scenario also prevents the agglomeration process that easily appears among magnetic grains. Our observations show the contrary. In order to obtain more information on the arrangement of the Fe magnetic moments, measurements in an external magnetic field were carried out (Figure 6). Almost all of the Mössbauer spectra measured in the external magnetic field (except TEA) show no superparamagnetic fluctuation of the Fe magnetic moments at RT.
This means that an external magnetic field of 1.3 T is enough to suppress the superparamagnetic behavior in the studied samples. Moreover, the intensities of the 2nd and 5th lines in the spectra are reduced to almost zero. The reason is that the studied nanoparticles are magnetically very soft and their magnetic moments are easily aligned parallel to the external magnetic field direction. Such a response of the magnetic moments to an external magnetic field is typical for magnetite and/or maghemite nanoparticles, in contrast to hematite nanoparticles, where the magnetic moments tend to align perpendicular to the external magnetic field [45]. Since measurements in an external magnetic field suppress the superparamagnetic fluctuation of the Fe magnetic moments, the spectra become less complicated and better resolved. Therefore, the shapes of the spectra can be distinguished as magnetite (two separate sextets) or maghemite (two combined sextets). The results obtained for nanoparticles coated with OA, LA, PA, SA, and CA show that the highest surfactant concentration (16 mmol) ensures growth of magnetite nanoparticles, while concentrations of 4 and 8 mmol lead to maghemite or so-called nonstoichiometric magnetite [46]. This means that increasing the surfactant concentration prevents surface oxidation during the preparation procedure. The scenario of a variable nanoparticle growth regime as a function of surfactant concentration can explain the suppression of superparamagnetic fluctuation at higher surfactant quantities. Magnetite nanoparticles are observed to be more stable with regard to superparamagnetic fluctuations than maghemite ones [47], which gives them an advantage in eventual magnetic hyperthermia applications [48]. In the case of OLA, TOA, HA, DOA, and TEA, such an effect is not visible: increasing the stabilizer concentration does not much influence the shapes of the measured spectra.
Conclusions. The presented results indicate the importance of choosing the proper type and amount of surfactant for nanoparticle synthesis. Surfactants influence not only the separation of the nanoparticles but also their size, shape, and magnetic response to an external magnetic field. Particle growth depends on the characteristics of the surfactant as well as its amount with respect to the core precursors. In general, amines ensure the growth of more evenly shaped particles in comparison with fatty acids; tertiary amines, in turn, cause flower-like nanoparticle growth. The Mössbauer spectra demonstrate that higher surfactant concentrations produce a magnetite-like nanoparticle structure, while lower concentrations lead to maghemite-like nanoparticles. In-field Mössbauer measurements show evident contributions from both the maghemite and magnetite parts of the spectra. Detailed studies of the physicochemical properties of magnetic nanoparticles can significantly contribute to the development of nanoparticles that can be successfully applied in magnetic hyperthermia treatment. Therefore, the described data show the important role of surfactants during the synthesis of iron oxide nanoparticles.
6,980.4
2020-04-01T00:00:00.000
[ "Materials Science" ]
Natural Language Processing with Improved Deep Learning Neural Networks. As one of the core tasks in the field of natural language processing, syntactic analysis has always been a hot topic for researchers, underpinning tasks such as question answering (Q&A), search string comprehension, semantic analysis, and knowledge base construction. This paper studies the application of deep learning and neural networks to natural language syntactic analysis, which has significant research and application value. It first studies a transition-based dependency parser that uses a feed-forward neural network as a classifier; by analyzing the model, we carefully tuned its parameters to improve its performance. The paper then proposes a dependency parsing model based on a long short-term memory neural network. This model builds on the feed-forward neural network model described above, which is used as a feature extractor. After the feature extractor is pretrained, we use a long short-term memory neural network as the classifier of the transition actions, taking the features extracted by the parser as its input to train a recurrent neural network classifier optimized over whole sentences. The classifier can exploit not only the current configuration features but also richer information such as the analysis history. In this way, the model covers the analysis process of the entire sentence in syntactic parsing, replacing the approach of modeling each analysis state independently. The experimental results show that the model achieves a larger performance improvement than the baseline methods.

Introduction. The study of grammar in computational linguistics refers to the study of the specific structures and rules contained in language, such as finding the rules governing word order in sentences and classifying words [1]. Linear regularities in language can be expressed using methods such as language models and part-of-speech tagging. The nonlinear information in a sentence can be expressed using the syntactic structure or the dependency relations between words. Although this analysis and representation of sentence structure may not be the ultimate goal of natural language processing, it is often an important step toward solving such problems [2] and has important applications in areas such as search query understanding [3], question answering (QA) [4], and semantic parsing. Therefore, as one of the key technologies in many natural language applications, syntactic parsing [5] has always been a hot issue in natural language processing research, with significant scientific and practical value. Syntactic analysis is mainly divided into two types: syntactic structure parsing and dependency parsing [2]. The main purpose of syntactic structure analysis is to obtain the parse tree of a sentence, so it is often referred to as full syntactic parsing, or simply full parsing. The main purpose of dependency parsing is to obtain a tree-structured representation of the dependency relations between the words in a sentence, called a dependency tree. In the 1940s, researchers introduced the term "neural network" to describe biological information-processing systems [6]. The simplest kind, the feed-forward neural network, also known as the multilayer perceptron, has achieved good results in many application tasks, but due to the high computational complexity of the model, training is more difficult.
With the continuous improvement of computer performance, it has become possible to train large-scale, deep neural networks. As a result, deep learning methods have made major breakthroughs in multiple fields of machine learning. Deep learning learns intricate structural representations from large-scale data; this learning is achieved by adjusting the network parameters of the different layers of an artificial neural network with error-driven optimization algorithms through backpropagation. In recent years, deep convolutional networks have made great breakthroughs in graphics and image processing, video and audio processing, and other fields, while recurrent networks have achieved good results on sequence data such as text and speech [7]. The recurrent neural network initially achieved good results in handwritten digit recognition [8], and the well-known word vector algorithm Word2Vec was originally derived from a language model learned with an RNN [9]. Because of the vanishing-gradient defect of the recurrent neural network (RNN), Long Short-Term Memory (LSTM) was proposed [10]. With the recent popularity of deep learning methods, LSTM has also been applied to work such as dialogue systems [11] and language models [12]. Neural network models with an attention mechanism, proposed recently [13], have attracted the attention of researchers; the attention mechanism has been successfully applied to machine translation [14] and text summarization [15] with promising results. The main contributions of this paper are the following: (i) we propose a feed-forward neural network in which the parameters propagate unidirectionally; (ii) we use a neural network model as the classifier and the backpropagation algorithm as the learning algorithm; (iii) we propose a well-organized dataset to evaluate the proposed framework. The rest of the paper is structured as follows: Section 2 describes related work and critically analyzes and compares the work done so far. Section 3 describes the proposed methodology, including the materials and methods adopted in this study. Section 4 addresses the validity of the proposed methodology, the experimentation, and the discussion of the results produced. The work is finally concluded in Section 5.

Related Work. Concepts such as neural networks originated in the 1940s. After the 1980s, backpropagation was successfully applied to neural networks, and in 1989 the backpropagation algorithm was successfully applied to the training of a convolutional neural network. From 2006 onward, graphics processing units were used to train convolutional neural networks, setting off a new upsurge of neural network research. The early neural network models of the 1940s were very simple, usually having only one layer, and could not be learned from data. It was not until the 1960s that early neural networks were used for supervised learning, and the models became slightly more complicated, with a multilayer structure. In 1979, Fukushima [16] first proposed the concepts of convolutional neural networks and deep networks; after that, pooling and other related methods were proposed one after another. In 1986, the backpropagation algorithm was described by Rumelhart et al. [17], which greatly promoted the development of neural network research. A second factor is the emergence of several public datasets, which ensured that neural networks were no longer toy models. In the field of computer vision, there is the famous ImageNet [18].
In the field of natural language processing, there are the dataset published by Twitter and the Weibo data in the Chinese domain. Bengio et al. [19] proposed the use of a recurrent neural network to build a language model; the model uses the recurrent neural network to learn a distributed representation for each word while also modeling the word sequence. This model achieved better experimental results than the best n-gram models of the same period and can use more contextual information. Bordes et al. [20] proposed a method for learning structured embeddings using neural networks and a knowledge base; the experimental results of this method on WordNet and Freebase show that it can embed structured information. Mikolov et al. [21] proposed the continuous bag-of-words (CBOW) model, which predicts a word from the words around its position in a sentence; the same work also proposes the skip-gram model, which uses a word at a certain position in a sentence to predict the words around it. Based on these two models, Mikolov et al. [21] open-sourced the tool word2vec for training word vectors, which has been widely used. Kim [22] introduced the convolutional neural network to the sentence classification task of natural language processing; this work uses a convolutional neural network with two channels to extract features from sentences and finally classifies the extracted features. The experimental results show that the convolutional neural network is very effective for feature extraction from natural language. Similarly, Lauriola et al. [23] critically studied and analyzed the use of deep learning in natural language processing (NLP), summarizing the models, techniques, and tools used so far. Fathi and Shoja [24] also discuss the application of deep neural networks to natural language processing. Tai et al. [25] proposed a tree-structured long short-term memory neural network: because traditional recurrent neural networks are usually used to process linear sequences, this linear model may lose information for data types with internal structure such as natural language. Therefore, their model applies long short-term memory neural networks over the parse tree and has achieved good results in sentiment analysis. In summary, the key limitations of existing deep learning-based approaches to natural language processing include the following: deep neural network models are difficult to train because they need large amounts of data; training requires powerful, expensive graphics cards; there is no uniform representation method for different forms of data, such as text and images; and ambiguity in natural language text must be resolved at the word, phrase, and sentence levels. Moreover, deep learning algorithms are not good at inference and decision making, cannot directly handle symbols, are data-hungry and not suitable for small datasets, have difficulty handling long-tail phenomena, are hard to interpret because of their black-box nature, and have a high computational cost. Apart from these limitations, the strengths of deep neural networks include efficiency in pattern recognition, a data-driven approach, high performance on many problems, little or no domain knowledge needed in system construction, the feasibility of cross-modal processing, and gradient-based learning.

Material and Method. In this section, we discuss the recurrent neural network-based model.

Feed-Forward Neural Network.
As the first proposed neural network structure, the feed-forward neural network is the simplest kind of neural network. Inside it, the parameters propagate unidirectionally from the input layer to the output layer; Figure 1 shows a schematic diagram of a four-layer feed-forward neural network.

Recurrent Neural Network. Recurrent neural networks have been a hot topic in neural network research in recent years. The reason they have become a research flashpoint is that the feed-forward neural network (multilayer perceptron) cannot handle data with time-series relationships well. The time-recursive structure of the recurrent neural network permits it to learn the temporal information in the data, so it can solve this kind of task well (see Figure 2). At each time step, the activation values of the hidden layers are calculated recursively as follows (t from 1 to T, n from 2 to N, where N is the number of hidden layers):

h_t^n = σ(W_{ih^n} x_t + W_{h^{n-1}h^n} h_t^{n-1} + W_{h^n h^n} h_{t-1}^n + b_h^n), (1)

where W denotes a parameter matrix (for example, W_{ih^n} is the connection weight matrix from the input layer to the n-th hidden layer), b is a bias vector, and σ is the activation function. Given the hidden-layer sequence, the output sequence can be calculated as

ŷ_t = b_y + Σ_{n=1}^{N} W_{h^n y} h_t^n,  y_t = Y(ŷ_t). (2)

The output vector y_t is used to estimate the probability distribution Pr(x_{t+1} | y_t) of the input x_{t+1} at the next moment. The loss function L(X) of the entire network is then

L(X) = − Σ_t log Pr(x_{t+1} | y_t). (3)

Similar to the feed-forward neural network, the partial derivatives of the loss function with respect to the network parameters can be obtained using backpropagation through time, and gradient descent is used to learn the parameters of the network, as shown in Figure 3. Because of the advantages of recurrent neural networks on time series, in recent years many researchers in natural language processing have applied them to machine translation, language modeling, semantic role labeling, and part-of-speech tagging, with good results.

Realization of the Learning Algorithm and Classification Model. As an essential part of the syntactic analyzer, the role of the classification model is to predict the analysis action, and the role of the learning algorithm is to learn the parameters of the model from training data. In this model, we use a neural network as the classifier and, accordingly, the backpropagation algorithm as the learning algorithm. In this section, the precise implementation of the classification model is introduced, and some details of model learning are described later. The role of the embedding layer of the network is to convert the sparse representation of each feature into a dense representation. The embedding layer is divided into three parts: a word embedding layer, a part-of-speech embedding layer, and a dependency-arc embedding layer. The three embedding layers obtain input from the three different feature types in the input layer. It is worth noting that, compared with the size of the dictionary, the value sets of the part-of-speech tags and dependency arcs are relatively small, so the dimensions of the part-of-speech and arc embeddings are smaller than the dimension of the word embeddings. Specifically, a word feature in the analysis configuration c is mapped to a d_w-dimensional vector e_w ∈ R^{d_w}, with embedding matrix E_w ∈ R^{d_w × N_w}, where N_w is the dictionary size.
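As a small illustration of the sparse-to-dense conversion just described, the following Python sketch looks up word-feature embeddings from a matrix of the shape E_w ∈ R^{d_w × N_w}. The vocabulary size, embedding dimension, and feature indices are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the embedding lookup: sparse word-feature IDs -> dense vectors.
# Vocabulary size, embedding dimension, and feature IDs are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_w = 10_000   # hypothetical dictionary size
d_w = 50       # hypothetical word-embedding dimension

# Embedding matrix E_w with one d_w-dimensional column per vocabulary entry.
E_w = rng.normal(scale=0.01, size=(d_w, N_w))

# Hypothetical word-feature IDs extracted from one parser configuration
# (e.g., top-of-stack and front-of-buffer words).
word_feature_ids = np.array([17, 42, 3, 0])

# Lookup: each sparse ID is replaced by its dense embedding column.
e_w = E_w[:, word_feature_ids]        # shape (d_w, number of word features)
dense_input = e_w.T.reshape(-1)       # concatenated dense representation

print(dense_input.shape)              # (4 * d_w,)
```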
Similarly, the part-of-speech features and dependency-arc features are mapped to e_p ∈ R^{d_p} and e_l ∈ R^{d_l}. After the conversion is completed, the layer outputs 48 dense features, each of which is a real-valued vector. The hidden layer of the model concatenates the 48 output features of the embedding layer end-to-end to form a feature vector x_h and performs linear and nonlinear transformations on it. Specifically, the nonlinear transformation is a cube activation function:

h = (W_1 x_h + b_1)^3,

where W_1 ∈ R^{d_h × d_{x_h}} is the parameter matrix of the hidden layer, d_{x_h} = 18·d_w + 18·d_p + 12·d_l, and b_1 is the bias vector. The last layer of the network is the softmax layer, whose role is to predict the probability distribution over analysis actions:

p = softmax(W_2 h + b_2),

where W_2 is the parameter matrix of the softmax layer, b_2 is the bias vector, and the output ranges over τ, the set of all actions in the dependency parsing system. After the probability distribution over analysis actions predicted by the model is obtained, the loss function of the network can be calculated. As in a general multiclass classification problem, we use the cross-entropy loss. In fact, the classification task is to select the one correct action among the candidate analysis actions, so the loss function simplifies to

L(Θ) = − Σ_{t ∈ A} log p_t + (λ/2)‖Θ‖²,

where A is the set of correct analysis actions of the batch, λ is the regularization parameter, and Θ denotes the model parameters. The classifier in the dependency parser is a neural network classifier, and its learning algorithm is the same as for a general neural network, namely backpropagation. Using the backpropagation algorithm, the gradient of the loss function with respect to the parameters is obtained, and then gradient descent is used to update the parameters of the model.
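The following Python sketch pulls the pieces above together: 48 embedded features are concatenated, passed through the cube activation, and scored with a softmax over transition actions, with the simplified cross-entropy loss evaluated for one correct action. All dimensions, vocabulary sizes, and the number of actions are hypothetical placeholders, and the sketch covers only the forward pass, not the backpropagation update.

```python
# Minimal forward-pass sketch of the embedding + cube-activation classifier.
# All sizes below are made-up placeholders, not the paper's actual settings.
import numpy as np

rng = np.random.default_rng(0)

d_w, d_p, d_l = 50, 20, 20            # hypothetical embedding dimensions
n_w, n_p, n_l = 18, 18, 12            # 18 word, 18 POS, 12 arc features
d_x = n_w * d_w + n_p * d_p + n_l * d_l
d_h, n_actions = 200, 3               # hidden size, |tau| (hypothetical)

# Hypothetical embedding matrices and already-extracted feature IDs.
E_w = rng.normal(0.0, 0.01, (d_w, 10_000))
E_p = rng.normal(0.0, 0.01, (d_p, 50))
E_l = rng.normal(0.0, 0.01, (d_l, 40))
word_ids = rng.integers(0, 10_000, n_w)
pos_ids  = rng.integers(0, 50, n_p)
arc_ids  = rng.integers(0, 40, n_l)

# Embedding lookup and end-to-end concatenation into x_h.
x_h = np.concatenate([E_w[:, word_ids].T.ravel(),
                      E_p[:, pos_ids].T.ravel(),
                      E_l[:, arc_ids].T.ravel()])

# Hidden layer with cube activation, then softmax scoring of the actions.
W1, b1 = rng.normal(0.0, 0.01, (d_h, d_x)), np.zeros(d_h)
W2, b2 = rng.normal(0.0, 0.01, (n_actions, d_h)), np.zeros(n_actions)

h = (W1 @ x_h + b1) ** 3
scores = W2 @ h + b2
p = np.exp(scores - scores.max())
p /= p.sum()

# Simplified cross-entropy for one configuration whose correct action is index 0,
# plus an L2 regularization term (lambda is a hypothetical value).
lam = 1e-8
loss = -np.log(p[0]) + lam / 2 * (np.sum(W1**2) + np.sum(W2**2))
print(p, loss)
```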
Among these gates, the input gate determines the proportion of the current input that can enter the memory cell, and the forget gate controls the proportion of the current memory that should be forgotten. For example, at time t, the long and short-term memory neural network is updated in the following way. At time t, given input x_t, calculate the values of the input gate i_t, the forget gate f_t, and the candidate memory C̃_t according to the following formulas:

i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + b_i),  f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + b_f),  C̃_t = tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c),

where σ is the component-wise logistic function and ⊙ is the component-wise product. At the same time, the value of the new memory cell and the output value are given as follows:

C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t,  o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + b_o),  h_t = o_t ⊙ tanh(C_t).

Experimental Data. Since batch training is required, and the analysis sequence lengths of sentences of different lengths are not the same, we adopted a masking method for training. Even so, because some sentences are too long, the other sentences in the batch are finished being processed and are left waiting for the long sentence to complete. Therefore, to train the model more quickly, we removed sentences with more than 70 words from the training process. There are 76 such sentences in total, accounting for 0.2% of the sentences in the training dataset. We believe that this will not affect the performance of the final model. After removing part of the training data and validation data, the actual data used are shown in Table 1 (columns: A, the total number of sentences; B, the number of projective sentences; C, the percentage of projective sentences; D, the number of sentences of at most 70 words; E, the percentage of sentences used relative to the projective sentences).

Evaluation Index. The analysis of phrase structure usually uses precision, recall, and the F1 value for evaluation. (2) Recall Rate. The recall rate in phrase structure analysis refers to the percentage of correct phrases in the analysis result relative to the total number of phrases in the test set:

R = (number of correct phrases in the analysis result) / (total number of phrases in the test set).

Experimental Results and Analysis. In addition to the comparison with the baseline method, this work is also compared with two other classic dependency parsers: Malt Parser and MST Parser. For Malt Parser, we used the stackproj and nivreeager options for training, which correspond to the arc-standard analysis algorithm and the arc-eager analysis algorithm, respectively. For MST Parser, we report the results given in Chen and Manning (2014). The test results are shown in Table 2. It can be seen from the table that the dependency syntax analyzer based on the long and short-term memory neural network is effective in modeling the analysis sequence of sentences. This model achieved 91.9% UAS accuracy and 90.5% LAS accuracy on the development set of the Penn Tree Bank, which is about a 0.7% improvement over the greedy neural network dependency parser of the baseline method. On the test set, our model achieved a UAS accuracy rate of 90.7% and an LAS accuracy rate of 89.0%, which is about a 0.6% improvement over the greedy neural network dependency parser of the baseline method. Compared with the most representative transition-based dependency parser, Malt Parser, our method has a relative improvement of about 1.4%; compared with the well-known graph-based MST Parser, our model obtains about a 0.5% improvement on the development set,
the UAS accuracy rate on the test set is comparable, and the LAS accuracy rate is improved by about 1.4%. The experimental results show that, compared with the greedy feed-forward neural network, the dependency syntax analysis model based on the long and short-term memory neural network performs better. Different from the greedy model, this model uses the long and short-term memory neural network to model the entire sentence and can use historical analysis information and historical pattern information to help classify analysis actions, thereby improving the performance of the dependency syntax analyzer. The results of testing on the Penn Tree Bank are shown in Table 3. In the testing process, this article uses beam search, with a beam size of 12. It can be seen from the data in the table that the dual attention mechanism can effectively reduce the number of errors in the output results. Among the valid outputs, the F1 value of the model reached 0.827, and its change over the training process is shown in Figure 5. The various error types change over the training process as shown in Figure 6. By linearizing the phrase structure tree of natural language, the phrase structure analysis task is transformed into a sequence-to-sequence conversion task. A simple implementation of the sequence-to-sequence model was carried out, and it was found that end-to-end analysis still needs rule restrictions on the decoder side. To this end, we propose a dual attention mechanism model, that is, a sequence-to-sequence model that introduces attention mechanisms at the input and the output at the same time. Experiments show that after the introduction of the dual attention mechanism model, the performance of the model on the test set is greatly improved, as shown in Table 4.

Conclusions. Syntactic analysis is an indispensable part of tasks such as question answering systems, search string comprehension, semantic analysis, and knowledge base construction. This paper studies a neural network model of dependency syntactic analysis based on transfer learning. This model uses a feed-forward neural network as the classifier in the dependency syntax analyzer and adjusts its parameters by analyzing the model to achieve better results. The experimental results show that, after this improvement, the performance of the model increases by 0.1 to 0.2 percentage points. We propose a dependency syntax analysis model based on long and short-term memory neural networks. This model is based on the neural network model, which is used as a feature extractor. Specifically, the model builds on the characteristics of the long and short-term memory neural network and uses it to memorize the analysis state and analysis history in the transition-based dependency syntactic analysis process, so that the model can capture and utilize more historical information. In addition, the model captures the analysis process of the entire sentence in dependency syntax analysis, improving on the greedy model, which models each analysis state independently. The experimental results show that, compared with the baseline method, the model obtains an improvement of 0.6 to 0.7 percentage points. Through this work and the error analysis, we can further study the dependency syntax analysis model based on the long and short-term memory neural network, and we found that the attention mechanism can be introduced into the model. Data Availability. The data used to support the findings of this study are included within the article.
Conflicts of Interest. The author declares no conflicts of interest regarding the publication of this paper. (Figure legend: number of errors; series: wrong word count, tree structure error, total number of errors.)
5,567.8
2022-01-07T00:00:00.000
[ "Computer Science", "Linguistics" ]
A Spectroscopic Study of the Insulator–Metal Transition in Liquid Hydrogen and Deuterium Abstract The insulator‐to‐metal transition in dense fluid hydrogen is an essential phenomenon in the study of gas giant planetary interiors and the physical and chemical behavior of highly compressed condensed matter. Using direct fast laser spectroscopy techniques to probe hydrogen and deuterium precompressed in a diamond anvil cell and laser heated on microsecond timescales, an onset of metal‐like reflectance is observed in the visible spectral range at P >150 GPa and T ≥ 3000 K. The reflectance increases rapidly with decreasing photon energy indicating free‐electron metallic behavior with a plasma edge in the visible spectral range at high temperatures. The reflectance spectra also suggest much longer electronic collision time (≥1 fs) than previously inferred, implying that metallic hydrogen at the conditions studied is not in the regime of saturated conductivity (Mott–Ioffe–Regel limit). The results confirm the existence of a semiconducting intermediate fluid hydrogen state en route to metallization. Introduction The insulator-to-metal transition (IMT) in hydrogen is one the most fundamental problems in condensed matter physics. [1] In spite of seeming simplicity of hydrogen (2p + 2e in the molecule), the behavior of this system at high compression remains poorly understood. The structural, chemical, and electronic properties of hydrogen and other molecular system are strongly dependent on pressure (density); at high pressures the relative stability of atomic over molecular configurations increases due to an increase in the electronic kinetic energy thus easing the transformation to a metallic state. [2,3] Principal challenges include understanding the intermediate paired, mixed, and monatomic states, both solid and fluid; [3,4] the mechanism and pressure-temperature (P-T) conditions of IMT; the location of critical and triple points related to a change in the transition character and implications to high-temperature superconductivity and the internal structure, composition, temperature, and magnetic fields of gas giant planets. [5][6][7] Currently, the IMT in hydrogen is expected to occur in two regimes: at low T (<600 K) in the dense solid, where quantum effects are expected to dominate and at high temperatures in the fluid state, where classical entropy must play an important role. In the former scenario there is a possibility of quantum melting where solid H 2 liquefies into a metallic quantum fluid. [8,9] However, recent investigations found solid hydrogen at low temperatures (<200 K) transforming to a conducting state with a narrow or zero bandgap above 360 GPa, [10,11] making uncertain the existence of a ground-state metallic fluid in this regime. The nature of the metallic fluid at high temperatures, as relevant to planetary interiors, also remains to be established, with questions persisting about electronic transport properties such as electrical conductivity [12][13][14][15][16][17][18] and the related chemical state. [18,19] The IMT in fluid hydrogen was initially predicted as the first-order transition ending in a critical point at very high temperatures (10-17 kK). [5][6][7] However, dynamic gas gun and laser driven experiments probing changes in electrical conductivity and optical reflectance found a continuous transition to a metallic state at 50-140 GPa [12,20,21] at lower temperatures, implying a critical temperature below 3 kK. 
First-principles theoretical calculations suggest values of ≈2 kK but yield very different critical pressures, and correspondingly positions of the transformation line. [22][23][24][25][26] Arguably, the coupled electron-ion Monte Carlo calculations [19] provide the most accurate predictions; they suggest that the dissociation and metallization transitions coincide (cf. ref. [18]), and the critical point is located near 80-170 GPa and 1600-3000 K. While dynamic compression experiments, which explore a variety of P-T pathways, agree on existence of the metallic states detected via electrical, optical, and density measurements, [12,14,15,20,21,27] lower temperature data show inconsistent results on the position of metallization and the optical character of intermediate states. [14,15] Static diamond anvil cell (DAC) experiments combined with laser heating probing similar low temperature fluid states have also yielded controversial results on the electronic properties of hydrogen and the location of the phase lines. [13,[28][29][30][31][32] The difficulty of interpreting these optical DAC experiments is due to indirect probing of the state of hydrogen, [30,31] or detection of reflectance signals superimposed with those of other materials in the DAC cavity and interpreted assuming a priori a direct transformation from insulator to metal. [13,29,32] The latter results, reporting transient reflectance and transmission at a few laser wavelengths, have been found inconsistent with the proposed IMT, while an indirect transformation via intermediate-conductivity states is a plausible alternative. [12,14,15,28,33,34] One of the major drawbacks of the majority of preceding dynamic and static experiments is an extreme paucity of robust spectroscopic observations, which are critical for assessing the material electronic properties. Here, we address the challenges raised above by exploring experimentally the electronic states of hydrogen and deuterium in the P-T range where the IMT was previously reported but not sufficiently characterized. To overcome the challenges in sustaining hydrogen at these conditions and probing it spectroscopically we applied microsecond single-to several-pulse laser heating in combination with pulsed broadband-laser probing (Figure S1, Supporting Information).
We show that the transition in P-T space includes several stages where hydrogen transforms from a transparent insulating state, to an optically absorptive narrow-gap semiconducting state, and finally to a metallic state of high reflectance. The metallic state exhibits a plasma edge in the visible spectral range, implying a plasma frequency and electronic scattering time that contrasts with previous inferences, [14,20,21] mainly based on the Mott-Ioffe-Regel (MIR) limit approximation in which the electronic meanfree-path reaches the interatomic spacing, and in stark disagreement with the prior static experiments probing hydrogen at few laser wavelengths. [13] Experimental Section A strong extinction of the transmitted light was detected when hydrogen was laser heated above a certain threshold laser heating power ( Figure S2, Supporting Information). The transient absorption reaches a maximum shortly after the arrival of the heating pulse, followed by a regaining of the transmitted signal. The absorption spectra, measurable only at lower temperatures where transmission remains detectable, consistently show an increased transparency toward lower energies similar to that reported previously for absorptive fluid hydrogen, suggesting that hydrogen in this regime is semiconductor-like with a bandgap of the order of ≈1 eV. [28] Transient reflectance signal in this regime shows a small, spectrally independent increase ( Figure S3, Supporting Information) which can be explained by a small change in the refractive index of H 2 (D 2 ) correlated with bandgap reduction. [20] In this regime, peak temperature measured radiometrically ( Figure S4, Supporting Information) tends to increase slowly with laser power, while the duration at which the sample remains hot (and thus emits) increases [28,29,33] ( Figure S5, Supporting Information); temperature increases more rapidly at higher laser power. At temperatures exceeding 3000 K, a strong transient reflectance signal from hydrogens was detected in all samples studied ( Reflectance of hot transformed hydrogen isotopes (H and D) exceeds the background reflectance substantially and is characterized by a spectrally variable magnitude. At the conditions where the reflective hydrogen forms, it has a sufficiently large emissivity so its thermal radiation can be reliably collected and spectrally analyzed, enabling direct determination of the sample temperature ( Figure S4, Supporting Information). [28] The reflectance spectra ( Figure 1) show a large increase to lower energy, a characteristic of metals. Within a single heating event, the reflectance reaches a maximum when the highest temperature is reached (just after heating pulse arrival) and diminishes as the sample cools ( Figure 1b). The overall reflectance value increases with the laser heating pulse energy (and hence the maximum sample temperature) (Figure 1c). These transient changes at high temperatures are reversible (Figure 1a), sometimes occurring with relatively smaller changes to the background attributed to laser absorber movement; thus, they must manifest a transition in the state of hydrogens at these extreme P-T conditions. As in our previous work, [28] Raman spectra measured before and after heating to the presently achieved conditions showed the vibron mode of hydrogen and do not show any extra peaks that could be related to irreversible chemical transformations that would occur as a result of exposure of hydrogen to extreme P-T conditions. 
The reflectance measurements of hydrogen all yielded qualitatively similar spectra (Figure S7, Supporting Information) with the pronounced increase in intensity toward low energy. These spectra can be fitted with a variety of different models, but it is found that a Drude free electron model (Supporting Information), which employs the plasma frequency Ω_P and the mean free time between electron collisions τ as the free parameters, fits the data well, yielding Ω_P = 2.72(5) eV and τ = 4.4(1.6) fs for deuterium, where the detected reflectance was largest (Figure 2). In these calculations, it is assumed that the refractive index of warm nonmetallic hydrogen in contact with metallic hydrogen is 3.0 at extreme P-T conditions, following recent dynamic compression measurements. [14] Furthermore, our reflectance data in the high frequency limit can only be accurately fitted by including a bound electron contribution to the electronic permittivity function of metallic hydrogen (ε_b = 3.1 for the representative case above). The uncertainty of our estimation of the DC conductivity σ_DC = Ω_P²τ is of the order of 30%, σ_DC = 6700(2400) S cm⁻¹. The reflectance spectra (Figure 1b,c) at various temperatures, varied either during cooling or by changing the laser heating energy, can also be fit with the Drude model. The results of time-domain experiments on cooling show a change in slope of the Drude parameters at the critical onset temperature T_c (Figure 3), where the reflectance becomes less than approximately 10%. The most prominent change is in the DC conductivity, which is almost constant above the onset transition (although the reflectance values vary) and starts dropping fast below T_c, manifesting the transition. This is qualitatively similar to the recently reported behavior of deuterium under ramp compression near 200 GPa, albeit probed as a function of pressure. [14] It is also found that τ decreases from the metallic state through the transition (Figure 3). (Figure 2 caption, recovered in part: a) comparison with prior DAC experiments [13] (triangles) using three monochromatic laser probes, and with laser-driven dynamic experiments for deuterium that used passive spectroscopy [15] (solid black lines with dashed gray interpolation) and monochromatic probing (star) [14]; b) optical conductivity from this work compared to that theoretically computed in ninefold compressed deuterium at 3000 K [17].) Furthermore, it is found that the electronic permittivity ε_b increases; although the plasma frequency increases in the metallic state, the "screened" plasma frequency Ω_P/√ε_b remains constant and drops in the semiconducting state (Figure S8, Supporting Information). These observations suggest an electronic oscillator frequency shift from high energy toward zero as metallization progresses, which is further supported by our optical absorption data (Figure 4) in the regime of low reflectance. This is a common feature in insulators undergoing metallization (e.g., ref. [35]), resulting from charges becoming increasingly less bound, while the scattering time also increases considerably into the metallic state, which can also be attributed to a transformation from localized to delocalized carriers. Discussion The DC (electrical) conductivities inferred here are in reasonable agreement with the results of theoretical calculations (≈10 000 S cm⁻¹) [16,17,19,[36][37][38] and compare well with the dynamic experiments on metallic hydrogen and deuterium. [12,14]
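To illustrate the kind of Drude analysis described above, the Python sketch below evaluates the normal-incidence reflectance of an interface between a transparent medium (refractive index 3.0, as assumed in the text) and a Drude metal with a bound-electron background, using the representative deuterium parameters quoted above, and converts Ω_P and τ into a DC conductivity. This is a minimal sketch for orientation, not the authors' fitting code; finite-layer-thickness effects noted in the figure captions are ignored.

```python
import numpy as np

# Representative Drude parameters for metallic deuterium quoted in the text.
omega_p = 2.72        # plasma frequency, eV
tau_fs  = 4.4         # electron collision time, fs
eps_b   = 3.1         # bound-electron contribution to the permittivity
n1      = 3.0         # refractive index of the surrounding warm nonmetallic hydrogen

hbar_eVs = 6.582e-16  # eV*s
eps0     = 8.854e-12  # F/m

def drude_eps(E):
    """Drude dielectric function vs photon energy E (eV), with damping gamma = hbar/tau."""
    gamma = hbar_eVs / (tau_fs * 1e-15)          # damping in eV
    return eps_b - omega_p**2 / (E**2 + 1j * E * gamma)

def reflectance(E):
    """Normal-incidence reflectance of the hydrogen / Drude-metal interface."""
    n2 = np.sqrt(drude_eps(E))
    return np.abs((n1 - n2) / (n1 + n2)) ** 2

E = np.linspace(1.2, 3.0, 10)                    # near-IR to visible photon energies, eV
print(np.round(reflectance(E), 3))               # rises steeply toward low energy (plasma edge)

# DC conductivity sigma_DC = eps0 * (omega_p/hbar)^2 * tau, converted to S/cm.
sigma_dc = eps0 * (omega_p / hbar_eVs) ** 2 * (tau_fs * 1e-15) / 100.0
print(round(sigma_dc))                           # ~6650 S/cm, consistent with the value quoted above
```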
However, our experiments suggest a more than an order of magnitude longer electronic collision time τ in the metallic state, implying that the conducting electrons in hydrogens at the conditions studied are not in the MIR limit. In the absence of spectral reflectance data, the validity of the MIR conditions was a common assumption in analyzing the dynamic compression data; [14,20,21] theoretical calculations were in general agreement, predicting a very damped Drude response [17,39] (Figure 2b). Our reflectance spectra are in partial disagreement with those reported in the dynamic experiments [15] (Figure 2a), though these refer to substantially different P-T conditions (Figure 5), do not cover the near-IR spectral range, are obtained on a hydrogen-LiF interface, and use a passive spectroscopy technique sensitive to diffuse scattering. This makes a direct comparison of reflectance spectra possibly inappropriate; however, some evidence of a sharper rising reflectance to lower energy, similar to that observed here, is noted in these data. Recent DAC experiments [13] at similar conditions to the present results report a value of σ_DC = 11 000 S cm⁻¹, which agrees broadly with our determination, but the reflectance results differ drastically (Figure 2a): a Drude fit to those data [13] yielded a larger plasma frequency (Ω_P = 20.4 eV) and smaller electron collision time (τ = 0.13 fs) compared to our results. The distinction between our results and those of ref. [13] is unlikely due to a difference in the probed P-T conditions. In fact, we find the onset of metallic conditions at higher temperature than in refs. [13,29], in better agreement with the results of dynamic experiments [12,14,15] (Figure 5) with regard to the P-T conditions of metallization.

(Figure caption, optical spectra: The spectrum of metallic deuterium at 150 GPa, 3000 K, determined from the Drude fit to reflectance and having σ_DC = 6700 S cm⁻¹, τ = 4.4 fs, is the solid blue line (after Figure 2). The measured spectrum of a transitional, poor-metal deuterium state at 150 GPa, T < 2700 K, given by the filled circles (red points in Figure S2c, Supporting Information), is best fit by a Smith-Drude model (solid black line), here with σ_DC = 35 S cm⁻¹, τ = 0.3 fs, C = −0.95. Prior data on semiconducting hydrogen [28] at 141 GPa and 2400 K (open symbols) are best fit by a Tauc model (dashed black line) with a gap energy of ≈1 eV, corresponding to σ_DC ≈ 15 S cm⁻¹, τ ≈ 0.03 fs. A 1 µm thick layer is assumed in the calculations.)

Figure 5. Phase diagram of hydrogen at extreme P-T conditions. Filled orange and filled crossed red circles indicate conditions of the metallic state detected via optical reflectance in this study for hydrogen (H) and deuterium (D), respectively. The large error bars (nearly 1000 K) are due to low sample emissivity and temperature gradients. Open orange and red upward triangles correspond to P-T conditions where H and D reflectance, respectively, was lower than a few percent and our Drude analysis shows a sharp decline in the DC conductivity (Figure 3). A thermal pressure of 2.5 GPa/1000 K is included. [28] Open and half-open blue circles are conditions in H where the onset of absorption occurs and where a semiconducting state (≈0.9 eV bandgap) was detected, respectively, directly measured using a similar DAC technique as in this work. [28] Open and filled pink circles (light gray triangles) are the results of gradual laser compression at NIF [14] (Z-machine [15]) corresponding to reaching the absorptive and reflecting D states, respectively. The solid brown square is the result of reverberating shock experiments detecting metallization of H by electrical conductivity measurements. [12,15] Filled black squares are IMT conditions measured in single-shock experiments in precompressed samples; no major differences between D and H were indicated. [21] The results of DAC optical experiments reported as an abrupt insulator-metal transition are shown by light green hexagons and a triangle for H and light green diamonds for D. [13,29,32] Solid cyan diamonds are DAC experiments in H showing a change in the temperature versus heating power dependence interpreted as a phase transformation to a metal. [30,31] The solid red and blue lines through the data are the suggested phase boundaries for semiconducting and metallic hydrogen. The melting curve and solid-state boundaries are from ref. [41].

The differences in inferred metallization conditions and spectral response may be due to the larger background signal in refs. [13,29], from a tungsten layer in the probed sample region, the optical properties of which at extreme P-T conditions are unknown. The sharp reflectance rise in the visible spectral range documented here is remarkable. We assign it to the presence of a plasma edge, common for many metals, for example gold and silver. Such electronic excitations with frequencies near the plasma edge are not unusual for simple metals; these would represent electronic transitions to excited bound states, which could correspond to weakly bound dimers of hydrogens. In this regard, we have attempted to reproduce our reflectance spectrum by using a two-oscillator model (Figure S9, Supporting Information). However, the DC conductivity in this model must be near σ_DC = 61 000 S cm⁻¹, an order of magnitude larger than for the Drude model, which is inconsistent with the dynamic electrical conductivity experiments. [12] The results presented here clearly demonstrate the existence of two transformation boundaries corresponding to the formation of absorptive and reflecting hydrogen (Figure 5). The one at lower P-T conditions has been established in dynamic [14,15] and DAC experiments. [28,30,31,33] It has been suggested that this boundary is related to a bandgap closure, [14,15,28,34] rather than the plasma transition. However, the absorption edge is broad (≈1 eV), [28] while the transition is rather abrupt (a few hundreds of degrees); such large temperature-driven bandgap changes are normally uncommon. This semiconducting state occupies a large P-T space (Figure 5), [28] while the new data suggest a rather abrupt metallization at higher P and T. The bandgap closure is usually treated as a pressure (density) driven transformation, while both the previous absorption and the present reflectance results indicate a strongly temperature-driven transition (see also refs. [28,31]). This suggests that the observed phenomena are related to the molecular instability and that the observed boundary corresponds to a temperature-driven partial molecular dissociation. Near the boundary at approximately 150 GPa, the molecular binding energy is approximately equal to the zero-point energy. [3,40] In this interpretation, upon increasing the temperature, molecules first begin to dissociate and recombine frequently, producing a state with a measurable electrical and optical conductivity. [12,14,28] It is not a metallic state, as the charge carriers are mostly localized.
To reach the metallic state one needs to dissociate a critical fraction of molecules (e.g., 40% [17] ) and enable nonlocal carrier transport, which occurs at higher P-T conditions. We note in this regard that semimetallic solid hydrogen state has been recently reported [10] at low temperature and higher pressure; however, the nature of that state is likely different emerging from the topology of the electronic band structure. Our data using direct temperature measurements show a reasonably good agreement with the results of dynamic experiments, [12,14,15,21,27] the majority of which is based on calculated temperature. Given this good consistency especially with the most recent calculated temperatures for dynamic compression experiments, [14,15] including updated calculations for ref. [12], our results suggest the basic accuracy of those calculations. DAC results reported by the Harvard group (green symbols) suggest a transition at ≈1000 K lower temperatures ( Figure 5). Our results do not suggest any major isotope effect (cf. ref. [32]), which is consistent with previous shock wave results. [12,21] The lines of conductance and metallization become closer in T at higher P ( Figure S10, Supporting Information) as expected on approaching a critical point, however the data suggest that they both would intersect the melting line first. The pressure range of 170 -250 GPa at temperatures just above the melting line can be expected to be anomalous. This P-T space has been probed in two recent high-temperature Raman experiments. [41,42] It is interesting that Zha et al. [42] detected an anomaly in the pressure dependence of the liquid hydrogen vibron band at 140-230 GPa, which can be related to the presence of conducting mixed molecular-atomic fluid hydrogen. However, they find that fluid hydrogen remains molecular at 300 GPa, which calls for improved P-T metrology in dynamic laser and resistively driven static experiments ( Figure S10, Supporting Information). Conclusions Our spectroscopic investigation of fluid hydrogens in the regime of molecular dissociation and metallization showed the complexity of the phenomena suggesting a two-stage transition with a semiconducting intermediate state preceding that of a free-electron metal. The reflectance spectra of the metallic hydrogens show the presence of a plasma edge, which allow constraining the electronic conductivity parameters. We find an electronic relaxation time that is much larger than previously inferred, suggesting that electronic transport is not in the MIR saturation regime as previously thought. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
4,965.8
2019-11-27T00:00:00.000
[ "Physics" ]
Reinvestigation of the biological activity of d-allo-ShK protein ShK toxin from the sea anemone Stichodactyla helianthus is a 35-residue protein that binds to the Kv1.3 ion channel with high affinity. Recently we determined the X-ray structure of ShK toxin by racemic crystallography, in the course of which we discovered that d-ShK has a near-background IC50 value ∼50,000 times lower than that of the l-ShK toxin. This lack of activity was at odds with previously reported results for an ShK diastereomer designated d-allo-ShK, for which significant biological activity had been observed in a similar receptor-blocking assay. As reported, d-allo-ShK was made up of d-amino acids, but with retention of the natural stereochemistry of the chiral side chains of the Ile and Thr residues, i.e. containing d-allo-Ile and d-allo-Thr along with d-amino acids and glycine. To understand its apparent biological activity, we set out to chemically synthesize d-allo-ShK and determine its X-ray structure by racemic crystallography. Using validated allo-Thr and allo-Ile, both l-allo-ShK and d-allo-ShK polypeptide chains were prepared by total chemical synthesis. Neither the l-allo-ShK nor the d-allo-ShK polypeptides folded, whereas both l-ShK and d-ShK folded smoothly under the same conditions. Re-examination of NMR spectra of the previously reported d-allo-ShK protein revealed that diagnostic Thr and Ile signals were the same as for authentic d-ShK. On the basis of these results, we conclude that the previously reported d-allo-ShK was in fact d-ShK, the true enantiomer of natural l-ShK toxin, and that the apparent biological activity may have arisen from inadvertent contamination with trace amounts of l-ShK toxin. ShK toxin is a cysteine-rich 35-residue protein ion channel ligand that was isolated from the sea anemone Stichodactyla helianthus (1). ShK toxin binds to the Kv1.3 ion channel with very high affinity (2). The Kv1.3 ion channel is present in human T lymphocytes as a homotetrameric protein (3), and it has been shown that Kv1.3 plays a major role in T-lymphocyte activation and proliferation. Further study has demonstrated that elevated expression of Kv1.3 ion channel on T lymphocytes is associated with autoimmune diseases such as type 1 diabetes mellitus and rheumatoid arthritis. Kv1.3 was therefore proposed as a promising therapeutic target for these autoimmune diseases (4,5). It has been shown numerous times that, as would be expected from symmetry considerations, a polypeptide chain made up of D-amino acids (and achiral Gly) that has the same amino acid sequence as the natural L-protein will fold to form a D-protein molecule that is the mirror image of the corresponding L-protein (6,7). Recently we reported the convergent total synthesis of wild-type ShK toxin and its mirror image form D-ShK, for determination of the ShK structure by racemic protein crystallography (8). During that work we demonstrated that our synthetic L-ShK, of the same L-protein chirality as ShK toxin found in nature, is biologically active in functional blockade of hKv1.3 currents using the cut-open oocyte voltage clamp method (9). Synthetic L-ShK blocked potassium currents in a concentration-dependent manner, with an IC 50 value of ϳ250 pM. This result was consistent with other reports using the same assay method (10). 
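As an aside on how a concentration-response value such as the ≈250 pM IC50 quoted above is commonly obtained, the Python sketch below fits a standard Hill (logistic) inhibition model with scipy. The data points are invented for illustration; this is not the authors' assay analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, hill):
    """Fraction of control current remaining at blocker concentration `conc`."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical concentration-response data: concentrations in pM, fraction of control current.
conc = np.array([10, 30, 100, 300, 1000, 3000, 10000], dtype=float)
frac = np.array([0.97, 0.90, 0.72, 0.45, 0.20, 0.08, 0.03])

# Fit IC50 and Hill coefficient; p0 provides rough starting guesses.
(ic50, hill), _ = curve_fit(hill_inhibition, conc, frac, p0=[250.0, 1.0])
print(f"IC50 ~ {ic50:.0f} pM, Hill coefficient ~ {hill:.2f}")
```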
In the same paper, we reported that synthetic D-ShK toxin was biologically inactive: D-ShK protein, the unnatural mirror image protein, had a near-background IC 50 value ϳ50,000 times lower than the L-ShK toxin (8). This observed lack of affinity of D-ShK for the target Kv1.3 ion channel is as expected, because chirality is believed to be of great importance in protein-protein interactions; a mirror image protein (i.e. a D-protein) molecule would not be expected to bind to the natural protein target of the corresponding L-protein (11,12). Surprisingly, it had been reported previously that a diastereomer of the mirror image D-ShK protein, designated D-allo-ShK, retained some biological activity and bound to the Kv1.3 ion channel (13). This D-allo-ShK protein was described as being made up of D-amino acids, but with retention of the natural stereochemistry of the chiral side chains of the Ile and Thr residues, i.e. made up of D-allo-Ile and D-allo-Thr residues, along with D-amino acids and glycine (13). Isoleucine and threonine are the only two genetically encoded amino acids bearing two chiral centers: one at the ␣ carbon atom and another at the ␤ carbon atom. Inversion of the stereochemistry only at the ␣ carbon does not convert L-Ile and L-Thr to D-Ile and D-Thr. Rather, the diastereomeric amino acids D-allo-Ile and D-allo-Thr are generated. Similarly, inversion of the ␣ carbon stereochemistry in D-Ile and D-Thr generates L-allo-Ile and L-allo-Thr (13) (Scheme 1). Beeton et al. (13) reported that a D-allo-ShK polypeptide had folding properties similar to those of the ShK polypeptide chain. The authors performed the functional blocking assay of Kv1.3 expressed in L929 cells in the whole cell configuration of patch clamp technique. They reported that D-allo-ShK blocked Kv1.3 with a K d value of 36 nM, whereas wild-type ShK blocked Kv1.3 with K d 13 pM in the same assay. That is, D-allo-ShK apparently had ϳ2800-fold lower affinity for Kv1.3 than wildtype ShK protein. The authors also reported that both D-allo-ShK and wild-type ShK were 2-fold selective for Kv1.3 channel over Kv1.1 channel (13). The apparent lack of activity of our D-ShK was at odds with these reported results for D-allo-ShK. We have therefore chemically synthesized authentic D-allo-ShK polypeptide chain and reinvestigated its folding properties and biological activity. We also intended to use racemic crystallography to determine the X-ray structure of the D-allo-ShK protein molecule, in order to correlate the folded structure of this unusual analog with its reported activity. For that reason, we set out to make both D-allo-ShK and its mirror image form, L-allo-ShK. Authentication of allo-Ile and allo-Thr We previously reported the development of methods for the rigorous characterization of the amino acids D-allo-Ile, L-allo-Ile, and D-allo-Thr, and L-allo-Thr (14). Analytical data, including optical rotations, NMR measurements, X-ray structures, and chromatographic separations of all four Ile and all four Thr amino acids as peptide diastereomers, confirmed the identities of the Ile and the Thr enantiomers and diastereomers that were used in the syntheses of the various ShK analog polypeptide chains in the work reported here. Total chemical synthesis of D-allo-ShK polypeptide chains The amino acid sequence of ShK is shown in Scheme 2. The synthetic strategy for the preparation of the D-allo-ShK polypeptide chain was the same as that used for the total synthesis of ShK toxin as previously described (8). 
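The stereochemistry point summarized in Scheme 1 (two stereocenters in Ile, hence four stereoisomers) can also be made concrete computationally. The RDKit sketch below enumerates the stereoisomers of isoleucine from a stereochemistry-free SMILES string; mapping the resulting R/S labels onto the L/D/allo names follows the conventions of Scheme 1 and is not asserted here.

```python
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import EnumerateStereoisomers

# Isoleucine written without stereochemistry: two stereocenters (alpha and beta carbons).
ile_flat = Chem.MolFromSmiles("CCC(C)C(N)C(=O)O")

# Enumerating both centers yields four stereoisomers, corresponding to
# L-Ile, D-Ile, L-allo-Ile, and D-allo-Ile (assignment of names to R/S pairs
# follows the conventions summarized in Scheme 1).
for iso in EnumerateStereoisomers(ile_flat):
    Chem.AssignStereochemistry(iso, cleanIt=True, force=True)
    print(Chem.MolToSmiles(iso), Chem.FindMolChiralCenters(iso, includeUnassigned=True))
```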
Unprotected peptide segments 1-Gln 16 -thioester and Cys 17 -35 were prepared by in situ neutralization t-butoxycarbonyl (Boc) 2 chemistry solid phase peptide synthesis (15). After purification by preparative HPLC and characterization by LC-MS, the two peptide segments were covalently condensed by native chemical ligation (16) at the Gln 16 -Cys 17 site, to give the full-length polypeptide chain. Native chemical ligation reaction data for the preparation of all five synthetic ShK polypeptide chains are shown in Fig. 1. Data for characterization of the purified polypeptides are shown in Fig. 2. Folding studies After the five purified full-length polypeptide chains were obtained, their folding properties were investigated (Fig. 3). Initially, the folding conditions used were the same as those used by Beeton et al. (13): 50 mM AcONH 4 , pH 8.1, polypeptide 0.5 mg/ml, air oxidation. For the true enantiomer D-ShK-CONH 2 polypeptide chain, the folding reaction proceeded smoothly, to give a single folded product of the correct mass, with retention time similar to the previously characterized correctly folded ShK toxin (Fig. 3a). We had previously observed similarly facile and effective folding in the total chemical synthesis of L-ShK and D-ShK (8). Surprisingly, none of the diastereomer allo-ShK polypeptide chains gave a discrete correctly folded protein product, even after 40 h (Fig. 3, b-e). Instead, each analog formed a complex mixture of components eluting later than the previously characterized correctly folded ShK. In each case, scrutiny of the LC-MS data using expected m/z values revealed only minute traces of a product corresponding to the calculated mass of a correctly folded allo-ShK protein product. After we found that air oxidation was not able to produce defined folded products for any of the allo-ShK polypeptide 2 The abbreviations used are: Boc, t-butoxycarbonyl; DNP, dinitrophenyl; TOCSY, total correlation spectroscopy. chains, we investigated the use of a conventional redox pair consisting of cysteine and cystine⅐2HCl to fold these polypeptides at pH 8.0. However, in our hands, the redox pair folding conditions also did not give a defined folded product for any of the allo-ShK polypeptide chains. In related work, using L-allo-Ile and L-allo-Thr amino acids authenticated by the same methods used here, we found that in contrast to the inability to fold the all allo-amino acid ShK polypeptide chains, inversion of the side chain stereochemistry of individual Thr or Ile residues in ShK gave polypeptide chains that folded to give protein products with the same three-dimensional structure as wild-type ShK (14). Introduction of single allo-amino acids had only minimal effect on the affinity of the resulting protein diastereomers for the Kv1.3 channel (14). The previously reported D-allo-ShK was actually D-ShK In their 2008 paper (13), Beeton et al. reported that a D-allo-ShK polypeptide chain could be folded and that the resulting D-allo-ShK protein molecule retained significant biological activity. In the work reported here, we found that allo-ShK polypeptide chains did not fold to give a protein molecule of defined three-dimensional structure under the conditions used by Beeton et al. As we had reported previously, the D-ShK-CONH 2 polypeptide chain folded readily, equally as well as the L-ShK polypeptide chain had in our previous studies (8), to give a single folded D-protein molecule. 
Based on these studies of authentic D-allo-ShK polypeptide chains, we suspected that the D-allo-ShK reported by Beeton et al. was in fact simply the mirror image D-ShK protein, because the reported folding properties of their D-allo-ShK polypeptide are the same as those we observed for wild-type ShK and for D-ShK-CONH2 polypeptide chains. To further investigate this possibility, we obtained NMR spectra of our authentic synthetic D-ShK (8) and reexamined the NMR spectrum previously obtained for the D-allo-ShK (13). The comparison of these two spectra showed that D-ShK and the reported D-allo-ShK were essentially identical. To be specific, the Hα chemical shifts of D-Ile4 and D-Ile7 in our D-ShK are 4.67 and 4.77 ppm (Fig. 4 and Table 1). The Hα chemical shifts of D-Ile4 and D-Ile7 in the reported D-allo-ShK are 4.66 and 4.76 ppm (13), which are identical (within 0.01 ppm) to the corresponding chemical shifts in the authentic D-ShK and L-ShK (17). If these Ile residues had been D-allo, their Hα chemical shifts would have differed from those in L-ShK.3 Based on the inability to fold allo-ShK polypeptide chains, in conjunction with these NMR data, we conclude that the D-allo-ShK reported by Beeton et al. was actually the mirror image D-ShK protein. Where did the reported biological activity of D-allo-ShK come from? As for the reported biological activity of the putative D-allo-ShK, we suspect that it may have come from contamination by traces of wild-type ShK toxin. Wild-type ShK is highly potent against its target Kv1.3, with an IC50 in the picomolar range, and wild-type ShK is 2-fold selective for Kv1.3 over Kv1.1 (10). The reported D-allo-ShK was also 2-fold selective for Kv1.3 over Kv1.1 (13). This selectivity supports the possibility of inadvertent contamination with wild-type ShK, which even in trace amounts could give the levels of activity reported for the putative D-allo-ShK. We have seen a similar contamination effect in our own hands (8). In that work, when we first tested our synthetic D-ShK for activity, we observed it to block Kv1.3 with an IC50 of ≈450 nM, which was ≈2000-fold less active than wild-type ShK in the same assay. This ≈2000-fold activity difference is comparable with the ≈2800-fold lower affinity for Kv1.3 than wild-type ShK reported by Beeton et al. (13) for D-allo-ShK. We suspected that the apparent activity of our synthetic D-ShK (8) may have come from contamination by trace amounts of wild-type ShK, because we had purified synthetic D-ShK using the same reverse-phase HPLC column on which wild-type synthetic ShK had been purified. Although the preparative HPLC column had been washed intensively between purification of the different compounds, there is no guarantee that trace cross-contamination cannot occur. Unfortunately, even the most stringent wash procedures of HPLC columns do not completely remove all traces of samples run previously. It has been reported that subsequent purification of a different peptide or protein can displace traces of previously purified materials, a phenomenon known as "sweep up" (18,19). Even trace contamination with a peptide or protein having picomolar affinity can lead to spurious activity results when the same preparative column is used repeatedly, as is often the case in most research laboratories. For that reason, in our previous work (8) we resynthesized the D-ShK protein from scratch.
The synthetic peptide segments, the ligated full-length polypeptide, and the folded D-ShK protein were purified using fresh columns that had never been exposed to wild-type ShK or to synthetic L-ShK. We then performed functional assays on the newly prepared D-ShK and found that synthetic D-ShK had essentially only background levels of Kv1.3 blocking activity (≈1/50,000 that of wild-type ShK) (8). These data suggested that cross-contamination by trace amounts of wild-type ShK toxin can occur readily, and it is likely that such inadvertent cross-contamination may have occurred for the previously reported D-allo-ShK (13). Concluding statement In summary, we have used total chemical synthesis to prepare authentic D-allo-ShK, D-allo-ShK-CONH2, L-allo-ShK, and L-allo-ShK-CONH2 polypeptide chains in which the side chain stereochemical configurations of all Thr and Ile residues were inverted. In our hands, none of these allo-amino acid-containing polypeptide chains could be folded into a protein molecule of defined tertiary structure, whereas corresponding control ShK polypeptide chains without allo-amino acids folded readily. These results cast doubt on the previously reported biological activity of D-allo-ShK. Based on the assay data we observed with our first, contaminated batch of D-ShK (8), and on the lack of folding observed for D-allo-ShK polypeptide chains in this study, together with re-examination of protein NMR data, we conclude that the previously reported D-allo-ShK was in fact D-ShK, the mirror image form of ShK. We suggest that the apparent biological activity of that putative D-allo-ShK protein (13) arose from inadvertent contamination with native ShK toxin during purification. (Table 1 caption: 1H NMR chemical shifts (in ppm) of authentic D-ShK. Spectra were acquired in 95% H2O, 5% 2H2O at 293 K and pH 4.9. Dioxane (chemical shift of 3.75 ppm) was used as a reference.)

Peptide synthesis Peptide segments were synthesized on a 0.2-mmol scale using manual in situ neutralization Boc chemistry protocols for stepwise solid phase peptide synthesis (15). Peptide-thioesters were synthesized on trityl-SCH2CH2CO-Ala-OCH2-Pam-resin (21). Peptides with a free α-COOH were synthesized on Boc-Cys-OCH2-Pam-resin prepared from aminomethyl-resin. Peptide segments with an α-CONH2 were first synthesized as peptide thioesters and then reacted with ammonium acetate at pH 9.0 for C-terminal amidation. After assembly of the target peptide and removal of the N-terminal Nα-Boc group, each peptide was cleaved from the resin and simultaneously deprotected by treatment at 0 °C for 1 h with anhydrous HF containing 5% (v/v) p-cresol as scavenger. After removal of HF by evaporation under reduced pressure, each crude peptide was precipitated and washed with ice-cold diethyl ether, then dissolved in 50% aqueous acetonitrile containing 0.1% TFA, and lyophilized. DNP removal Crude histidine-containing peptides (2 mM) in 10 ml of 6 M guanidine HCl, 200 mM Na2HPO4 buffer, were treated with 200 mM sodium 2-mercaptoethanesulfonate at pH 7.0 for 1 h to remove the DNP protecting group before purification by reverse-phase HPLC. Reverse-phase HPLC and LC-MS analysis Analytical reverse-phase HPLC and LC-MS were performed using an Agilent 1100 series HPLC system equipped with an online MSD ion trap. The column used was a Phenomenex Aeris WIDEPORE 3.6-µm C4, 150 × 4.6 mm.
Chromatographic separations were obtained using a linear gradient of 5-30% acetonitrile (with 0.08% TFA) in water (with 0.1% TFA) over 25 min, with column temperature 40°C. Flow rates were controlled at 0.9 ml/min. Peptide detection was based on UV absorption at 214 nm, and masses were obtained by online electrospray mass spectrometry. Preparative HPLC Peptide products from solid phase peptide synthesis or from ligation were purified using a Phenomenex Jupiter 5.0-m C4, 250 ϫ 10.0 mm column. A shallow gradient of acetonitrile (with 0.08% TFA) versus water (with 0.1% TFA) was designed for each peptide based on its elution characteristics. Flow rates were controlled at 5 ml/min. Fractions containing the desired pure peptide were identified by analytical LC and mass spectrometry, combined, and lyophilized. Native chemical ligation reactions to generate ShK polypeptide analogs Allo-ShK polypeptides were prepared by native chemical ligation following the same procedure used for the synthesis of L-ShK polypeptide, as described previously (8). NMR spectroscopy The sample for NMR was prepared by dissolving lyophilized authentic D-ShK protein (5) in 95% H 2 O, 5% 2 H 2 O at pH 4.9. One-dimensional 1 H and two-dimensional homonuclear TOCSY (spin lock time, 70 ms) and NOESY (mixing time, 200 ms) spectra were acquired at 20°C on a Bruker Avance3 600 MHz spectrometer. All spectra were processed in TopSpin (version 3.2; Bruker Biospin) and analyzed using CcpNmr analysis (version 2.1.5). 1 H chemical shifts were referenced to the 1,4-dioxane signal at 3.75 ppm. Chemical shift assignments for backbone and side-chain protons of D-ShK were made by conventional analysis of two-dimensional TOCSY and NOESY spectra. A complete assignment of the proton NMR signals of D-ShK was obtained.
4,097.8
2017-06-08T00:00:00.000
[ "Chemistry" ]
Characterization of the Histopathologic Features in Patients in the Early and Late Phases of Cutaneous Leishmaniasis Cutaneous leishmaniasis (CL), characterized by an ulcerated lesion, is the most common clinical form of human leishmaniasis. Before the ulcer develops, patients infected with Leishmania (Viannia) braziliensis present a small papule at the site of the sandfly bite, referred to as early cutaneous leishmaniasis (E-CL). Two to four weeks later the typical ulcer develops, which is considered here as late CL (L-CL). Although there is a great deal known about T-cell responses in patients with L-CL, there is little information about the in situ inflammatory response in E-CL. Histological sections of skin biopsies from 15 E-CL and 28 L-CL patients were stained by hematoxilin and eosin to measure the area infiltrated by cells, as well as tissue necrosis. Leishmania braziliensis amastigotes, CD4+, CD8+, CD20+, and CD68+ cells were identified and quantified by immunohistochemistry. The number of amastigotes in E-CL was higher than in L-CL, and the inflammation area was larger in classical ulcers than in E-CL. There was no relationship between the number of parasites and magnitude of the inflammation area, or with the lesion size. However, there was a direct correlation between the number of macrophages and the lesion size in E-CL, and between the number of macrophages and necrotic area throughout the course of the disease. These positive correlations suggest that macrophages are directly involved in the pathology of L. braziliensis–induced lesions. INTRODUCTION Leishmaniasis is a broad term for anthropological zoonotic diseases caused by trypanosomes of the genus Leishmania. American tegumentary leishmaniasis (ATL) is characterized by a spectrum of clinical features, including asymptomatic infection, cutaneous leishmaniasis (CL), mucosal leishmaniasis, and disseminated leishmaniasis. CL is the main clinical form of the disease and it is characterized by one or more well-limited ulcers with raised borders, which develop at the site of the bite of infected sandfly. However, before the classical ulcer appears, patients often develop a lymphadenopathy in the lymph nodes draining the infection site, followed by the appearance of a nodule with a small superficial ulceration, which characterizes early CL (E-CL). 1,2 The initial lesion increases in size and depth and between 4 and 6 weeks after the sandfly bite eventually forms an ulcer, the primary feature of late CL (L-CL). After the parasites are inoculated into the host, they interact with several different cell types, including macrophages, the major cell that harbors the parasite. Activation of macrophages by interferon (IFN)-γ + produced by CD4 + T cells contribute to control parasite growth, 3,4 whereas CD8 + T cells have been associated with pathology. [5][6][7] Histopathological studies in ulcers of L-CL patients show an increase in inflammatory response, with the participation of T cells, B cells, plasma cells, macrophages, and the development of a granuloma. [8][9][10][11] Although an intense lymphocyte proliferation and production of IFN-γ and tumor necrosis factor is induced on Leishmania antigen stimulation of peripheral blood mononuclear cells from patients with L-CL, 12 in the preulcerative phase of the disease, lymphocyte proliferation, and cytokine production is lower than in patients with L-CL. 
13 Nevertheless, when compared with healthy subjects, E-CL patients exhibit an increase in the frequency of inflammatory or intermediate monocytes, produce higher levels of proinflammatory cytokines, and exhibit substantial transcriptional changes at the infection site. 2,14 However, the histopathological features of E-CL have not been described. Therefore, in this study, we compared the histopathological features of biopsies from patients with E-CL and L-CL. We found that there are more parasites in biopsies from E-CL patients as compared with L-CL. Interestingly, there was no correlation between the number of parasites and the amount of inflammation or the size of the lesions. However, there was a direct correlation between the number of macrophages and the area of necrosis and the size of the ulcers. METHODS Study design. This is a cross-sectional study aimed at comparing the histopathological features of skin biopsies from patients with E-CL and L-CL. Patients were attended at the Health Post of Corte de Pedra, Bahia, Brazil, a reference center for the treatment of tegumentary leishmaniasis. All patients included in the study were adults. The study was carried out from April 2009 to May 2014. For every E-CL case selected and biopsied, two patients with L-CL were recruited, matched by age ±5 years. All patients denied a previous history of CL and were clinically examined before therapy. After CL diagnosis, all were treated with intravenous glucantime 20 mg/kg of body weight for 20 days, as per the recommendation of the Brazilian Ministry guidelines for CL. The clinical information used in this study was obtained from a public health clinic located in the rural countryside of the state of Bahia. Unfortunately, some data were incompletely recorded on the patient charts and, consequently, some analyses had missing data. The relevant sample size is consistently referenced in figures, tables, and descriptive texts. Biopsies and case definition. E-CL is defined by the presence of a papular lesion occurring, according to patient reporting, within approximately 30 days of being bitten by a phlebotomine.1 Patients with early cutaneous leishmaniasis seek medical attention due to the presence of a papular lesion associated with a painful regional lymphadenopathy. Ulcers typically appear 1-2 weeks after the appearance of papular lesions, which develop approximately 1-2 weeks after being bitten by a sandfly. Fifteen biopsies from E-CL and 28 from L-CL patients were analyzed. E-CL was defined by the presence of a papular lesion with less than 30 days of illness and a positive polymerase chain reaction (PCR) for Leishmania braziliensis. L-CL was defined by the presence of one ulcerated lesion with raised borders and a positive PCR for L. braziliensis. Only one skin fragment was obtained from each patient, using a 4-mm punch. This fragment was divided into two parts: one was processed for histological sections, which were stained with hematoxylin and eosin (H and E) and used for immunohistochemical analysis, and the other fragment was preserved in RNAlater for subsequent quantitative PCR. 15 Ethical considerations.
This study was approved by the Human Ethics Committee of the Research Center Gonçalo Moniz, Fiocruz, Bahia, protocol number 533.032/2014, and the Institutional Review Board of the Faculdade de Medicina da Bahia, Federal University of Bahia. A signed informed consent was obtained from all patients included in this study. Quantitative analysis. The cells quantification was performed using an optical microscope BX51 (Olympus, Center Valley, PA) coupled with digital camera system Q5 (Olympus) and imaging software Image-Pro Plus (Media Cybernetics, Rockville, MD) to the micrograph of the slides was used. Ten random fields of each section with the respective antibodies were photographed using a magnifying power ×400. In each field, the number of positive cells was quantified using the counting feature of the semiautomatic software ImageJ 1.48v (National Institutes of Health, Bethesda, MD). Positivity was defined with the identification of cells that reacted with the chromogenic substrate. Morphometry of inflammation and necrosis areas. The histological sections stained with H and E were scanned by an optical microscope BX61VS (Olympus, Center Valley, PA). The total extension of these sections as well as the areas of inflammatory infiltrate and necrosis was measured by Image J 1.48v (National Institutes of Health). The total length of the biopsy fragment and the sum of the areas of inflammation and necrosis are shown in mm 2 . The percentage (%) of inflammation and necrosis in the biopsies were calculated by dividing total extension of inflammation and necrosis in mm 2 by the total extension of the biopsy fragment multiplied by 100. Statistical analysis. For variables with normal distribution, we used the Student t test and post two-way analysis of variance test. For non-normal distribution, the nonparametric Mann-Whitney test was used. For correlations of normally distributed variables and non-normal, we used Pearson and Spearman tests, respectively. The strength of correlation was classified as: weak (r = 0.10-0.30), moderate (r = 0.40-0.60), and strong (r = 0.70-1). For comparison of the proportions, we used the Fisher's exact and χ 2 test. Statistical analysis was performed using GraphPad Prism 1.5 (GraphPad Software, Inc., La Jolla, CA). The results were considered statistically significant for P < 0.05. RESULTS Sociodemographic and clinical aspects. The sociodemographic and clinical features of the participants were stratified according to disease stage and are shown in Table 1. The age distribution was similar in the two groups. Males were more affected by the disease than women and the predominant localization of the lesions was in the lower limbs in both in E-CL and L-CL. The size of the Leishmania skin test was greater in patients with L-CL than in E-CL (P < 0.05) as well as the duration of the illness (P < 0.0001). Pictures of an E-CL lesion and a classical ulcer from L-CL are shown in Figure 1A and B, respectively, and the size of lesions in different periods of the disease is shown in Figure 1C. The increase in the size and depth of the initial lesion occurred mainly in the first 30 days of the disease from 52.1 ± 11.1 up to 346.8 ± 60.0 mm 2 . Identification of amastigotes and relationship between L. braziliensis amastigotes with illness duration and lesion size. Confirming what was observed in H and E (Figure 2A), tissue amastigotes were detected by immunohistochemistry using anti L. braziliensis IgG antibody ( Figure 2B). 
In biopsies from E-CL analyzed by TEM, amastigotes were seen within macrophages ( Figure 2C). Amastigotes were found mainly in the parasitophorous vacuole of macrophages, at the upper region of the dermis adjacent to the epidermis, as well as in areas of necrosis. No correlation was found between the number of amastigotes and areas of inflammation and necrosis (data not shown). The relationship between the parasite load with phase of the disease, duration of illness, and lesion size is shown in Figure 3. The parasitism was more intense in recent lesions than in classical ulcers. The number of parasites on 10 random fields under an optical microscope had a mean ± standard error of mean of 150 ± 56.6 for the E-CL and only 21 ± 8.0 in L-CL, P < 0.01 ( Figure 3A). The relationship between the number of amastigotes and illness duration and size of the lesion is shown in Figure 3B and C. The number of amastigotes decreased with the illness duration and with the lesion size but did not reach statistical significance. The number of amastigotes was higher in papular lesions and decreased at the time of the ulceration indicating that ulcer formation is associated with a reduced parasite burden in the site. Inflammatory cell profile. The inflammatory profile in both groups revealed a predominance of lymphocytes and CD68 + macrophages. CD4 + and CD8 + T lymphocytes were present throughout the sample from the dermal-epidermal junction and in granulomas. CD20 + B lymphocytes predominated in the middle portion of the dermis. The number of CD20 + cells did not differ significantly between early and late lesions, while there was an increase in the number of CD4 + and CD8 + T-cells in L-CL lesions compared with E-CL ( Figure 4). Macrophages were observed infiltrating the dermis at the junction of the epidermis and dermis up to hypodermis ( Figure 5A,B). Together with lymphocytes they were also present in areas of necrosis. Plasma cells, giant cells, and granulomas were seen. Vasculitis near the necrotic areas was also detected. Was observed a positive correlation observed between the number of macrophages and lesion size in E-CL ( Figure 5C and D), there was no correlation between the frequency of CD4 + and CD8 + T cells and lesion size in E-CL (P > 0.05). Areas of inflammation and necrosis. The area of inflammation and necrosis in E-CL and L-CL and the relationship between them with the size of the lesions are shown in Figure 6. As might be expected, the area of inflammation in E-CL (28.4% ± 4.2) was lower than in L-CL (49.8% ± 3.8) ( Figure 6A). Lytic necrosis was seen in small areas. There was no difference between area of necrosis in both groups analyzed ( Figure 6B). There was also no correlation between the percentage of the areas of inflammation and necrosis in both groups (data not shown). Although there was no association between CD68 + cells with the area of inflammation, there was a positive and significant correlation between the frequency of CD20 + B cells and the inflammation (R = 0.51; P < 0.05) (data not shown). There was a direct correlation between the number of CD68 + cells and the area of necrosis in both phases of the disease (E-CL and L-CL) ( Figure 7A and B). Finally, we found a direct correlation between the frequency of T and B cells and inflammation, but no correlation between the number of CD4 + and CD8 + T cells with the number of amastigotes, the lesion size, or the area of necrosis. 
DISCUSSION The cutaneous ulcer with raised borders is the most common presentation of CL occurring in more than 90% of patients infected with Leishmania Viannia braziliensis. However, before the ulcer appears in the skin, patients often present with a lymphadenopathy usually with a mild skin desquamation at the site of the parasite inoculation. This is followed by the development of a nodule that leads to a detectable ulcer. This initial phase of the disease characterizes E-CL. Although the immunopathology of the classical ulcerated lesion, a feature of L-CL, is well described, there is a lack of information about the histopathology in E-CL. In this study, we showed that parasite load is higher in E-CL than L-CL, but the parasite load is neither associated with the size of the ulcers or with ulcer development. Alternatively, there was a direct correlation between the frequency of macrophages with the area of necrosis and ulcer size. Different from CL caused by other Leishmania species in which parasites are easily found in the skin lesion, 17 in L. braziliensis ulcers, amastigotes are scarce or even absent under light microscopy examination. Here we showed that the amount of amastigotes was higher in E-CL than in L-CL biopsies. As expected, amastigotes were predominantly found inside macrophages, but parasites were also found outside of these cells and in the collagen of dermis. The absence of association between the parasite load and the area of inflammation, area of necrosis, and size of the ulcer suggest that the parasite load does not play a direct role in lesion development. This finding is in agreement with a previous report of a disparity between parasite numbers and the intensity of the inflammatory and necrotic events in L-CL. 18 As macrophages are the main cells responsible for Leishmania killing, one could expect an inverse correlation between the numbers of CD68 + cells and amastigotes. Interestingly, we found a direct correlation between the frequency of CD68 + cells and number of amastigotes and there was an association between macrophages number and the area of necrosis in both E-CL and L-CL and between majority and size of the ulcer and CL. It has been shown that macrophages from CL patients exhibit an enhanced inflammatory profile, but are less able to kill Leishmania. [19][20][21] Therefore, it is likely that parasite survival and leishmania antigen derived from dead parasites stimulate the adaptive immune response, thereby enhancing the inflammatory reaction. The role of CD4 + and CD8 + T cells in the pathogenesis of L-CL is well documented. Although the T-cell response is important to prevent parasite dissemination, an exaggerated inflammatory response is associated with pathology. 22,23 We have previously shown a direct correlation between the frequency of CD4 + T cells expressing IFN and TNF, and CD4 + T cells expressing lymphocyte activation markers with the lesion size. 22,24 CD8 + T cells also play a role in the pathology. 4,25 Although there was no association between CD8 + T cells expressing granzyme and the area of inflammation in E-CL, there was a correlation between the frequency of CD8 + T cells expressing granzyme and the intensity of the inflammatory reaction in L-CL. 26 Moreover, although CD8 + T cells kill L. braziliensis-infected cells, they have an impairment in parasite killing. 4,5,23 The inflammatory reaction in both E-CL and L-CL is composed of CD68 + , CD4 + and CD8 + T cells as well as B cells. 
As T cells and macrophages are responsible for the granuloma formation, the role of these cells in the pathogenesis of CL has been well studied. In contrast, little emphasis is given for the role of B cells in the pathogenesis of L. braziliensis infection. B cells are found in high frequency in tissue of CL patients. 27,28 Here we showed that CD20 + B cells are also observed in E-CL and the correlation between the frequency of B cells and the inflammation area pointed out the need for future studies to determine the participation of antibodies in the control of the infection or in the pathology of CL. Although the area of inflammation was greater in L-CL than in E-CL biopsies, there was no difference between the area of necrosis in the two phases of the disease. The size of the lesion directly correlated with illness duration and similarly the area of inflammation was greater in L-CL than in E-CL. However, there was no correlation between inflammation and size of the lesions and there was also no correlation between the inflammatory and areas of necrosis. Necrosis seen in our study was small and focal in the majority of the biopsies. The pathogenic mechanisms leading to necrosis during CL are not well elucidated. Likely, this process is multifactorial, including vessel obliteration induced by vasculitis, 1,29 killing of macrophages and epithelial cells expressing Leishmania antigen, and tissue injury by the inflammatory response. [30][31][32] It is known that metalloproteinase (MMP) genes are highly expressed in the tissue of CL patients and that monocytes secrete high levels of MMP-9. 33-35 MMP expression by macrophages may explain our findings of a direct correlation between macrophages and the area of necrosis in CL throughout the disease. Additional studies should be performed to identify if a programmed necrosis by activity of protein kinase RIPK3 is occurring. 36 We recognize that the limited sample size in our study may have prevented a better correlation between some variables and inflammation or the area of necrosis. Although longitudinal studies using biopsies of the same patients in the two phases of the disease could help to better understand the dynamics of the immunopathology, this is not possible as patients are treated upon diagnosis. Despite a few limitations, our immunopathologic study comparing biopsies from patients with E-CL versus L-CL contributes to the understanding of host and parasite factors in the pathogenesis of L. braziliensis and emphasizes the participation of macrophages in the development of CL ulcers. We have previously shown that although macrophages from patients with CL have an impairment in Leishmania killing, they produce high levels of proinflammatory cytokines, such as TNF and the chemokines CXCL9 and CXCL10. 19 These molecules contribute to necrosis and cell recruitment to the site of infection. Monocytes and macrophages are heterogeneous subpopulations, with killing, inflammatory, and regulatory profiles. 34 Previous studies have shown a high frequency of monocytes with inflammatory profile in E-CL, 14 and there is a direct correlation between the frequency of monocytes expressing toll-like receptor 9 with ulcer size in CL. 37 Moreover, no production rather than protection is associated with pathology in L. braziliensis infection. 38 It is clear that monocytes and macrophages have also protective function killing intracellular pathogens. 
However, while Leishmania killing is mediated by classical monocytes, secretion of proinflammatory cytokine is produced mainly by the inflammatory monocytes. 3,14 Therefore, the increase in proinflammatory monocytes in CL and even in E-CL may explain the intense inflammatory reaction and parasite persistence. Because of the plasticity of monocyte population and limited numbers of cells obtained in the biopsies, studies on monocyte subsets in tissue are limited. However, our documentation that macrophages number correlates with the necrosis area and lesion size in E-CL indicates that in addition of CD4 + and CD8 + T cells, macrophages play a role in ulcer development in CL due to L. braziliensis. ATL is one of the best examples about the tenuous line that separate protection from pathology. Here, although we showed that parasite burden was not associated with inflammation and ulcer size, it was clear that pathology due to inflammation and necrosis occurred due to an attempt of the host to eliminate parasites. Different from many other infectious diseases in which early therapy is associated with a high rate of cure and acceleration of the healing time, patients with E-CL have a high rate of failure to antimony therapy in comparison with L-CL. The documentation of high number of amastigotes early in the infection and a progressive inflammatory reaction with illness duration indicate that in addition to parasite killing, a down modulation of the inflammatory reaction should be attempted in the treatment of patients with E-CL.
4,932.2
2017-01-23T00:00:00.000
[ "Biology", "Medicine" ]
Low energy constraints and scalar leptoquarks The presence of a colored weak doublet scalar state with mass below 1 TeV can provide an explanation of the observed branching ratios in B → D(∗)τντ decays. Constraints coming from Z → bb̄, muon g−2, lepton flavor violating decays are derived. The colored scalar is accommodated within 45 representation of SU(5) group of unification. We show that presence of color scalar can improve mass relations in the up-type quark sector mass. Impact of the colored scalar embedding in 45-dimensional representation of SU(5) on low-energy phenomenology is also presented. Introduction Recent experimental results coming from LHC and B-factories indicate almost an ideal agreement between measured theoretically predicted quantities within the Standard Model.Only few observables disagree on the level of few standard deviations.For example, experimentally observed enhancement of the branching ratios in the B → D(D * )τν τ decays with respect to the Standard Model (SM) predictions.Namely, BaBar collaboration has presented the following ratios [2]: These results are consistent with previous ones obtained by Belle [3] but higher than the SM values of R * ,SM τ/ = 0.252(3) and R SM τ/ = 0.296(16) with 3.4 σ significance when the two observables are combined (see Ref. [4]).Among many scenarios of new physics there are viable scenarios where Standard Model is extended by colored scalars.We investigate whether the presence of a light colored scalar state which is a color triplet weak doublet can explain observed discrepancy in R ( * ) τ/ [1].Due to its weak doublet nature it modifies relevant couplings in b → cτν τ transition and at the same time it affects Z → b b and lepton electromagnetic moments and → γ decays.We investigate systematically its impact on all these processes. 
Scalar leptoquark and B → D ( * ) τν τ decays The b → c ν transition can be mediated, among other possibilities (see references in [1]), by color triplet scalars (or vectors) with renormalizable leptoquark couplings to the SM fermions.These bosons can have electric charges of |Q| = 1/3 and |Q| = 2/3.The list of all states is given in Table 1 in [1].Among possible leptoquarks we also discard scalars (3, 1) −1/3 , singlet of S U(2) L weak interactions and (3, 3) −1/3 , triplet of S U(2) L weak interactions due to their proton destabilization.With this selection we are left with the weak doublet states, which have hypercharge 1/6 or 7/6.There are two states with the electric charge 2/3 and −1/3 (Q = I 3 + Y), with Y = 1/6.One of them might couple to the right handed neutrino and would not interfere with the SM amplitudes.In our approach we do not consider this state, since one has to elaborate on the origin of right handed neutrino.We are left with only one state Δ ≡ (3, 2) 7/6 .The leptoquark Δ interacts with the SM fermions in a following way: where we have used Δ = iτ 2 Δ * for the conjugated state.This Lagrangian is written in the weak basis and the transition to the mass basis splits Yukawa couplings of the weak doublets to two sets of couplings relevant for the upper and the lower doublet components.The coupling Y represents couplings between charged leptons and down-type quarks while Z connects up-type quarks with charged leptons.We choose basis in which masses of charged leptons and down-type quarks are diagonal and all relative rotations are assigned to neutrinos and up-type quarks and the transition to such basis is achieved by substituting ν L → V PMNS ν L and u L → V † CKM u L , where V PMNS and V CKM represent Pontecorvo-Maki-Nakagawa-Sakata (PMNS) and Cabibbo-Kobayashi-Maskawa (CKM) mixing matrices, respectively.The two components of colored scalar, i.e., Δ (2/3) and Δ (5/3) , then have following interactions with the fermion fields: Existing experimental results teach us that the flavor changing processes within first two generations of quarks and leptons are well fitted with CKM and PMNS parameters.Therefore, we only require nonzero coupling of Δ (2/3) to the third generation τb but not to bμ or be as indicated by the experimental results suggested by nonobservation of anomalies in b → c ν, with = e, μ.We also require that only c quark but not u or t couples to neutrinos.Therefore, we introduce the minimal set of couplings needed to explain the b → cτν branching fraction.These requirements yield The Δ (5/3) Yukawa couplings are related to the above ones through CKM and PMNS rotations that induce CKM-suppressed couplings of τ to up-type quarks and PMNS-rotated couplings of c quarks to charged leptons.We have where z2i are linear combinations of z 2 j with O(1) coefficients related to the PMNS matrix elements. One can, at this stage, regard null-entries in Y and Z to be sufficiently small numbers that can thus be neglected in subsequent analyses. 
EPJ Web of Conferences QCD@Work 2014 NP scenarios can be reduced to the effective Lagrangians in which either new vector/axial-vector and tensor currents, or (pseudo)scalar density operators are responsible for the measured discrepancy.There are additional observables that could help single out the class of NP operators preferred by the data [5].In all these analyses the effective operator contribution was included into decay amplitude on individual basis.The model of LQ mediation we consider here results in scalar/pseudoscalar and tensor contributions simultaneously.Namely, the relevant effective Hamiltonian for semileptonic b → c transition induced by the (3, 2) 7/6 state is where m Δ is the mass of the LQ component with charge |Q| = 2/3 and is also defined as a matching scale for the above Hamiltonian.(In what follows we assume that Δ (2/3) and Δ (5/3) are degenerate in mass.)This means that the appropriate Wilson coefficients of scalar and tensor operators, g S and g T , are uniquely determined and correlated.The above leptoquark effective Hamiltonian will affect semileptonic decays with the tau lepton, but contrary to the SM the final state neutrino is not necessarily a ντ .The most natural mechanism to enhance b → cτν is to have a constructive interference between the SM and the LQ amplitudes of b → cτν τ , whereas pure leptoquark contributions, producing νe and νμ are negligible.This implies that we employ a Hamiltonian that includes the SM as well as the LQ contribution to b → cτν τ decay where the scalar and tensor effective couplings are related to the underlying Yukawa couplings at the matching scale Hadronic (pseudo)scalar and tensor operators in Eq. ( 7) have anomalous dimensions in QCD and dependence of their matrix elements on the renormalization scale is canceled by the scale dependence of the Wilson coefficients at the leading logarithm approximation.The Wilson coefficients run to the beauty quark scale, i.e., μ = m b = 4.2 GeV, at which the matrix elements of hadronic currents are calculated.As presented in [1] the difference between running of g S and g T (g T (m LQ ) = 1/4 g S (m LQ )) modifies the matching scale relation to g T (m b ) 0.14 g S (m b ). For exclusive decay amplitudes for B → Dτν the hadronic matrix element of the vector current is conventionally parametrized by f + (q 2 ) and f 0 (q 2 ) form factors where p μ B,D are four vectors of momenta of B and D mesons and The presence of the tensor operator in Eq. ( 8) requires inclusion of an additional form factor f T (q 2 ), defined as The matrix element of a scalar operator is related to f 0 (q 2 ) form factor , where m b and m c are masses of b and c quarks in MS scheme at the scale μ = m b [6].Using these connections 00002-p.3 the differential branching ratio can be expressed by form factors f + (q 2 ), f 0 (q 2 ) and f T (q 2 ) as shown in Eq. (3.8) of [1].In our numerical calculations the ratio f T (q 2 )/ f + (q 2 ) = 1.03 (1) is evaluated in the model of Ref. [7].In the heavy quark limit this ratio is f T (q 2 )/ f + (q 2 ) = 1, as the form factors are equally related to the Isgur-Wise function, We employ following parametrization of vector form factors [8][9][10] where the constant R D and the new kinematic variable w are given as and r D denotes the ratio of masses of D and B mesons.Lattice estimate of the function Δ(w) is consistent with constant value Δ(w) = 0.46 ± 0.02 as described in [1]. 
The decay B → D * τν τ offers additional tests of the SM and NP due to the vector meson state D * [5].In the case of light lepton in the final state, one vector and two axial form factors are present in the decay amplitude.If τ is in the final state an additional form factor A 0 (q 2 ) appears.The mediation of the (3, 2) 7/6 leptoquark induces effective Lagrangian containing the tensor operator that requires knowledge of tensor form factors.Following notation of Ref. [5], the polarization four-vectors of the final state leptons and D * vector meson are denoted by ˜ μ (λ) and μ (λ D * ), respectively.Polarizations take the following values: λ = 0, ±, t and λ D * = ±, 0.Here t stands for the time-like polarization vector.Standard parametrization of vector and axial hadronic matrix elements for the B → D * transition is given by For the parametrization of the tensor hadronic matrix elements, we follow Ref. [11] and relations in heavy quark limit among tensor, vector and axial form factors as presented in [1].The branching ratio is calculated after integration over q 2 and angle θ l .We constrain the allowed values of tensor and scalar Wilson coefficients using BaBar's measurements of the ratios R τ/ = B(B → Dτν)/B(B → D ν) and R * τ/ = B(B → D * τν)/B(B → D * ν) as shown in Fig. 1, where the result of fit to both ratios is shown.We derive 1σ range for the Wilson coefficient g S at the low scale where we have assumed g S to be real in estimating the error bars.The coupling g S at the matching scale is rescaled by factor 0.64 with respect to the above value due to QCD corrections as explained in the text.We have compared the q 2 distributions of decay widths for B → Dτν and B → D * τν for our best fit point, g S = −0.37,against the experimental bin values presented in [2].The effect of leptoquark does not generate observable q 2 -dependent features.The existing level of experimental and hadronic uncertainties imply that there is no possibility to discern a signal of leptoquark from the SM contribution, based on observation of a single bin.The discrepancy becomes more pronounced after integrated over large range of q 2 . Constraints from Z → b b and lepton electromagnetic moments The same couplings appear in other observables.There is well known SM prediction -experimental result discrepancy in Z → b b (see e.g.[12]).Standard parameterization of the Zb b renormalizable coupling is adopted coupling is denoted by g, c W is the cosine of the Weinberg angle and P L,R = (1 ± γ 5 )/2 are the chiral projectors.At the SM tree-level, the couplings are g b0 L = −1/2+ s 2 W /3 and g b0 R = s 2 W /3. 
Higher-order electroweak corrections that are contained within g b L,R get largest contributions from top quark in loops.A recent electroweak fit that includes updated theoretical predictions and new results from LHC points to tensions in the Z → b b observables reaching above 2 σ significance in R b and A b FB [12].The shifts with respect to the SM values of couplings are then δg b L = 0.001±0.001and δg b R = (0.016±0.005)∪(−0.17±0.005).Δ (2/3) component possesses a possibly large coupling y 33 to bτ pair that contributes to Z → b b amplitude at order |y 33 | 2 and thus allows one to constrain it directly.The LQ correction to the left-handed coupling is where as explained in [1].For m Δ above 300 GeV large portion of preferred |y 33 | range lies within the nonperturbative regime.In order to maintain a predictable setup we assume that coupling y 33 is perturbative, i.e., |y 33 | < √ 4π.Additional constraints are available from a number of lepton electromagnetic moments appearing in γ vertex.The lepton vertex with electromagnetic field will be modified by penguin diagrams QCD@Work 2014 involving virtual exchanges of Δ (5/3) and charm quark.These contributions enter into muon g − 2. The muon anomalous moment gets shifted due to new contribution of Δ (5/3) in the loop.We determined the allowed 1 σ range from the condition χ 2 − χ 2 S M ≤ 1 that translates to a constraint |δa μ | < 10.9 × 10 −11 or put in terms of z22 (see Fig. 3 in [1]) 500GeV .The τ electric dipole moment arises at a loop level with the helicity flip on the virtual charm quark and consequently both penguin diagrams are finite (for details see [1]).At the moment, the best bounds from Belle experiment are orders of magnitude too weak to directly probe the parameter range, preferred by B → D ( * ) τν.We then found from bound |z 23 y 33 | < 4π that the upper bound on tau EDM in the perturbative setting is |d τ | < 2.6 × 10 −21 ecm. The additional constraints might be derived from the lepton flavor violating → γ decays The decay τ → γ is mediated by a loop diagram with Δ (5/3) scalar and a charm quark.The best experimental bounds on LFV radiative decays of τ were presented by BaBar collaboration in Ref. [13].Experimental bounds on branching ratios B(τ → eγ) < 3.3 × 10 −8 and B(τ → μγ) < 4.4 × 10 −8 (both at 90 % C.L.) severely constrain the combination of couplings coming from LQ contributions.The experimental upper limits for μ → eγ branching ratio are orders of magnitude more stringent and in conjuction with the small width of the muon they compensate for the m μ suppression in sensitivity.We rely on the latest result from the MEG experiment obtained from data collected in years 2009-2011 [14], B(μ → eγ) < 5.7 × 10 −13 , at 90 % C.L. . (3, 2) 7/6 in GUT framework The leptoquark Δ ≡ (3, 2) 7/6 can be naturally accommodated within a framework of matter unification (see references [53,54] in [1]).In our study we wanted to investigate whether a particular low-energy ansatz with Yukawa couplings is compatible with the idea of grand unification.Within this approach Δ is one of states within the 45-dimensional representation.Some of the Δ couplings to matter are proportional to Z.The S U(5) contractions relevant for these Yukawa couplings are (Y 1 ) i j 10 i 5 j 45 and (Y 3 ) i j 10 i 5 j 5 where 10 i and 5 i together comprise an entire generation of fermions.Y 1 and Y 3 are, at this stage, arbitrary 3 × 3 matrices in flavor space with i, j (= 1, 2, 3) being corresponding generation indices. 
We denoted unitary transformations of the down-type quark fields to be D L and D R , where subscripts L and R are related to appropriate chirality.These rotations take the down-type quark fields from a flavor into a mass eigenstate basis.For the up-type quark (charged lepton) sector we similarly used U L and U R (E L and E R ) to be appropriate unitary matrices.Our assumption for neutrinos was that they are Majorana particles and accordingly denote unitary matrix that defines the neutrino mass eigenstates with N. Following details given in [1], after we imposed the ansatz of Eq. ( 5) on possible Yukawa couplings (Y 1 ) i j 10 i 5 j 45 and (Y 3 ) i j 10 i 5 j 5 at the GUT scale, we found two relations that connect fermion mass matrices of down-type quarks and charged leptons with the original Yukawa couplings: Here, ) is a diagonal mass matrix for down-type quarks (charged leptons) and we take both vacuum expection value of representations 5 and 45 (v 5 and v 45 ) to be real.Note that the relations in question contain only the right-handed unitary transformations D R and E R .We proceed by identifying a connection between Y 1 and Z to be Y 1 = −U R Z. This, then, leads us to the following matrix equation We have checked numerically if our ansatz is compatible with the idea of S U(5) grand unification (the details of this work are presented in [1]).We found that numerically there exists a hierarchical relation among z21 , z22 and z23 that mimics mass hierarchy in the down-type quark and the charged lepton sectors with z23 being a dominant element: z21 : z22 : z23 = 0.024 : 0.32 : 1 . We point out that these results are completely independent from the up-type quark and the neutrino sectors, where CKM and PMNS mixing parameters reside, respectively.The hierarchy in the Z matrix, originating from quark-mass hierarchy, enables us to reduce the parameter set of the model to two independent Yukawa couplings.One of these has to be y 33 while we choose z22 , the coupling of Δ (5/3) scalar to cμ pair, to be the remaining free parameter.The rest of the couplings are determined, either from the hierarchical pattern (19) or using properties of PMNS matrix (5).With the help of the PMNS rotation connecting z2 j and z 2k couplings we deduced z 23 = z2k V k3 ≈ z22 c 13 (s 23 + 3.22 c 23 ).The numerical factor 3.22 comes from the hierarchy between z2k couplings.V i j , s i j and c i j denote the PMNS matrix elements and the mixing angles that parameterize it, respectively.Using the 3 σ ranges for mixing angles from a recent PMNS fit as described in [1] z 23 = ωz 22 , 2.63 < ω < 3.17 .Effect of the aforementioned experimental constraints on (3, 2) 7/6 with the minimal Yukawa texture (5), additionally restricted by the pattern of fermion masses, is shown on Fig. 2. As expected, the central role is played by the constraint on g S , although τ → μγ reduces the parameter space remarkably.Due to suppressed coupling to e, sensitivity of τ → eγ and μ → eγ observables is reduced, however, the latter overcomes this suppression by a very stringent experimental upper bound and therefore has the most important role next to constraint on g S .An order of magnitude improvement on the experimental bound on μ → eγ would cause tension with the R(D ( * ) ) observables, and smaller values of the g S coupling would be preferred.Note that only the 2 σ region (hatched in Fig. 
2) is overlapping with the region where y 33 is perturbative all the way to the GUT scale.We mention in passing that for mass 200GeV < m Δ < 1TeV all predictions are approximately invariant under rescaling of couplings and m Δ by same factor, as indicated on the axes of Fig. 2. The 2σ region at m Δ = 500 GeV in the y 33 − z22 plane implies the following bounds |y 33 | > 0.74, |z 22 | < 0.037.Region that satisfies perturbativity of the couplings all the way to the GUT scale, QCD@Work 2014 Figure 1 . Figure 1.Values of the scalar Wilson coefficient g S (m b ) (g T (m b ) 0.14 g S (m b )) consistent at 2σ with BaBar Collaboration's measurements of ratios R(D) (bright ring) and R(D * ) (darker ring).The 1σ (2σ) region, fitted to the two constraints, is doubly (singly) hatched. and we take m Z = 91.2GeV and s 2 W = 0.231.The shift δg b L might receive the imaginary part due to on-shell τ leptons.The constraint from the electroweak fit is sensitive to the interference term between the approximately real g b L and complex δg b L (y 33 ), and therefore only the real part of δg b L (y 33 ) enters the prediction.The constraint δg b L (y 33 ) = 0.001 ± 0.001 leads to Figure 2 . Figure 2. Constraints on the couplings to bτ (y 33 ) and to cμ (z 22 ) coming from the 1 σ region of R ( * )τ/ (thin hyperbolic region), 90 % CL upper bounds on μ → eγ, τ → μγ and τ → eγ.Dashed frame represents the region where couplings remain perturbative all the way to the GUT scale, as explained in the text.Doubly (singly) hatched area is allowed at 1σ (2σ).
4,926.6
2014-01-01T00:00:00.000
[ "Physics" ]
Edge Control in the Computer-Controlled Optical Surface The computer-controlled optical surface (CCOS) can process good optical surfaces, but its edge effect greatly affects its development and application range. In this paper, based on the two fundamental causes of the CCOS’s edge effect—namely the nonlinear variation of edge pressure and the unreachable edge removal—a combined polishing method of double-rotor polishing and spin-polishing is proposed. The model of the combined polishing method is established and theoretically analyzed. Combined with the advantages of double-rotor polishing and spin-polishing, the combined polishing process can achieve full-aperture machining without pressure change. Finally, the single-crystal silicon sample with a diameter of 100 mm is polished by the combined polishing process. The results show that, compared with the traditional CCOS polishing, the residual error of the sample after the combined polishing process is more convergent, and the edge effect is effectively controlled. Introduction Computer-controlled optical surfacing technology has been widely used for the ultraprecision machining of various optical materials and plays a pivotal role in this process. However, as with micro-milling, the further development of CCOS is severely limited by the edge effect in the processing [1][2][3]. The edge effect is mainly caused by two reasons: first, the edge area of the workpiece cannot be reached by the orbital motion of the polishing disc; and second, the non-linear variation of the pressure at the edge of the workpiece leads to the inaccuracy of the tool influence function (TIF) [4][5][6][7]. Many scholars have conducted in-depth studies to address these problems. Various TIF algorithm models have been proposed to simulate and calibrate the variations of actual TIF at the edges. Among the early representative theories are the linear pressure distribution model by Wagner [8] and the skin model by Luna-Aguilar [9]. A new edge pressure model is developed based on the results of finite element analysis. The basic pressure distribution can be calculated based on the surface shape of the polishing pad, a correction function is used to compensate for the errors caused by edge effects, and the edge TIF with different overhang rates can be accurately predicted [5,10]. Surveys such as that conducted by W. Song [11] have shown that the generalized spatial variable deconvolution algorithm can accurately calculate the dwell time, which can better control the actual removal amount so as to effectively suppress the edge error and improve the convergence rate. Based on the errors distribution on the workpiece, Yu et al. [12] developed a new tool running path, which not only reduces residual errors on the edges but also the total polishing time. On the other hand, a considerable amount of literature has been published on how to obtain an eccentric TIF. The purpose of these studies is to expand the scope of the actual processing as much as possible, even full-aperture processing. A polishing method based 2 of 11 on surface extension is proposed. By simulating the pressure distribution of the workpiece under different overhangs, the exact removal function under different overhangs is obtained, and the optimal parameters that can effectively suppress the edge effect are obtained [13]. 
Hongyu-Li established a novel edge-control technique based on "progressive" polishing technology, which obtained an accurate and stable edge tool influence function (TIF) and low residual surface errors [14,15]. In 2016, a new concept of the 'heterocercal' tool influence function (TIF) was developed by Haixing-Hu [16], which was generated from compound motion equipment. This type of TIF can better remove the edge area of the sample. In addition, it also has high removal efficiency and surface quality. In 2017, Hang-Du [17] reported an acentric tool influence function (A-TIF) was designed to suppress the rolled edge after CCOS polishing. It has been proven to be effective through experiments. The above-mentioned work largely suppressed the edge effect of CCOS polishing, but there are certain limitations to solving the two fundamental causes of the edge effect. In this paper, by combining the advantages of double-rotor polishing and spin-polishing, a combined polishing process is proposed. It aims to solve the two fundamental problems mentioned above simultaneously and provides a new way to control the edge effect of CCOS. Basic Polishing Theory Define the removal function, R (x, y), as the average amount of material removed per unit time by a tool that does not move. The basic principle of CCOS polishing is Preston's equation [18], the TIF of which can be calculated based on the equation of material removal, as shown in Equation (1). Here, ∆Z (x, y) is the total amount of material removed from the workpiece, P (x, y) is the pressure of the tool on the workpiece, V (x, y) is the relative velocity between the tool and the workpiece, and T is the dwell time. k 0 is the Preston coefficient, which is related to the processing temperature, polishing fluid, and other processing conditions. Figure 1a shows a schematic diagram of two velocity fields generated from orbital motion V 1 and spin motion V 2 . P is any point in the overlap area of the sample and tool. r 1 is the offset of tool center O 2 relative to rotation center O 1 and r 2 is the radius of the tool. ω 1 and ω 2 are the orbital angular velocity and spin angular velocity of the tool, respectively. The total velocity, V, can be expressed as Equation (2). ( Assuming f = ω 2 /ω 1 , e = r 2 /r 1 , combining Equations (1) and (2), the TIF of double rotor polishing, R 2 (x, y), can be expressed as [19]: According to Equation (3), a TIF simulation of the double-rotor polishing with a Gaussian-like shape is shown in Figure 1b. This type of TIF has high removal efficiency for the intermediate area and low removal efficiency for the edge area, which can provide good processing capability for polishable areas. In addition, different height values are indicated by different colors, while the numbers next to them indicate relative heights. According to Equation (3), a TIF simulation of the double-rotor polishing with a Gaussian-like shape is shown in Figure 1b. This type of TIF has high removal efficiency for the intermediate area and low removal efficiency for the edge area, which can provide good processing capability for polishable areas. In addition, different height values are indicated by different colors, while the numbers next to them indicate relative heights. Figure 2a shows the motion analysis of the spin-polishing, whose total speed V is equal to the tool's spin speed V2, as shown in Equation (4). Combining Equations (1) and (4), the TIF of spin-polishing can be obtained, as shown in Equation (5). 
Figure 2b shows the TIF simulation of the spin-polishing. This TIF is W-shaped, and it has high removal efficiency for the edge area and low removal efficiency for the intermediate area, which is contrary to the characteristics of double-rotor polishing. The TIF's cross-sectional profiles of the double-rotor polishing and spin-polishing are shown in Figure 3. As seen in the graph, it is clear that the distribution of the two TIFs' Figure 2a shows the motion analysis of the spin-polishing, whose total speed V is equal to the tool's spin speed V 2 , as shown in Equation (4). Combining Equations (1) and (4), the TIF of spin-polishing can be obtained, as shown in Equation (5). Figure 2b shows the TIF simulation of the spin-polishing. This TIF is W-shaped, and it has high removal efficiency for the edge area and low removal efficiency for the intermediate area, which is contrary to the characteristics of double-rotor polishing. (4) Gaussian-like shape is shown in Figure 1b. This type of TIF has high removal efficiency for the intermediate area and low removal efficiency for the edge area, which can provide good processing capability for polishable areas. In addition, different height values are indicated by different colors, while the numbers next to them indicate relative heights. Figure 2a shows the motion analysis of the spin-polishing, whose total speed V is equal to the tool's spin speed V2, as shown in Equation (4). Combining Equations (1) and (4), the TIF of spin-polishing can be obtained, as shown in Equation (5). Figure 2b shows the TIF simulation of the spin-polishing. This TIF is W-shaped, and it has high removal efficiency for the edge area and low removal efficiency for the intermediate area, which is contrary to the characteristics of double-rotor polishing. The TIF's cross-sectional profiles of the double-rotor polishing and spin-polishing are shown in Figure 3. As seen in the graph, it is clear that the distribution of the two TIFs' The TIF's cross-sectional profiles of the double-rotor polishing and spin-polishing are shown in Figure 3. As seen in the graph, it is clear that the distribution of the two TIFs' removal peaks is highly complementary. That is, where the double-rotor polishing removal is high the spin-polishing removal is low, and vice versa. H 1 and H 2 are the peak height removed by spin-polishing and double-rotor polishing, respectively. Theoretically, the removal rate at the center of the spin-polishing' TIF is 0. However, due to the effect of long-time pressure and the accuracy of machine movement, a very small removal amount, H 3 , occurs here. The effective radius R 1 of the spin-polishing TIF is equal to the tool's radius r 2 , whereas the double-rotor polishing's effective radius R 2 is the sum of the tool's radius r 2 and offset r 1 , as shown in Equation (6). removal peaks is highly complementary. That is, where the double-rotor polishing removal is high the spin-polishing removal is low, and vice versa. H1 and H2 are the peak height removed by spin-polishing and double-rotor polishing, respectively. Theoretically, the removal rate at the center of the spin-polishing' TIF is 0. However, due to the effect of long-time pressure and the accuracy of machine movement, a very small removal amount, H3, occurs here. The effective radius R1 of the spin-polishing TIF is equal to the tool's radius r2, whereas the double-rotor polishing's effective radius R2 is the sum of the tool's radius r2 and offset r1, as shown in Equation (6). 
Combined Polishing Method In the CCOS polishing process, the tool moves over the surface of the workpiece following a predetermined trajectory and stays at each arbitrary point for a certain time. The material removed by the tool in each area of the workpiece surface can be superimposed together to obtain the distribution function of the surface errors. In other words, the distribution function of the surface errors is equal to the convolution of the removal function and the dwell time, D (x, y), as shown in Equation (7). It is well known that the overhang of the tool will lead to a non-linear variation in pressure, which is an overwhelming factor causing edge effects. So, what would happen if there were no overhang of the tools? Although the width of the rolled edge is increased, the accuracy of CCOS polishing is also improved. Therefore, in order to eliminate the influence of nonlinear pressure and a collapsed edge, the combined polishing process is all based on no overhang of tools. The basic ideas of the combined polishing process are: (1) Minimizing the height of the rolled edge as much as possible when using non-overhanging double-rotor polishing; (2) Minimizing the width of the rolled edge using the minimal tool; (3) Reducing the height of the rolled edge with the commutative method of the spinpolishing; (4) Repairing of annular residual errors caused by spin-polishing using the double-rotor polishing method and finally obtaining a flawless surface. When CCOS is used to polish the workpiece, to a certain extent the rolled edge will inevitably occur, as shown in Figure 4a,b. Therefore, the combined polishing process proposed in this paper was used to solve this problem. Firstly, a large tool is used to polish the workpiece quickly and efficiently. At the same time, a safety factor K ∈ (0, 1) is introduced to control the height of the rolled edge, which can prevent over-processing. How- Combined Polishing Method In the CCOS polishing process, the tool moves over the surface of the workpiece following a predetermined trajectory and stays at each arbitrary point for a certain time. The material removed by the tool in each area of the workpiece surface can be superimposed together to obtain the distribution function of the surface errors. In other words, the distribution function of the surface errors is equal to the convolution of the removal function and the dwell time, D (x, y), as shown in Equation (7). It is well known that the overhang of the tool will lead to a non-linear variation in pressure, which is an overwhelming factor causing edge effects. So, what would happen if there were no overhang of the tools? Although the width of the rolled edge is increased, the accuracy of CCOS polishing is also improved. Therefore, in order to eliminate the influence of nonlinear pressure and a collapsed edge, the combined polishing process is all based on no overhang of tools. The basic ideas of the combined polishing process are: (1) Minimizing the height of the rolled edge as much as possible when using nonoverhanging double-rotor polishing; (2) Minimizing the width of the rolled edge using the minimal tool; (3) Reducing the height of the rolled edge with the commutative method of the spin-polishing; (4) Repairing of annular residual errors caused by spin-polishing using the double-rotor polishing method and finally obtaining a flawless surface. When CCOS is used to polish the workpiece, to a certain extent the rolled edge will inevitably occur, as shown in Figure 4a,b. 
Therefore, the combined polishing process proposed in this paper was used to solve this problem. Firstly, a large tool is used to polish the workpiece quickly and efficiently. At the same time, a safety factor K ∈ (0, 1) is introduced to control the height of the rolled edge, which can prevent over-processing. However, since there is no overhang of the tool during the polishing process, a large unmachined area W max emerged at the edge of the workpiece. Second, since the small tool has a small TIF, the width of the unmachined area at the edge of the workpiece can be reduced, as shown in Figure 4c,d. When the red area in the figure is removed by the small tool, the width of the edge unmachined area is reduced from W max to W min , which can eventually be reduced to less than 10 mm. It is worth noting that the effective area of the double-rotor polishing does not need to be particularly flat at this point. This can provide a processing allowance for subsequent spin-polishing to remove the height of the rolled edge. ever, since there is no overhang of the tool during the polishing process, a large unmachined area Wmax emerged at the edge of the workpiece. Second, since the small tool has a small TIF, the width of the unmachined area at the edge of the workpiece can be reduced, as shown in Figure 4c,d. When the red area in the figure is removed by the small tool, the width of the edge unmachined area is reduced from Wmax to Wmin, which can eventually be reduced to less than 10 mm. It is worth noting that the effective area of the double-rotor polishing does not need to be particularly flat at this point. This can provide a processing allowance for subsequent spin-polishing to remove the height of the rolled edge. After minimizing the width of the rolled edge, the most important thing is how to decrease the height of the rolled edge. The detailed method of using spin-polishing to remove the height of the rolled edges is shown in Figure 5. The core idea of this method is the equivalent replacement. Firstly, the number n of polishing discs of different sizes is determined based on the measured surface error distribution of the workpiece. Based on the basic principle that the height of the middle area after being removed is not lower than the lowest point A of the full aperture, the height of the rolled edge is divided into n segments that match the surface error of the workpiece, as shown in Figure 5a. d is the height from the highest point of rolled edge to the lowest point of full aperture, which is divided into n regions, such as S1, S2, …, Sn. Wi and Disci are the width and the polishing tool used of the corresponding area, respectively. Second, the distribution function of the surface error Hi (x, y) needs to be calculated exactly before each polishing. Hi (x, y) is the sum of the distribution function of Si and the distribution function of the other areas of the full aperture excluding Si. The distribution function of the total removal H (x, y) is then equal After minimizing the width of the rolled edge, the most important thing is how to decrease the height of the rolled edge. The detailed method of using spin-polishing to remove the height of the rolled edges is shown in Figure 5. The core idea of this method is the equivalent replacement. Firstly, the number n of polishing discs of different sizes is determined based on the measured surface error distribution of the workpiece. 
Based on the basic principle that the height of the middle area after being removed is not lower than the lowest point A of the full aperture, the height of the rolled edge is divided into n segments that match the surface error of the workpiece, as shown in Figure 5a. d is the height from the highest point of rolled edge to the lowest point of full aperture, which is divided into n regions, such as S 1 , S 2 , . . . , S n . W i and Disc i are the width and the polishing tool used of the corresponding area, respectively. Second, the distribution function of the surface error H i (x, y) needs to be calculated exactly before each polishing. H i (x, y) is the sum of the distribution function of S i and the distribution function of the other areas of the full aperture excluding S i . The distribution function of the total removal H (x, y) is then equal to the sum of the distribution functions for each polishing. Their relationships are shown in Equations (8) and (9). Combined with the removal efficiency of the removal function, the distribution of the residence time of each polish can be obtained. Here S i is the area of rolled edge to be removed in the i-th polishing, A i is the fullaperture zone before the i-th polishing. (A i -S i ) is the other regions of the full aperture excluding S i . H Si (x, y) is the distribution function of the surface material to be removed in S i for the i-th polishing, and H (Ai-Si) (x, y) is the distribution function of the surface material to be removed in (A i -S i ) for the i-th polishing. In addition, after the distribution function of the removal amount of the spin-polishing is determined, the rolled edges can be removed iteratively using the combined polishing method and finally eliminated, shown in Figure 5a-e. Assume that the initial surface profile is shown in Figure 5a. The area S 1 is removed in the first spin-polishing and the machined surface profile is obtained, as shown in Figure 5b. Similarly, the surface contours before and after the second polishing are shown in Figure 5c,d, respectively. Then, the rolled edge can be completely eliminated theoretically after n iterations, as shown in Figure 5e. After the elimination of the rolled edge, the intermediate area can be polished with precision by selecting a suitable size tool, and finally, a high-quality surface is obtained, as shown in Figure 5f. Moreover, in order to realize the assumptions of the combined polishing process and to improve the convergence rate of the surface errors, some parameters also need to be constrained, as shown in Equation (10). This ensures that the edge removal width is greater than the width of the rolled edge and that the middle equivalent removal zones do not overlap as much as possible. Here, W min is the width of the rolled edge after polishing with the smallest tool, W i is the width of the rolled edge to be removed by the i-th polishing. D i is the diameter of the tool used for the i-th polishing, D i + 1 is the diameter of the tool used for the (i + 1)-th polishing. (10) Figure 6 shows the whole flow chart of the combined polishing process. The first thing is to measure the initial surface error of the workpiece. Then, the choice of polishing process is determined by whether the surface error has a rolled edge or not. If it does, the spin-polishing is preferred to reduce the height of the rolled edge. Then the double-rotor polishing method without overhang is used to polish the middle area of the workpiece. 
Lastly, the polished surface is inspected and judged on whether it meets the requirements. On the other hand, if there is a collapsed edge, it is processed directly by double-rotor polishing. In conclusion, the combined polishing process not only avoids the nonlinear variation of edge pressure but also solves the problem of unreachable trajectory, which has an excellent effect on the control of edge effect. Edge Processing Experiment In order to verify the credibility and feasibility of the aforementioned combined po ishing process, a single crystal silicon sample was selected for the polishing experiments The specific experimental parameters are shown in Table 1. The detailed experiment is a follows: First, a polishing tool with a diameter of 30 mm was used for polishing, whic Edge Processing Experiment In order to verify the credibility and feasibility of the aforementioned combined polishing process, a single crystal silicon sample was selected for the polishing experiments. The specific experimental parameters are shown in Table 1. The detailed experiment is as follows: First, a polishing tool with a diameter of 30 mm was used for polishing, which could quickly remove material. Then, a polishing tool with a diameter of 10 mm was used to reduce the width of the rolled edge. After that, three sizes of polishing tools were used to reduce the height of the rolled edge. Finally, a 20 mm polishing disc was used for reshaping. Results and Discussion The initial surface error of the sample is 3.243λ PV, 0.849λ RMS and is shown in Figure 7a. Since the initial surface error is the collapsed edge, a double-rotor polishing method with a large tool was first chosen for polishing. Then, a figure accuracy of 0.882λ PV, 0.184λ RMS was obtained and is shown in Figure 7b. Compared to the initial surface error, the residual surface error after polishing is greatly converged. Moreover, it can be found that although the surface error in the middle region is converged, there are clear rolled edges appearing at the edges of the specimen. Then the double-rotor polishing method with the smallest tool was employed to reduce the width of the rolled edges. The surface error with 0.921λ PV, 0.142λ RMS was obtained, as shown in Figure 7c. Compared with the surface error in Figure 7b, the width of the rolled edges is significantly reduced. With this, the suppression of rolled edges' width in the combined polishing process is realized. According to the aforementioned combined polishing method, the spin-polishing method was selected to reduce the height of the rolled edge. The surface error after spinpolishing is a ring band of varying heights, as shown in Figure 7d. Its PV decreases from 0.921λ to 0.275λ, and RMS decreases from 0.142λ to 0.037λ, which obviously improves the surface quality of the sample. What is more noteworthy is that the height of the rolled edge is significantly reduced. After the combined polishing process, the sample with a surface accuracy of 0.148λ PV and 0.021λ RMS is finally obtained, as shown in Figure 7e. In addition, it is worth noting that a figure accuracy of 0.103λ PV, 0.010λ RMS can be obtained in 90% of the area after the combined polishing. Compared with the conventional CCOS polishing, such as Figure 7b,c, the edge effect of the sample is greatly weakened after the combined polishing process, and the surface quality is greatly improved. 
The results show that the aforementioned combined polishing process is of great significance for controlling the edge effect of CCOS polishing, which also verifies the effectiveness and practicality of the combined polishing process. The profiles at different stages of the combined polishing process are shown in Figure 8a. What is clear is that the profile after double-rotor polishing with a large tool has a larger width of the rolled edge W max , as the effective radius of the double-rotor polishing. In other words, when machining with a small tool, the width W max of the rolled edge will be gradually reduced until the minimum value W min . At this point, the first step of the combination polishing is completed, namely, reducing the width of the rolled edge. An amplified view of the dashed area in Figure 8a is shown in Figure 8b. From the profile after spin-polishing, the surface residual error is in line with the expectation of the combined polishing process. In this case, the height of the rolled edge is reduced by removing the height of the middle area simultaneously, as in regions A and B in Figure 8b. This shows that the method of reducing the height of the rolled edge by spin-polishing is feasible. However, special attention should be paid to the accurate calculation of the height to be removed for each polishing before processing. The best result is that the height of the intermediate region after polishing is equal to the minimum height of the initial surface, that is, h is 0. This helps to reduce the amount of subsequent processing and the convergence rate of surface errors. Comparing the surface profile before and after the combined polishing process, the results show that the combined polishing process is very effective in suppressing the edge effect of CCOS. Micromachines 2021, 12, 1154 9 of 11 polishing is a ring band of varying heights, as shown in Figure 7d. Its PV decreases from 0.921λ to 0.275λ, and RMS decreases from 0.142λ to 0.037λ, which obviously improves the surface quality of the sample. What is more noteworthy is that the height of the rolled edge is significantly reduced. After the combined polishing process, the sample with a surface accuracy of 0.148λ PV and 0.021λ RMS is finally obtained, as shown in Figure 7e. In addition, it is worth noting that a figure accuracy of 0.103λ PV, 0.010λ RMS can be obtained in 90% of the area after the combined polishing. Compared with the conventional CCOS polishing, such as Figure 7b,c, the edge effect of the sample is greatly weakened after the combined polishing process, and the surface quality is greatly improved. The results show that the aforementioned combined polishing process is of great significance for controlling the edge effect of CCOS polishing, which also verifies the effectiveness and practicality of the combined polishing process. The profiles at different stages of the combined polishing process are shown in Figure 8a. What is clear is that the profile after double-rotor polishing with a large tool has a larger width of the rolled edge Wmax, as the effective radius of the double-rotor polishing. feasible. However, special attention should be paid to the accurate calculation of the height to be removed for each polishing before processing. The best result is that the height of the intermediate region after polishing is equal to the minimum height of the initial surface, that is, h is 0. This helps to reduce the amount of subsequent processing and the convergence rate of surface errors. 
Conclusions

In summary, a combined polishing process was proposed for the edge effects that occur in CCOS, and a theoretical analysis and experimental validation of the process were carried out. During the combined polishing process, the pressure variation was limited and the edge area was effectively removed. Finally, the PV value of the surface error was reduced from 3.243λ to 0.148λ, which indicates that the combined polishing process has a good suppression effect on the edge effect.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available because they also form part of an ongoing study.

Conflicts of Interest: The authors declare no conflict of interest.
k‐Space‐based coil combination via geometric deep learning for reconstruction of non‐Cartesian MRSI data State‐of‐the‐art whole‐brain MRSI with spatial‐spectral encoding and multichannel acquisition generates huge amounts of data, which must be efficiently processed to stay within reasonable reconstruction times. Although coil combination significantly reduces the amount of data, currently it is performed in image space at the end of the reconstruction. This prolongs reconstruction times and increases RAM requirements. We propose an alternative k‐space‐based coil combination that uses geometric deep learning to combine MRSI data already in native non‐Cartesian k‐space.

| INTRODUCTION Magnetic resonance spectroscopic imaging (MRSI) is an imaging modality with which to map various biochemical compounds noninvasively and in vivo. This method can, for example, reveal important information about the neurochemical underpinnings of brain diseases, 1 which is unique compared with other imaging modalities. 2 The recent development of spatial-spectral encoding and multichannel acquisition allows whole-brain (wb)MRSI scans in clinically attractive times at high field 3,4 and ultrahigh field. 4,5 However, the sheer size of the acquired raw wbMRSI data makes the full integration of the MRSI reconstruction on the image-reconstruction computers of current MR scanners challenging, both with respect to computing power and RAM requirements. State-of-the-art wbMRSI acquisitions generate raw data of up to about 100 GB per 15 minutes, the postprocessing of which must be fast enough not to interfere with subsequent MRI sequences. 6 A particular problem is that the size of the raw data increases proportionally with the number of receive-coil elements, which has dramatically increased over the last two decades due to the striking benefits of fast MRI techniques. 7 The introduction of receive-coil arrays 8 has not only improved the SNR of MRI considerably, but has enabled the development of parallel imaging technologies, which are currently an indispensable part of all clinical MRI protocols. All of these benefits rely on a proper characterization of the receive fields (ie, sensitivity maps) of the individual coil elements and the consequent estimation of coil combination weights, which determine how to best combine the signals from the different elements in the most SNR-efficient and artifact-free manner. For proton (1H) MRSI, the most successful coil combination approaches do not use the actual water-suppressed data to derive coil combination weights but rather collect separate water-unsuppressed information, [9][10][11] which provides much more accurate coil combination weights. In recent years, deep-learning reconstruction of MRI has become a well-established field that uses a variety of techniques to perform reconstruction tasks. 12 Conventional MRI turns the signal of high-abundance water and lipid protons into high-SNR images. Thus, MRI acquisition can be accelerated by violating the Nyquist criterion in regular (parallel imaging) or in irregular (compressed sensing) k-space undersampling patterns, which is a trade-off between speed and SNR efficiency. To restore the full k-space or to reconstruct an artifact-free image, analytical methods were originally proposed and have been established as indispensable tools for routine MRI. [13][14][15] Thus, current efforts in deep learning-based MRI reconstruction focus on reconstructing undersampled MRI (primarily staying on the Cartesian grid).
Zhu et al 16 presented an approach in which k-space data acquired by any acquisition trajectory can be learned to be reconstructed into the image domain. Within MRSI postprocessing pipelines, the coil combination step is the most critical reconstruction step with respect to reducing the amount of data. Currently, the coil combination for MRI (including MRSI) is performed in the image domain. This has a significant disadvantage: the data of each receive-coil element must be processed separately. In other words, for a 64-channel receive coil, for example, the same reconstruction steps have to be repeated 64 times; likewise, the RAM requirements are accordingly higher. The combination of all MRI data can be performed only after all the raw data have been sampled and reconstructed (including the final 3D spatial Fourier transform). The obvious solution to lowering the hardware demands is to perform the coil combination as early as possible in the postprocessing pipeline, in native sensor space (ie, ungridded k-space). This would not only dramatically reduce the RAM requirements and make repetition of reconstruction steps obsolete, but also allow performing the major workload "on the fly" (in parallel with data acquisition). However, especially for non-Cartesian k-space sampling, this poses a significant challenge with regard to finding the right k-space convolution kernel. 17 In this study, we propose a k-space-based coil combination of non-Cartesian data through geometric deep learning. The output of the network is a multichannel non-Cartesian k-space, which after summation of all channels allows subsequent reconstruction through either conventional analytical methods or other deep-learning methods, such as end-to-end reconstruction. 16

| Coil combination The coil combination of MRSI data from multichannel coils, as defined in the image domain, can be described as follows:

S(r, t) = η(r) Σ_{n,m} w_m*(r) Ψ⁻¹_{n,m} S_n(r, t),    (1)

where S(r, t) is the coil-combined signal at position r; η(r) is a scaling constant at position r; S_n(r, t) is the signal of channel n at position r; Ψ_{n,m} is the noise correlation matrix between channels n and m; and w_m(r) is the complex weight of channel m at position r. The scaling factor η(r) can be defined as follows:

η(r) = [ Σ_{n,m} w_n*(r) Ψ⁻¹_{n,m} w_m(r) ]^(−1/2).    (2)

The noise correlation matrix Ψ_{n,m} is estimated from the sample noise matrix 18 and can be directly applied to all the acquired data (prewhitening of data). From now on, it is assumed that all data are prewhitened. By choosing w_m(r) as the sensitivity map of channel m, the SNR of S(r, t) is maximized. 8 For the MRSI data, the MUSICAL coil combination method 9 has been shown to provide a useful approximation, in which the coil combination weights are estimated from the first FID time point of water-unsuppressed MRSI data as follows:

w_m(r) ≈ S_m^water(r, t_1),    (3)

where S_m^water(r, t_1) is the first time point of the water-unsuppressed FID of channel m. Alternatively, this coil combination can also be defined in the k-space domain through the convolution theorem, as follows:

ℱ[S](k, t) = Σ_m ( ℱ[η w_m*] ⊛ ℱ[S_m] )(k, t).    (4)

Combining all the previous equations will yield

ℱ[S](k, t) = Σ_n ℱ[S_n(r, t)](k, t) ⊛ ℱ[η(r) S_n^water(r, t_1)*](k),    (5)

where ℱ[S_n(r, t)] is the acquired signal in the k-space domain and the second term inside the sum is the k-space of the approximation of the sensitivity maps. The second term can be replaced by the ESPIRiT version 19 to suppress anatomical contrast and thus provide a better estimate for the sensitivity maps. Equation 5 shows that the convolution is a natural operator for coil combination in the k-space domain.
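As an illustration of the image-domain combination of Equations 1-3, the following minimal sketch (not the authors' implementation; array shapes and names are assumptions) combines prewhitened multichannel data with MUSICAL-style weights taken from the first water-unsuppressed FID point.

```python
import numpy as np

def combine_coils(signal, weights):
    """signal:  (n_coils, nx, ny, nz, nt) prewhitened image-domain MRSI data.
       weights: (n_coils, nx, ny, nz) complex weights, e.g. the first FID point
                of the water-unsuppressed scan (MUSICAL-style approximation)."""
    # numerator: sum_m w_m*(r) S_m(r, t)
    num = np.einsum('cxyz,cxyzt->xyzt', np.conj(weights), signal)
    # scaling: with prewhitened data the noise matrix is the identity,
    # so eta(r) reduces to 1 / sqrt(sum_m |w_m(r)|^2)
    eta = 1.0 / np.sqrt(np.maximum(np.sum(np.abs(weights) ** 2, axis=0), 1e-20))
    return eta[..., None] * num
```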
A full convolution would be time-intensive, but because sensitivity maps are inherently smooth in spatial variation (ie, most of the information is concentrated in the k-space center), a truncated kernel can be used analogously to k-space-based parallel imaging reconstruction (eg, GRAPPA). The aforementioned convolution of multichannel data with a kernel is, in principle, possible, but it would be very challenging to handcraft the optimal kernel, and it would require prior gridding on a Cartesian k-space grid if non-Cartesian sampling trajectories (eg, spirals or concentric ring trajectories) have been used. Geometric deep learning can overcome both these challenges by (1) learning the optimal kernel rather than handcrafting it and (2) doing so for any arbitrary k-space geometry.

| Geometric deep learning Geometric deep learning 20 is a general extension of deep-learning methods to data represented as graphs or manifolds. Convolutional neural networks rely on convolution operators, which allow the extraction of relevant features from input data. 21 To define a convolution operator on a graph, let us first define the undirected and weighted graph as follows:

G = (V, ℰ, W),    (6)

where V = {1, …, n} defines the nodes of the graph; ℰ defines the edges between the nodes of the graph; and W = (w_ij) is the adjacency matrix, for which w_ij = 0 if nodes i and j are not connected by an edge. The spatial convolution on a graph 22 between the graph function (features of the vertices) f and the convolution parameter g can be defined as

(f ⋆ g)(x) = Σ_j g_j D_j(x) f,    (7)

where the patch operator D_j(x) acting over the graph function (features) f is defined as

D_j(x) f = Σ_{y ∈ N(x)} w_j(u(x, y)) f(y),    (8)

where u(x, y) are pseudo-coordinates of the nodes y in the neighborhood of node x, while the weighting function of the patch operator is defined as

w_j(u) = exp( −½ (u − μ_j)ᵀ Σ_j⁻¹ (u − μ_j) ),    (9)

where μ_j is the mean vector of the Gaussian kernel and Σ_j its covariance matrix. Both are learnable parameters. In the context of the coil combination of non-Cartesian MRSI data (represented as a graph [Equation 6] with the acquired signals of the particular coil elements represented as f), the convolution operator g of Equation 7, as presented previously, acts over the non-Cartesian data and provides a solution for how to approximate the coil combination in k-space without the need to grid the data to a Cartesian grid (Equation 5). Figure 1 illustrates how kernels are defined for concentric ring trajectories in a 2D non-Cartesian k-space, including all connected nodes.

| Experimental data All measurements were carried out on a 3T Prisma MR scanner with a 20-channel receive head/neck coil array (Siemens Healthineers; Erlangen, Germany). Twelve volunteers (7 males and 5 females) were included in this study. Internal review board approval and written, informed consent were obtained for all volunteers. For each volunteer, rapid non-water-suppressed MRSI was performed, with the head of the volunteers placed in 10 different positions inside the coil. This is necessary to ensure that the trained neural network generalizes well for all possible heads and head positions with respect to the coil. The volunteers were measured with the following short-TR FID-MRSI sequence parameters 23,24 : FOV = 220 × 220 × 200 mm³; volume of interest = 220 × 220 × 100 mm³; TR = 60 ms; acquisition delay = 1.3 ms; nominal flip angle = 16º; vector size = 16 complex points; spectral width = 1030 Hz; and acquisition time = 20 seconds. In-plane (kx, ky) encoding was performed by 16 concentric ring readouts using sine-modulated and cosine-modulated gradients. Through-plane (kz) encoding was achieved through 25 phase encodings.
The target reconstruction matrix was 32 × 32 × 25, with a resolution of 6.9 × 6.9 × 8.0 mm³, without violating the Nyquist criterion. An additional water-suppressed, long-TR FID-MRSI sequence was measured at the eleventh position with slightly modified parameters: TR = 470 ms; nominal flip angle = 50°; vector size = 360 points; and acquisition time = 3:11 minutes. All other scan parameters were identical. Additional MPRAGE 25 sequences with a nominal resolution of 1.1 × 1.1 × 1.0 mm³ were scanned within 2:38 minutes each, at two positions, for anatomical reference (ie, the first identical to the position of the first MRSI scans and the second identical to the eleventh MRSI scan).

Figure 1: Connectivity of a graph. A, The k-space coordinates in the kx and ky dimensions. Blue points represent the positions in k-space where data were acquired. The number of points is defined by the Nyquist criterion for the outermost ring, and the same number of points is acquired for the inner rings. B-D, The neighborhood of three sample nodes and the edges with the other nodes in the neighborhood at three different locations in k-space: a node from the outer part of the k-space (B), a node from the inner part of the k-space (C), and a node from the middle part of the k-space (D). Abbreviations: cCC, conventional coil combination; kspCC, k-space coil combination; tCho, total choline; tCr, total creatine; tNAA, total N-acetyl aspartate.

| Training and testing data preparation Training data were derived only from the short-TR MRSI data without water suppression. The fast Fourier transform in the partition direction was performed and the volume of interest was masked. Then, the data were transformed back to full k-space. These steps were consistently applied to all in vivo MRSI data. The non-water-suppressed MRSI data were reconstructed to the image space using an established processing pipeline, which included FOV off-center correction, off-resonance correction, gridding to the Cartesian grid, and Fourier transform in all three spatial dimensions. From these reconstructed data, phase offset maps (ie, the coil geometry-specific phase portion without the TE-dependent phase) were calculated based on Equation 10, where N is the number of time points, TE/dwt is the echo time expressed in units of the dwell time, and dwt is the dwell time. The phase offset maps should eliminate the dependency of the sensitivity maps' phase information on the B0 inhomogeneity, which helps the subsequent ESPIRiT algorithm to estimate the sensitivity maps. The freely available 2D-ESPIRiT code 19 was rewritten for 3D data, and the sensitivity maps were estimated according to Equation 11, in which M_1stFID(r) is the magnitude of the first FID point. The estimated sensitivity maps were applied to the reconstructed, non-water-suppressed MRSI data, which had been augmented with the following two steps: (1) a random phase was added, sampled from the uniform distribution <-π, π>; and (2) the magnitude of the MRSI data was randomly scaled, sampled from the uniform distribution <0, 1>. Those two steps were repeated twice for each data set. All data were then transformed back to non-Cartesian (kx, ky) k-space coordinates through the discrete Fourier transform. Then, the FFT was applied in the (kz) partition direction. In addition, a Hamming filter and an annihilation filter 26 were applied in the kx and ky dimensions. Using this procedure, pairs of non-Cartesian k-space training data were generated. Each instance consisted of only one partition encoding step and one FID point.
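A minimal sketch of the two augmentation steps just described, a random global phase drawn from U(−π, π) and a random magnitude scaling drawn from U(0, 1), each applied twice per data set; the function and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(mrsi, n_copies=2):
    """mrsi: complex image-domain MRSI array; returns n_copies augmented copies."""
    out = []
    for _ in range(n_copies):
        phase = rng.uniform(-np.pi, np.pi)   # random global phase from U(-pi, pi)
        scale = rng.uniform(0.0, 1.0)        # random magnitude scaling from U(0, 1)
        out.append(scale * mrsi * np.exp(1j * phase))
    return out
```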
Each pair consisted of (1) the input to the network, which was corrected by the FOV shift and the phase-roll shift, and (2) the desired output of the network, which was additionally corrected by the sensitivity maps. Thus, the desired output was defined as the k-space representation of the data right before the summation step of the coil combination. Training data were obtained only from 7 of 10 volunteers. The data of the remaining 3 volunteers were used for further analysis of the network performance. For testing, the long-TR MRSI data (with activated water suppression) of the eleventh head position were used. The preprocessing consisted only of the FOV shift, the off-resonance correction, and the spatial filters (same as for the training data).

| Neural network architecture The non-Cartesian data were represented as a graph. The nodes of the graph represent acquired points in k-space with unique coordinates. These k-space coordinates were normalized, and the Euclidean distances between all nodes were calculated. An edge between two nodes was created if the distance between those nodes was lower than 1.5 times the Nyquist criterion. The graph connectivity is depicted in Figure 1. Each edge was weighted by the inverse of the Euclidean distance between the two nodes. The complex values of the acquired signals were separated into real and imaginary parts and are represented as separate features of the nodes. Therefore, the number of features per node was twice the number of coil elements. A shallow graph neural network was designed in the Deep Graph Library, consisting only of two Gaussian mixture model convolution layers, 22 which are described in section 2. Gaussian mixture model convolution layers with 10 Gaussian kernels were chosen. Bias terms were not used for either layer, and a tanh activation was used after the first layer. The pseudo-coordinates of nodes in the neighborhood were defined in Euclidean coordinates. The number of input and output features for the whole network was equal, while the output of the first convolutional layer had only half of the input features. The input and desired output were also weighted by an annihilation filter. 26

| Network training The training and the subsequent inference of the network with the data were performed using a Tesla V100 GPU (Nvidia; Santa Clara, CA). The Deep Graph Library 27 was used with the PyTorch 28 DL framework. Training of the neural network was separated into two parts. First, the network was trained to perform the coil combination task without introducing any noise. Then, the network was fine-tuned by adding white noise to the input data to make the performance of the network more robust against noise. The first training consisted of 50 epochs with batch size = 10. The Adam optimizer was used with a learning rate of 10⁻⁴. The loss function was the mean squared error. The second training consisted of 30 epochs. The Adam optimizer was used with a learning rate of 10⁻⁴. The same loss function and batch size as in the first training were used. White Gaussian noise was simulated for each instance and epoch with a zero mean. The SD of the noise was randomly sampled from the uniform distribution <0, 1>, scaled by 0.0001 of the absolute maximum in the data set and by the number of the current epoch. The spatial filters (the Hamming and annihilation filter) were applied after the addition of noise.

| Evaluation To evaluate the performance of the network, water-suppressed long-TR MRSI data were used.
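The graph construction and the shallow two-layer network described above could look roughly as follows in the Deep Graph Library. This is a hedged sketch rather than the published code: the edge-building threshold, the pseudo-coordinate definition as relative k-space offsets, and the class and function names are assumptions based on the text.

```python
import numpy as np
import torch
import dgl
from dgl.nn.pytorch import GMMConv
from scipy.spatial import cKDTree

def build_kspace_graph(coords, nyquist_dk):
    """coords: (N, 2) normalized (kx, ky) sample positions of one partition."""
    tree = cKDTree(coords)
    pairs = np.array(list(tree.query_pairs(r=1.5 * nyquist_dk)))   # edge if d < 1.5 x Nyquist
    src = np.concatenate([pairs[:, 0], pairs[:, 1]])               # undirected -> both directions
    dst = np.concatenate([pairs[:, 1], pairs[:, 0]])
    g = dgl.graph((torch.tensor(src), torch.tensor(dst)), num_nodes=len(coords))
    # 2D pseudo-coordinates of each edge (relative offset of the neighbour in k-space)
    pseudo = torch.tensor(coords[dst] - coords[src], dtype=torch.float32)
    return g, pseudo

class KSpaceCoilCombiner(torch.nn.Module):
    """Two GMMConv layers, 10 Gaussian kernels, no bias, tanh after the first layer."""
    def __init__(self, n_channels, n_kernels=10):
        super().__init__()
        n_feat = 2 * n_channels                       # real + imaginary part per coil element
        self.conv1 = GMMConv(n_feat, n_feat // 2, dim=2, n_kernels=n_kernels, bias=False)
        self.conv2 = GMMConv(n_feat // 2, n_feat, dim=2, n_kernels=n_kernels, bias=False)

    def forward(self, g, feat, pseudo):
        h = torch.tanh(self.conv1(g, feat, pseudo))
        return self.conv2(g, h, pseudo)               # multichannel k-space before channel summation
```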
The analysis was carried out on 4 volunteers: (1) the first volunteer, whose water-unsuppressed MRSI data were included in the training data set, but not the water-suppressed MRSI; (2) the second volunteer, for whom no data were ever shown to the network; (3) the third volunteer, who was measured with seven head positions following a "head shaking" motion and five head positions following a "nodding" motion; and (4) the fourth volunteer, who was measured at the same position with nine averages. From these nine averages, three data sets were generated and evaluated to simulate different SNR regimes: one average (standard SNR), four averages (double SNR), and all nine averages (triple SNR). Again, data of the third and the fourth volunteer were never used during the training of the network. Directly after prediction, the annihilation filter was removed from the output data. The data were then summed across the channels and Hamming-filtered in the partition direction. The resulting single-channel data were then reconstructed into the image space using our standard reconstruction, 24 and spectra were fitted using established MRS quantification software (LCModel 29 ), with the basis set including 16 metabolites and a macromolecule baseline adapted for 3 T. 30 From the output of the LCModel, the ratio maps were calculated and the Cramér-Rao lower bounds of three main metabolite groups (tCr [total creatine], tNAA [N-acetylaspartate + N-acetylaspartyl glutamate], and tCho [total choline]) were directly obtained. The SNRs of the NAA and tCr resonances were estimated from the respective LCModel fits. For this, the maximum of each metabolite fit was divided by the SD of the noise from a metabolite-free spectral range (ie, approximately 7.68-6.52 ppm). Smooth baseline variations in this spectral range were estimated by a polynomial fit of the fourth order and subtracted. The residuum was analyzed for normality by a Kolmogorov-Smirnov test (α = 0.05) to identify possible remaining artifacts. The FWHM of tCr was calculated from the LCModel fit. The results of the k-space coil combination (kspCC) were compared with a conventional image-based coil combination (cCC) termed "iMUSICAL." 9,11 The SNR results of the methods were compared by Bland-Altman plots, 31 but with replacement of the more common absolute difference of the two compared values by their relative difference (in percentage). The mean of the two methods was taken as the denominator. 32 The absolute values of the SNR of NAA were compared by boxplot analysis. The SNR evaluation was performed separately for low-SNR (< 15) and high-SNR (≥ 15) data, as Bland-Altman plots indicated a systematic SNR-dependent performance difference. The results of the FWHM of tCr were compared by Bland-Altman plots in the same way as for the SNR analysis. The boxplots were computed for a quantitative analysis over the whole volume. The CRLBs of the three main metabolites (tNAA, tCr, and tCho) were compared between both methods using boxplots. The relative metabolite maps of tNAA/tCr and tCho/tCr were qualitatively compared between the two methods only for the volunteer whose data were not used during the training process. Sample spectra of the same volunteer were qualitatively compared between both methods.

| Spectral data quality The spatial distributions of the SNR values of two metabolites (NAA, Cr) were very similar for both methods in all three orthogonal projections (Figure 2).
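For concreteness, the modified Bland-Altman comparison described in the evaluation section (relative difference in percent, with the mean of the two methods as denominator) can be sketched as follows; the inputs are assumed to be matched voxel-wise SNR arrays, and the function name is illustrative.

```python
import numpy as np

def bland_altman_relative(snr_a, snr_b):
    """snr_a, snr_b: matched 1D arrays of SNR values from the two coil combinations."""
    mean = 0.5 * (snr_a + snr_b)
    rel_diff = 100.0 * (snr_a - snr_b) / mean      # relative difference in percent
    bias = rel_diff.mean()                         # systematic offset between methods
    loa = 1.96 * rel_diff.std(ddof=1)              # limits of agreement (+/- 1.96 SD)
    return mean, rel_diff, bias, loa
```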
Quantitative results for the SNR (NAA) of the 2 volunteers indicated that cCC outperformed kspCC overall, but this SNR difference between the methods was driven by the high-SNR regions (Figure 3A,B). The Bland-Altman plots show that the difference in the SNR values of the two methods is not homogeneous across the whole range of SNR values (Figure 3C,D). Although there was an overall higher SNR for cCC (median = 29, interquartile range [IQR] = 20-36) over the whole volume of volunteer 5 than for kspCC (median = 22, IQR = 15-28), no difference was found for the low-SNR regime (cCC: median = 9, IQR = 7-12; kspCC: median = 10, IQR = 7-14). The same pattern was observed for volunteer 10. Again, the overall SNR was higher for cCC (median = 24, IQR = 16-29) than for kspCC (median = 19, IQR = 13-27), but there was no difference in the low-SNR regime (cCC: median = 12, IQR = 10-14; kspCC: median = 12, IQR = 9-14). The Bland-Altman plots showed a systematic offset for both volunteers (volunteer 5: mean = −20% with 1.96*SD of 90%; volunteer 10: mean = −15% with 1.96*SD of 90%) in favor of cCC, which was driven by a larger population of high (> 20) SNR values. A correlation analysis showed that this SNR difference increased linearly for higher SNR values. The FWHM maps of tCr, depicted in all three orthogonal projections at the same locations as for the SNR maps, were likewise very similar for both methods.

| Spectral fitting quality Qualitatively, the spatial distributions of the CRLB values of the three metabolites (ie, tCho, tNAA, and tCr) for both methods were similar (Figure 5); however, consistent with the observed SNR differences, the CRLBs of all three metabolite groups were lower for cCC than for kspCC (Figure 6): The median CRLB of tNAA of volunteer 5 for kspCC was 6% with an IQR of 5%-10%, whereas, for cCC, the median was 5% with an IQR of 4%-8%. The same pattern was observed in the results of volunteer 10: For kspCC, the median CRLB of tNAA was 7% with an IQR of 5%-11%, and for cCC, the median CRLB of tNAA was 6% with an IQR of 5%-9%. The median CRLB of tCho for volunteer 5 for kspCC was 13% with an IQR of 10%-17%, and for cCC, it was 9% with an IQR of 8%-13%. Again, the same pattern was observed for volunteer 10: For kspCC, the median CRLB of tCho was 12% with an IQR of 10%-17%, and for cCC, the median CRLB of tCho was 10% with an IQR of 8%-15%. Qualitatively, the contrast of the metabolic ratio maps from both methods, kspCC and cCC, was similar in all three orthogonal projections (Figure 7). Sample spectra of both methods were similar except for differences in the baseline (eg, volunteer 10) (Figure 8).

Figure 6: Boxplots of the CRLBs of the three main metabolite groups (tNAA, tCr, and tCho). Results of 2 volunteers: a volunteer whose data were included in the training data (A) and a volunteer from whom no data were shown to the neural network during training (B), for the two coil combination methods.

Figure 7: Comparison of ratio maps, tNAA/tCr and tCho/tCr, obtained through cCC and kspCC. The ratio maps of volunteer 10 are presented.

| Stability of the network performance against extreme head positions The performance of the network was robust against position changes (ie, head rotations) over the entire range possible within the limitations imposed by the coil dimensions. This included "head shaking" motion over a range of −17 to +18° relative to a neutral initial position (Figure 9; left column). Metabolic ratio maps were similar for both methods. The CRLBs of tNAA for cCC were the same for all positions.
For kspCC, the CRLBs of tNAA increase by +1%. The performance of both methods for positions following the "nodding" motion pattern was also stable over a range of −4 to +10° (the CRLBs of tNAA for cCC remained unchanged; the CRLBs for kspCC increased by 1%) with no major change of the metabolic ratio maps (Figure 9, right column). However, B0-shimming errors due to strong motion resulted in consistently worse data quality in the frontal lobe for both approaches.

| Performance of the network in various SNR regimes The quality of metabolic ratio maps increased linearly with the number of averages for both methods (Figure 10).

| DISCUSSION This paper presents the first proof-of-principle of a k-space-based coil combination method. The major benefit of this method is that the signals from all coil elements can be combined in any native sensor space (eg, non-Cartesian k-space) without the need to perform a Fourier transformation to the image space beforehand. This shifts the major reconstruction workload to the sequence run-time and eliminates the need for excessive RAM requirements. Ultimately, this makes fully reconstructed MRSI or MRI data available immediately after the acquisition is finished. Lately, many deep learning-based reconstruction methods were proposed for MRI, 12 focused primarily on reconstruction of undersampled Cartesian k-space MRI data by restoring the missing k-space points or by correcting the image artifacts, which are caused by the violation of the Nyquist criterion. The acceleration of MRSI acquisition through undersampling of the k-space is possible, 33 but the acceleration through spatial-spectral encoding 34-37 is more efficient in terms of speed and can yield higher SNR, thus enabling high-resolution wbMRSI in reasonable time frames.

Figure 8: Representative sample spectra of volunteer 10 chosen from two locations. Both locations are marked on the anatomical reference. The spectra are shown on the right with those originating from the same spot next to each other. The spectra in the same column were processed with the same coil combination.

Spatial-spectral encoding can still be combined with parallel imaging, making the reconstruction even more complex, but this is recommended rather as an add-on. 11,38 A disadvantage of spatial-spectral encoding is the amount of generated raw data, especially if non-Cartesian spatial-spectral encoding techniques are used. For high-spatial and spectral-resolution MRSI, these can be more SNR-efficient and time-efficient than their Cartesian counterparts, but have the disadvantage of an even more complex reconstruction. Furthermore, the amount of raw data is linearly proportional to the number of receive coil channels. Currently, the use of 64-channel coil arrays and beyond is becoming a new standard in the detection of human brain pathology. 7 Conventional neural networks assume a regular/equidistant/grid-like structure of the data or such a representation (eg, non-Cartesian acquisition trajectories can be represented as a vector 16 ). Our coil combination approach uses geometric deep-learning methods. Geometric deep learning offers the generalization of deep-learning methods to data represented as a graph, in which the structure of the data is also an input variable. Such methods allow input of data with various structures (in principle, one network could potentially process MRSI data acquired with different non-Cartesian trajectories). In our case, the coil combination task is performed over non-Cartesian data.
The MRSI data were neither gridded on the Cartesian grid nor transformed to any other domain. The k-space coil combination network was able to combine the MRSI data of volunteers, one of whose data were never shown during the training. This is in contrast to currently used coil combination methods in the MRSI field, which are performed exclusively in the image domain and rely on estimation of sensitivity maps or similar external or internal prescans 9,39,40 or are data-driven, such as whitened singular value decomposition. 41 Such generalization was achieved by the acquisition of water-unsuppressed data at different head positions of different volunteers inside the coil, which formed the basis for the simulation of training data. To the best of our knowledge, this is the first published coil-combination method in the k-space domain for MRI or MRSI. The inference of the network per partition (all FID points) took about 2 seconds. Each partition consisted of 16 concentric rings, thus resulting in an inference time of about 125 ms per TR, which is approximately one-fourth of the TR used for acquisition (TR = 470 ms). Although this is sufficiently fast for possible online reconstruction, further optimization will be important to allow rapid online signal combination for even shorter TRs. 5,42

Figure 9: Top section: Metabolic ratio maps of tCho/tCr and tNAA/tCr of the same slice for seven different positions following "head shaking" motion (on the left) and five different positions following "nodding" motion (on the right). Bottom section: Boxplots of CRLBs of tNAA for kspCC and cCC for the positions from the top section.

The results of the SNR analysis suggest that the SNR over the whole brain is overall lower when using kspCC compared with cCC. The spatial distribution of SNR values was very similar for both methods, as well as for a metabolite close to the problematic lipid region in proton spectra (eg, NAA) and for a metabolite more distant from the lipid region (eg, tCr). Thus, extracranial lipid artifacts can be ruled out as a source of error in the SNR assessment. 43 A more detailed analysis revealed that there is no noticeable SNR difference for low SNR values, but there is a pronounced difference between kspCC and cCC in the high-SNR regions. However, the analysis of data with a different number of averages revealed a similar increase of SNR for both methods with increasing number of averages. Those two results suggest a more complex causality between the level of noise and the performance of the network. Fortunately, typical wbMRSI sequences are optimized for high spatial resolution with just enough SNR to guarantee adequate spectral quantification. As long as the SNR is good enough in the worst brain regions, an SNR loss in the best regions is acceptable. The lower SNR difference in the low-SNR regions could have been the result of the two-step training of the neural network. This pattern was consistent across volunteers, regardless of whether the network had ever seen any kind of data from the particular volunteer. Further investigation of this unexpected effect will be necessary. The results of the FWHM analysis showed that there was no difference between the two coil combination methods with respect to spectral linewidth. The spatial distribution on the FWHM maps, the boxplots, as well as the Bland-Altman plots, suggested a very similar performance. The CRLB values are known to be affected by spectral SNR.
44 Thus, the results of the CRLBs of the main metabolite groups (tNAA, tCr, tCho) showed a similar discrepancy for low CRLB values for kspCC. This trend could be observed in the CRLB maps and was directly visible through the boxplot analysis. The CRLB of each metabolite represents the quality of fit (ie, precision) and depends not only on the SNR, but also on the spectral overlap of neighboring resonances and the presence of spectral artifacts. A common practice in automatic quality assurance is to threshold the volume with a reasonable CRLB (eg, metabolite quantification with CRLBs ≤ 20% or, in some cases, ≤ 40% is considered acceptable). Thus, an increase in very low CRLB values does not constitute a major problem for further analysis. The final results of MRSI quantification included metabolite ratio maps. Our results showed that the spatial distributions of tNAA/tCr and tCho/tCr were very similar in all orthogonal slices for both coil combination methods. These results suggest a similar performance and the same information content, no matter which coil combination method was used. This work describes the first proof-of-principle of a coil combination method in the k-space domain. Conventional kernel-based k-space coil combination, similar to the gridding algorithm, may introduce artifacts in the image domain, as described by Lustig et al. 45 However, the task performed by a neural network is defined by its training data set, which could include ways to correct for coil sensitivity profiles as well as any other required correction terms. A thorough investigation of such effects is still required. Further optimization of the network architecture and generalization to different coils and MR scanners will be necessary. Currently, our neural network is limited to only one spatial resolution and a fixed FOV. Therefore, only a network architecture that consisted of two convolutional layers was necessary. We expect that, with increasing variety of input data (such as different FOV or spatial resolution), the complexity of the network would need to be increased (such as by adding more layers). The network was designed for a particular coil with a fixed number of channels and geometry. The number of channels defines the number of features per node. However, not all channels are always active during acquisition. The performance of the network in such cases needs to be further investigated. The graph representation of the MRSI data was very simple and driven by the MRSI physics (ie, definition of the node neighborhood according to the Nyquist criterion). Further optimization of the graph representation is possible. The time dimension of the MRSI data along the FID evolution was not taken into account. Currently, each single FID time point is processed independently, but the character of MRSI data suggests that the information of one time point is related to the information present in earlier, as well as later, time points. This information could be exploited to further improve the reconstruction (eg, SNR) in the future. A fairly small number of volunteers (n = 12) was recruited. Moreover, the sensitivity profiles required to coil-combine the MRSI data were learned from only 10 different head positions per volunteer inside the coil. The remaining data were created through data augmentation. More training data with a higher variability may improve these results.
After further improvement of the k-space coil combination, a reproducibility study and a thorough comparison against a larger number of available coil combination methods need to be performed. Such studies would clarify the benefit of the proposed method against the conventional methods. 9,41,46 The CRLB is specific to an unbiased estimator, 47 which cannot be said of deep-learning methods. Further evaluation of the precision and the accuracy of the method needs to be performed in a setting in which the ground truth is known.

| CONCLUSIONS This paper presents the preliminary results of a new coil combination approach for MRSI data, which is performed in native non-Cartesian space before gridding onto Cartesian coordinates. Geometric deep learning was used to solve this reconstruction step. Considering the preliminary nature of our study, the results are highly promising and open up the possibility of moving many postprocessing steps into the raw sensor space (eg, undersampled k-space reconstruction), which would ultimately lead to rapid online reconstruction for clinically attractive wbMRSI.

DATA AVAILABILITY STATEMENT The main part of the code will be available on GitHub (https://github.com/stanomot/kSpace-coil-combination-of-nonCartesian-MRSI-data). For more detailed questions, the readers are encouraged to contact the authors.
HyperGal: hyperspectral scene modeling for supernova typing with the Integral Field Spectrograph SEDmachine Recent developments in time domain astronomy, like the Zwicky Transient Facility, have made possible a daily scan of the entire visible sky, leading to the discovery of hundreds of new transients every night. Among them, 10 to 15 are supernovae (SNe), which have to be classified prior to cosmological use. The Spectral Energy Distribution machine (SEDm), a low resolution Integral Field Spectrograph, has been designed, built, and operated to spectroscopically classify targets detected by the ZTF main camera. The current Pysedm pipeline is limited by contamination when the transient is too close to its host galaxy core; this can lead to an incorrect typing and ultimately bias the cosmological analyses, and affect the SN sample homogeneity in terms of local environment properties. We present a new scene modeler to extract the transient spectrum from its structured background, aiming at improving the typing efficiency of the SEDm. HyperGal is a fully chromatic scene modeler, which uses pre-transient photometric images to generate a hyperspectral model of the host galaxy; it is based on the CIGALE SED fitter used as a physically-motivated spectral interpolator. The galaxy model, complemented by a point source and a diffuse background component, is projected onto the SEDm spectro-spatial observation space and adjusted to observations. The full procedure is validated on 5000 simulated cubes. We introduce the contrast as the transient-to-total flux ratio at the SN location. From the estimated contrast distribution of real SEDm observations, we show that HyperGal correctly classifies ~95% of SNe Ia. Compared to the standard extraction method, HyperGal correctly classifies 10% more SNe Ia. The false positive rate is less than 2%, half that of the standard extraction method. Assuming a similar contrast distribution for core-collapse SNe, HyperGal classifies 14% (11%) additional SNe II (Ibc).

Introduction In the last two decades, time-domain astronomy has become increasingly efficient, thanks to the ability of surveys to conduct (near-)daily scans of the entire visible sky, such as the Catalina Real-Time Transient Survey (Drake et al. 2009), PanSTARRS-1 (Kaiser et al. 2002), ASAS-SN (Shappee et al. 2014), and ATLAS (Tonry et al. 2018). A more recent survey is the Zwicky Transient Facility (ZTF, Bellm et al. 2019; Graham et al. 2019), a successor of the Palomar Transient Facility (Law et al. 2009), using a 47 deg² camera. With such equipment, ZTF can detect O(10²) transients of interest every night, with instrumental artifacts and previously known sources excluded, and a typical 5σ r-band AB magnitude limit of 20.5. Among these newly identified sources, 10-15 are new objects that have just appeared and have become bright enough to be detected. Once the photometric detection is triggered, ZTF relays the alert to the Spectral Energy Distribution Machine (SEDM, Blagorodnova et al. 2018), an integral field spectrograph (IFS), designed and built to spectroscopically classify transients brighter than ∼19.5 mag, operating on the Palomar 60-in. telescope. The core of the SEDm is a Micro-Lenslet Array (MLA) covering 28′′ × 28′′, subdivided into 52 × 45 hexagonal spaxels, combined with a multi-band (ugri) field acquisition camera used for positioning and guiding. Currently, the automated pipeline routinely used for IFS data reduction and supernova (SN) spectrum extraction is pysedm (Rigault et al.
2019). Since this pipeline intrinsically assumes the target is an isolated point source, it cannot properly handle the situation where the transient is close to its host galaxy core. As a matter of fact, since August 2018, some 30% of the observed SNe exhibit severe host contamination, which significantly decreases the confidence level of the classification, and about 10% are simply unusable. This situation has various undesirable effects. From a mere statistical point of view, discarding SNe with too strong a host contamination reduces the type Ia SN (SN Ia) sample by 10-20%, which weakens the strength of the Hubble diagram anchor at low redshift. Furthermore, the wrong classification of SNe Ia could induce a significant bias in the cosmological analysis (e.g., Jones et al. 2017). Finally, a more subtle effect is related to the galactic environment bias, which would be caused by extracting host-contaminated SNe (Rigault et al. 2013). In recent years, numerous studies have shown that the SN Ia standardized luminosity is tightly correlated with environmental properties. Rigault et al. (2015, 2020) showed that, after standardization for light curve shape and color, SNe Ia characterized by a high local specific star formation rate (lsSFR) are fainter by 0.16 ± 0.03 mag. Other tracers, such as host galaxy stellar mass (Kelly et al. 2010; Sullivan et al. 2010; Childress et al. 2013; Betoule et al. 2014) or simply host morphology (Pruzhinskaya et al. 2020), find the same correlation between SN Ia luminosity and environment. Recently, Briday et al. (2022) showed that all these tracers are compatible with two SN Ia populations differing in standardized magnitude by at least 0.12 ± 0.01 mag. Some developments have been made to improve the robustness of the point source extraction by estimating the faintest isomagnitude contour separating the galaxy and the SN (Kim et al. 2022); however, this is not yet an optimal solution in the most problematic situations; for instance, when the SN is faint or located near the host core, it only brings a marginal 1.7% improvement in classification accuracy over the standard pysedm analysis. We might consider handling the host contamination by interpolating the galaxy area under the transient from the external parts in the field of view (FoV). Unfortunately, there are several reasons for not using such a method, beyond the mere signal-to-noise issue. First, there is the seeing, which makes the SN spread over the galaxy structure: as much as the host light is contaminating the SN flux, the reverse is also true, and it is not clear how far from the SN position we would consider the galaxy flux to be free of the point source signal. Furthermore, the host spatial structure under the SN extent (linear, concave, or convex) is not known a priori, especially in a strongly structured region such as the galaxy core, which would prevent a clean and robust interpolation. Finally, an interpolation would assume that the host spectral features are spatially uniform under the SN extent, which again is usually not the case, especially close to the galaxy core.
In order to improve the final SN Ia sample in as many ways as possible, we look to HyperGal, a scene modeler specifically designed to handle the strong host contamination case through a detailed hyperspectral galaxy modeling, complemented by a smooth background component and a point-source transient. The algorithm concept is based on two ideas: first, public multi-band wide photometric surveys can provide reference information on the host galaxy before the transient event; second, the required host galaxy cube (two spatial dimensions and one spectral one) can be estimated from pure photometric observations using a dedicated SED fitter as a physically motivated spectral interpolator. The resulting hyperspectral host model can then be projected into the observable space of the SEDM, taking into account all observational effects: the relative geometry between the photometric pixels (px) and the IFS spaxels (spx), the spatial (point spread function, PSF) and spectral (line spread function, LSF) impulse response functions (IRF) of the SEDM, atmospheric differential refraction (ADR), sky background, and additional diffused light. In Sect. 2, we describe the HyperGal pipeline, and the validation tests on realistic simulations are presented in Sect. 3 to estimate the accuracy of the SN extraction as well as the SN typing itself, since this is what the SEDM is designed for. We also show the improvement with respect to an isolated source extractor such as pysedm. A discussion of the relevant hypotheses and possible future improvements is given in Sect. 4.

HyperGal pipeline This section presents the different processing steps from the required input to the transient spectrum extraction (Fig. 1). The supernova ZTF20aamifit, at a redshift of z = 0.045 as measured from the strong Hα line in the host spectrum, is systematically used here for illustration. It was observed with the SEDM on February 17, 2020, at airmass 1.7 in poor seeing conditions (2.′′4 FWHM). It is ∼2.′′8 away from its host galaxy core, close enough to its host core not to be considered isolated (see Fig. 2).

Inputs Three main inputs are necessary to run HyperGal: the SEDM cube to be analysed, the archival photometric thumbnails, and the redshift of the target. The SEDM IFS (x, y, λ) cube of the scene is built from the 2D raw spectroscopic exposures with pysedm (Rigault et al. 2019, Sect. 2). It includes all the components, such as the transient point source, the spatially and spectrally structured host galaxy, the night sky background, and the spatially smooth diffused light, to be handled by the scene modeler (Fig. 2). The archival multi-band photometric images of the transient environment, acquired before the SN explosion, are obtained from the PanSTARRS-1 (PS1) 3π Steradian survey (Chambers et al. 2016) in all grizy bands and queried at the SN location through the Image Cutout Server. In particular, PS1 was chosen for its sky coverage compatible with ZTF (north of declination −30 deg). Figure 3 shows an RGB image of the ZTF20aamifit host galaxy through the PS1 grz bands. An analysis of spatially structured scenes (harboring three or more well-resolved objects in the SEDM FoV) provides a precise estimation of the SEDM-to-PS1 pixel size ratio, 2.230 ± 0.003, which, for a PS1 px scale of 0.′′25, corresponds to an effective SEDM spaxel size of 0.′′558. Once measured, this SEDM scale is fixed in the pipeline. To save computation time for the SED fit and the spatial projection step, PS1 images were first spatially rebinned by 2 × 2.
The third input is the host galaxy redshift, required by the SED-based interpolation of the photometric images. Around 50% of the targets observed by the SEDM have a host galaxy spectroscopic redshift known beforehand (Fremling et al. 2020); for the others, a redshift is a priori estimated from a preliminary transient spectrum extraction, using the transient spectral features and the possible presence of emission lines from the host galaxy. While it would be theoretically possible to assess the host redshift directly during the scene modeling, we have not implemented this feature yet (see Sect. 4). Furthermore, the consequences of an inaccurate input redshift are not considered in this analysis.

SED fit The SED fit aims to generate an effective hyperspectral, namely a full 3D (x, y, λ), host model from the grizy PS1 broadband images. During the process, each photometric pixel is treated independently, so that the resulting spaxel in the output cube gets its own spectrum. At the end of this process, this cube is still independent of the SEDM observation details (impulse responses, atmospheric effects, etc.). It is important to note that the SED fitter is not used here to derive accurate and spatially resolved physical parameters for the host galaxy, but rather to build a physically plausible spectral interpolation compatible with the broadband archival images. The software used for this step is cigale (Burgarella et al. 2005; Noll et al. 2009; Boquien et al. 2019). It is based on a progressive computation, successively using modules each describing a single component of the SED. The set of all parameters tested by cigale is shown in Table 1.

Star formation history and population The time-evolution of the star formation rate (SFR) is described by the star formation history (SFH) through the sfhdelayed module. Our SFH scenario includes two components, a delayed SFR and a late burst:

SFR(t) = SFR_delayed(t) + SFR_burst(t).    (1)

Both terms involve a decreasing exponential:

SFR_delayed(t) ∝ t e^(−t/τ_main),    (2)

SFR_burst(t) ∝ e^(−(t−t0)/τ_burst) for t > t0, 0 otherwise.    (3)

The amplitude of the late starburst is fixed by the parameter f_burst, defined as the ratio between the stellar mass formed during this event and the total stellar mass. The SFH is applied with the initial mass function (IMF) from Chabrier (2003) on the stellar population model from Bruzual & Charlot (2003), used through the bc03 module.

Nebular emission The light emitted in the Lyman continuum by the heaviest stars ionizes the gas in the galaxy. This physical process generates significant radiative emission in the continuum and in spectral lines. This SED component is described by the nebular module, based on Inoue (2011). The model is effectively parameterized by the metallicity Z (the same as in the stellar population model bc03) and the ionization parameter log(U).
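To make the adopted SFH explicit, the following minimal sketch evaluates a delayed main episode plus a late decaying-exponential burst whose mass fraction is f_burst. The functional form of the delayed component is assumed here to follow cigale's sfhdelayed module, and the normalization and function names are illustrative only, not HyperGal code.

```python
import numpy as np

def sfh_delayed_plus_burst(t_myr, tau_main, t_burst, tau_burst, f_burst):
    """t_myr: time axis since the onset of star formation (Myr).
       Returns an (unnormalized) SFR(t) = delayed main episode + late burst."""
    sfr_main = t_myr * np.exp(-t_myr / tau_main)                      # delayed SFR
    sfr_burst = np.where(t_myr > t_burst,
                         np.exp(-(t_myr - t_burst) / tau_burst), 0.0)  # late burst
    # Scale the burst so that it forms a fraction f_burst of the total stellar mass
    m_main = np.trapz(sfr_main, t_myr)
    m_burst = np.trapz(sfr_burst, t_myr)
    k = (f_burst / (1.0 - f_burst)) * m_main / max(m_burst, 1e-30)
    return sfr_main + k * sfr_burst
```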
Dust extinction Dust in the galaxy absorbs the radiation at short wavelengths, especially from the UV to the near-IR; this energy is then re-emitted in the mid- to far-IR. As HyperGal is primarily targeting sources at redshift z < 0.1 in the optical domain, the extinction effect is properly considered through the dust attenuation module dustatt_modified_CF00 from Charlot & Fall (2000). This approach considers two star populations: the young ones (<10⁷ yr) still reside in their birth cloud (BC), and the old ones are considered as already dispersed in the interstellar medium (ISM). Attenuation is therefore treated differently: for the young population, both ISM and BC are considered, while for the old population, only the ISM is considered. In both cases, the attenuation A_λ is modeled by a power law, normalized by the V-band attenuation:

A_λ = A_V (λ/λ_V)^n,

with λ_V = 0.5 µm. The young-to-old star V-band attenuation ratio is parameterized through a dedicated free parameter, allowing for more flexibility and a better estimate of the Hα emission lines (Battisti et al. 2016; Buat et al. 2018; Malek et al. 2018; Chevallard et al. 2019). The power-law slope for the ISM is fixed at n_ISM = −0.7 following Charlot & Fall (2000), and the slope for the BC at n_BC = −1.3 as advocated in da Cunha et al. (2008). For completeness, the dale2014 module was used for the dust emission (Dale et al. 2014); however, this complex component has no significant impact in our spectral domain.

From SED fit to hyperspectral galaxy model We ran cigale using the PS1 filter transmission curves from Tonry et al. (2012, see Fig. 6) on photometric pixels for which the signal-to-noise ratio (S/N) is above 3 in all 5 bands. Otherwise, the output flux is set to 0 at all wavelengths: such pixels presumably belong to the sky or diffuse backgrounds and cannot be properly modeled by the SED fitter. For all fitted pixels, cigale returns a spectrum over an extended wavelength domain (from far-UV to radio), with an inhomogeneous spectral sampling between 1 and 5 Å px⁻¹. All spectra are rebinned to the SEDM spectral sampling of ∼26 Å px⁻¹ and truncated to the [3700, 9300] Å range, resulting in 220 monochromatic slices. The broadband flux from the SED fit is compared to the input photometric measurements in Fig. 4, where we show, for each PS1 band and pixel, the pull (i.e., the model residual normalized by the error on the data) and the relative rms averaged over the five bands,

rms = ⟨ ((f_λ − f̂_λ)/f_λ)² ⟩^(1/2),

where f_λ denotes the data and f̂_λ the predicted value. The averaged rms is generally lower than 3% in the core of the galaxy, but can reach ∼10% in the outer parts. However, as the PS1 observations are 2-3 magnitudes deeper than the SEDM ones (Chambers et al. 2016), relatively poorly fitted pixels far away from the host core have a marginal flux impact proportionate to the SEDM background and, thus, they do not significantly affect the transient spectrum in the scene model.

SEDM impulse response functions In the next step, the "intrinsic" hyperspectral galaxy model obtained from the SED fit has to be projected into the SEDM observation space, including the spectro-spatial IRFs. This section first presents the spectral component, namely the line spread function (LSF), then the spatial component, known as the point spread function (PSF).
Spectral IRF (LSF) The output spectra from cigale have a spectral resolution of ∼3 Å in the wavelength range 3200-9500 Å (i.e., a median resolving power of R = λ/∆λ ∼ 2000, Bruzual & Charlot 2003), which is 20 times the near-constant SEDM resolution (R ∼ 100, Blagorodnova et al. 2018). The full SEDM LSF is therefore a very good approximation of the differential spectral IRF between cigale and the SEDM. To characterize the SEDM LSF, we used the intermediate line fits of the wavelength solution derived from arc-lamp observations, Cd, Hg, and Xe (Rigault et al. 2019, Sect. 2.1.2). Each emission line was fitted with a single Gaussian profile over a third-order polynomial continuum. A study of the wavelength calibration for 65 nights between 2018 and 2022 showed the LSF standard deviation σ_LSF to be stationary (no evidence of evolution with time) and fairly homogeneous in the FoV, but chromatic (as expected). Figure 5 shows the chromatic evolution of the standard deviation and the quadratic polynomial model adjusted to it. To adapt the cigale spectra to the SEDM resolution, the spectra of the hyperspectral galaxy model were convolved with the chromatic Gaussian LSF. An illustration of the result is shown in Fig. 6.

Spatial IRF (PSF) SNe are effectively point sources and can thus be described in the FoV solely by the SEDM PSF (and its amplitude). HyperGal uses a bisymmetric PSF model, in which the radial profile is the sum of a Gaussian N(r; σ) for the core and a Moffat M(r; α, β) for the wings (Buton et al. 2013; Rubin et al. 2022):

PSF(r) = η N(r; σ) + M(r; α, β),

where r is an elliptical radius,

r² = Δx² + A Δy² + B Δx Δy,  with Δx = x − x0, Δy = y − y0,

and (x0, y0) the coordinates of the point source. Parameters A and B simultaneously describe the flattening and the orientation of the PSF. The four shape parameters (α, β, σ, η), which could be ill-constrained in the low-S/N regime if adjusted independently, are correlated by fixed relationships. The PSF was tested on 148 isolated standard stars, observed in 2021 with the SEDM, and we settled on the following model. The constrained PSF only has two free parameters, α (Moffat radius) and η (relative normalization of the Gaussian), while the two other parameters are expressed as linear functions of α:

β = β0 + β1 α,  σ = σ0 + σ1 α,

where β0 = 1.53, β1 = 0.22, σ0 = 0.42, and σ1 = 0.39 were determined from the training star sample. The chromaticity of α(λ) is set as a power-law function:

α(λ) = α_ref (λ/λ_ref)^ρ,

where the normalization α_ref and the index ρ are free parameters, and λ_ref ≡ 6000 Å. Parameters η, A, and B do not exhibit strong chromaticity and are therefore considered constant. Finally, the SEDM PSF of a given observation is fully described by five independent parameters: α_ref and ρ, η, A, and B. As the exact PSF profile is less critical for extended objects such as the host galaxy, we chose to model the differential PSF between PS1 and SEDM as a single bisymmetric Gaussian kernel, with free ellipticity and position angle. The hyperspectral model is thus convolved with this differential PSF before the spatial projection.
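The constrained Gaussian plus Moffat profile and its chromatic Moffat radius can be sketched as follows. The normalization convention here is arbitrary, and only the linear (β, σ) versus α relations and the power-law α(λ) quoted above are taken from the text; everything else is an illustrative assumption.

```python
import numpy as np

def sedm_psf_profile(r, alpha, eta,
                     beta0=1.53, beta1=0.22, sigma0=0.42, sigma1=0.39):
    """Radial PSF profile: Gaussian core + Moffat wings, with beta and sigma tied to alpha."""
    beta = beta0 + beta1 * alpha
    sigma = sigma0 + sigma1 * alpha
    gauss = np.exp(-0.5 * (r / sigma) ** 2)
    moffat = (1.0 + (r / alpha) ** 2) ** (-beta)
    return eta * gauss + moffat

def alpha_of_lambda(wavelength, alpha_ref, rho, lambda_ref=6000.0):
    """Chromatic Moffat radius: power law in wavelength around lambda_ref = 6000 A."""
    return alpha_ref * (wavelength / lambda_ref) ** rho
```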
Scene modeling At this point, we have the two main elements on hand to build the scene model: 1) a hyperspectral host galaxy model, as well as the (differential) spectral and spatial IRF to match it to the SEDM observations; 2) a chromatic PSF model for the transient point source.The last component needed to complete the scene is the night sky and diffused light background, modeled with a 2D 2nd-order polynomial at each wavelength.The nonuniform A43, page 5 of 14 We go on to describe the progressive method used to adjust it to the observed SEDM cube as well as the detailed spatial projection procedure used to match the two cubes. General method We first considered N ≪ 220 "meta" slices of the SEDM cubes, that is, slices summed over a restricted wavelength domain that are small enough to be considered roughly achromatic, but large enough to increase the S/N and significantly speed up the computation time.The scene is projected and fitted on all metaslices independently (the so-called "2D fit"; Sect.2.4.3), which results in a set of N × m parameters; some are nuisance parameters (e.g., background and component amplitudes), other key scene parameters, such as the point source position, and PSF shape parameters. From this set of parameters evaluated at N wavelengths, specific chromatic models were used to fix all shape and position quantities (the "1D fit"), for which the full spectral resolution is not required.Ultimately, HyperGal performs a final linear "3D" fit of the different component amplitudes over all monochromatic slices, providing the total scene model cube at original SEDM spectral sampling. The pipeline uses by default N = 6 metaslices linearly sampled between 5000 and 8500 Å.This spectral range is where the SEDM efficiency is higher than 70% (Blagorodnova et al. 2018) and is extended enough to well constrain the chromatic parameters, especially the ADR (see Sect. 2.4.4).The pipeline was tested with different number of metaslices, but no significant difference was noticed in the results. HyperGal was extensively optimized with the parallel computing library DASK4 (Dask Development Team 2016), a dynamic task scheduler working as well on single desktop machines as on many-node clusters.DASK optimizes the pipeline by analyzing the (minimal) interdependencies between all computation tasks and building an optimal parallelized workflow to be submitted and run on an arbitrary number of available workers (in our case, we used ten nodes on the IN2P3 Computing Center5 ). Spatial projection The spatial projection of the hyperspectral galaxy model (matched to the SEDM spectral and spatial IRFs) was done by successively projecting each (meta)slice, taking into account the relative geometry and size between PS1-derived model (square, 0. ′′ 50 aside) and SEDM (hexagonal, 0. ′′ 558) spaxels.The projection is based on a spatial anchor, that is, a reference position in the sky supposedly known in both (meta)slices.The chosen anchor is the transient position, derived from the ZTF survey astrometry and located at the center of the queried PS1 images (and therefore at the center of the hyperspectral model).In the SEDM cube, this position is initially guessed from the astrometric solution of the SEDM Rainbow Camera (Blagorodnova et al. 2018;Rigault et al. 
2019), but it cannot be strictly fixed: the (chromatic) SEDM anchor position (x_0, y_0) is left free in the fitting process of each metaslice. The projection is done by geometrically overlapping the two polygonal spaxel grids, with the anchor position as a reference; this is effectively equivalent to a nearest-neighbor interpolation scheme. These computations were done using shapely (Gillies et al. 2007) and geopandas (Jordahl 2014). At this point, the model cube, to which the PS1/SEDM differential PSF and the SEDM LSF have been applied, is projected into the SEDM observation space, over the SEDM spaxel grid.

Metaslice (2D) fit

As already mentioned, all components of the scene are first independently fitted on the N metaslices. The free parameters per metaslice are: (1) the SN position (x_0, y_0) in the SEDM FoV, used as an anchor position for the spatial projection; (2) the SN PSF parameters (α, η, A, B); (3) the PS1/SEDM differential PSF parameters (σ_G, A_G, B_G); (4) the amplitudes of the SN (I) and host (G) components; (5) the background polynomial coefficients. These parameters are adjusted by minimizing χ^2 = Σ_i ((y_i − ỹ_i)/σ_i)^2, where i runs over the spaxels of the metaslice, y and ỹ are the data and model fluxes, respectively, and σ is the error on the data. Figure 7 illustrates the projection of one metaslice of the hyperspectral galaxy model onto the SEDM space. The fitted scene on this metaslice shows a spatial rms between the model and the data of 2.6%. Although indicative of the overall scene-model accuracy, a low rms does not necessarily imply a clean separation of the different components (e.g., when the transient lies on top of a sharp host galaxy core). Extraction accuracy is directly evaluated from simulated SN spectra in Sect. 3.

Chromatic (1D) fit

Once the fit is performed independently over all N metaslices, a set of N chromatic estimates of the m parameters is at hand to assess their (smooth) chromatic evolution (except for the component amplitudes and background parameters, which are nuisance parameters at this point).

The chromaticity of the full Gaussian + Moffat PSF is modeled as described in Sect. 2.3.2. The chromaticity of the width of the 2D Gaussian that models the differential PSF between PS1 and the SEDM is adjusted by a similar power law, σ_G(λ) = σ_ref (λ/λ_ref)^ρ_G, where ρ_G and σ_ref are adjusted on the N metaslice estimates obtained previously, and λ_ref ≡ 6000 Å; the shape parameters A_G and B_G are held constant, equal to their (inverse-variance weighted) mean values over the N metaslices.

The effective anchor location in the SEDM FoV is systematically wavelength-dependent, owing to the chromatic refraction of light through the atmosphere (ADR). Given the N positions of the SN in the different metaslices, an effective four-parameter ADR model can be fitted to track these chromatic offsets in the FoV. Figure 8 illustrates the ADR effect, namely a drift of the metaslice anchor position with wavelength, and the ADR model at an effective airmass of ∼2.0.
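Going back to the spatial projection described at the beginning of this section, the toy sketch below shows an area-weighted overlap between square model pixels and hexagonal spaxels computed with shapely; the grid construction, the hexagon size convention, and the brute-force double loop are illustrative simplifications, not the pipeline's actual implementation.

```python
import numpy as np
from shapely.geometry import Polygon

# Toy sketch of the geometric (meta)slice projection: the flux of each square
# model pixel (0.50" aside, as for the PS1-derived model) is redistributed onto
# overlapping hexagonal spaxels in proportion to the intersection area.

def square(x, y, size=0.50):
    h = size / 2.0
    return Polygon([(x - h, y - h), (x + h, y - h), (x + h, y + h), (x - h, y + h)])

def hexagon(x, y, size=0.558):
    ang = np.deg2rad(np.arange(0, 360, 60))
    return Polygon(np.c_[x + size * np.cos(ang), y + size * np.sin(ang)])

def project_slice(model_pixels, spaxel_centers):
    """model_pixels: iterable of (x, y, flux); returns the flux per spaxel."""
    spaxels = [hexagon(x, y) for x, y in spaxel_centers]
    flux = np.zeros(len(spaxels))
    for x, y, f in model_pixels:
        pix = square(x, y)
        for i, spx in enumerate(spaxels):
            overlap = pix.intersection(spx).area
            if overlap > 0:
                flux[i] += f * overlap / pix.area  # area-weighted redistribution
    return flux

print(project_slice([(0.0, 0.0, 1.0), (0.5, 0.0, 2.0)], [(0.0, 0.0), (1.0, 0.0)]))
```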
Final (3D) fit

Once all PSF and ADR chromatic models are available from the 2D + 1D metaslice adjustments, the scene morphological parameters are considered known and fixed at each wavelength: the only parameters left free are the amplitudes of the SN (I) and host (G) components and the background coefficients, which are fitted linearly on every monochromatic slice. Although G(λ) is primarily used to recover the flux calibration mismatch between PS1 and SEDM, this normalization parameter can interfere in a nontrivial way with the position and intensity of the emission lines in the hyperspectral galaxy model. This effect might help in handling a slightly incorrect input redshift used in the SED fitting step, especially under the assumption of a uniform spatial distribution of the line. As this has not been analysed in depth, we elaborate on this notion in Sect. 4.

Figure 9 presents the white image (spectral integral) of the final HyperGal scene model for SN ZTF20aamifit. The quality of the fit is evaluated from the pull map, showing no evidence of structured residuals. The spectral relative rms map indicates an accuracy of ∼4% at the SN and host core locations, and 6-7% where only the background is significant.

Component extraction

The strength of the HyperGal pipeline is the simultaneous fit of the three scene components: the host galaxy, the transient point source, and the background. The main quantity of interest is of course the SN spectrum (i.e., the vector of the point source amplitudes I(λ), see Fig. 10), but we can also selectively subtract individual components to assess the quality of the scene model.

Integrated spectrum of the host galaxy

The host contribution can be isolated in the SEDM cube by subtracting the SN and the background components (see Fig. 11). To further compute an integrated host spectrum, a large elliptical aperture is defined around the host with the SEP package (Barbary 2016; Bertin & Arnouts 1996) from the PS1 images. This aperture is then projected into the SEDM cube, using the respective World Coordinate Systems. We note that the ADR is neglected in the process, as it rarely induces a deviation of more than one or two spaxels in the FoV and has barely any impact on the host spectrum integrated over a large aperture.

The integrated host spectrum is shown in Fig. 11, with the expected positions of some major emission lines at the input redshift (marked independently of the host spectrum). This highlights the consistency between the input redshift used for the hyperspectral galaxy modeling and the extracted integrated spectrum. In the future, it could be considered as a way to consistently estimate the host redshift directly from such an integrated spectrum during the scene modeling (see Sect. 4).

Point source radial profile

Similarly, the point source contribution can be isolated in the SEDM cube by subtracting both the host and background models, as shown in Fig. 12 for the [6167, 6755] Å metaslice of the ZTF20aamifit cube. This closer look at the point source contribution allows us to check the accuracy of the PSF profile in each metaslice. The fact that the profile smoothly tends toward 0 means that the background was correctly modeled by HyperGal; also, the absence of outliers in the data points indicates that there is no evidence of residual host contamination in the profile, as noticed in the isolated SN image.
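As a sketch of the host-aperture definition mentioned above, the snippet below uses the SEP package to detect a synthetic host on a background-subtracted image and to integrate its flux in an elliptical aperture derived from the detection's second-order moments; the detection threshold and the aperture scaling (r = 5) are illustrative choices, not the pipeline's actual settings.

```python
import numpy as np
import sep

# Illustrative use of SEP: detect the host on a (synthetic) PS1-like image and
# integrate the flux in a large elliptical aperture scaled from its moments.

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (200, 200)).astype(np.float32)
yy, xx = np.mgrid[:200, :200]
image += (50.0 * np.exp(-0.5 * (((xx - 100) / 8.0) ** 2 + ((yy - 90) / 5.0) ** 2))).astype(np.float32)

bkg = sep.Background(image)
data = image - bkg.back()
objects = sep.extract(data, thresh=3.0, err=bkg.globalrms)

host = objects[np.argmax(objects["flux"])]  # brightest detection taken as the host
flux, fluxerr, flag = sep.sum_ellipse(
    data, [host["x"]], [host["y"]],
    [host["a"]], [host["b"]], [host["theta"]],
    r=5.0, err=bkg.globalrms)  # ellipse scaled by a factor of 5
print(flux[0], fluxerr[0])
```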
SN classification As HyperGal is primarily designed for the transient spectral classification, an automated typing procedure is included in the pipeline, based on Supernova Identification (SNID Blondin & Tonry 2007).The process of typing is performed over the 4000 to 8000 Å spectral range, which includes the most discriminating A43, page 8 of 14 spectral features for redshifts z ≲ 0.1.This domain also corresponds to the one where the SEDM CCD quantum efficiency is over 60%. The quality of the SNID classification is quantified by the rlap parameter, measuring the strength of the correlation between the input and template spectra.According to Blondin & Tonry (2007), an rlap ≥ 5 indicates a high confidence in the classification, without considering any prior on the redshift or the phase of the SN. Figure 13 presents the SNID typing of ZTF20aamifit using its HyperGal-extracted spectrum.The best match has an rlap = 27, which leaves no doubt about its classification as an SN Ia.In comparison, the pysedm-extracted spectrum (see Fig. 10) is also typed as an SN Ia but with a significantly lower confidence (rlap = 9). HyperGal validation The HyperGal pipeline was validated with a set of simulations, to quantify the accuracy of the extracted SN spectra as a function of various observational conditions and the ability to spectrally classify the transient.In this section, we first present the simulation process, before performing some statistical analysis on the spectral accuracy, followed by the typing efficiency.For comparison, the SNe are also extracted with a method similar to pysedm (Rigault et al. 2019), namely, a plain PSF extraction of a supposedly isolated source (not accounting for the background galaxy), but using the same PSF and diffuse background models as HyperGal for consistency. Simulated sample During a short shutdown of the main ZTF camera, SEDM was free to observe a few galaxies which hosted SNe at least 1 yr earlier.These observed host cubes are therefore naturally in the SEDM space for which HyperGal is designed; ten different hosts with various morphologies were acquired at different locations in the IFU and with an airmass ranging from 1.01 to 2.04.This allows us to cover a large variety of observation conditions, ranging from the ideal case to the poorest condition.An artificial point source, whose spectrum and type is known a priori, was then added to these cubes. To mimic the SEDM spectra as closely as possible, we used the spectra of well-isolated transients observed with SEDM that had been successfully classified by SNID with a very high rlap.For the SNe Ia (the most numerous to be observed), 70 spectra were selected with rlap > 25 for the best model and rlap > 15 for the first 30 models.Similarly, seven SNe II spectra with rlap > 12 were selected.For the more rarely observed SNe Ic and SNe Ib (∼5% of observations), only one spectrum of each was chosen, but with a high classification confidence (rlap ∼ 22 for the Ib and rlap ∼ 13 for the Ic).To increase the S/N, each of these spectra was then slightly smoothed using a Savitzky-Golay filter (third-order polynomial over a window of five pixels) to keep the spectral structures intact. While building the simulated sample, the different SN types were distributed to follow the observed fractions (Fremling et al. 2020), with 80% of SNe Ia, 15% of SNe II, 2.5% of SNe Ib, and 2.5% of SNe Ic.For further analysis, Ib and Ic will be studied jointly as SNe Ibc. 
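The light smoothing applied to the template spectra can be sketched as follows with SciPy's Savitzky-Golay filter, using the window of five pixels and third-order polynomial quoted above; the input spectrum here is synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter

# Light Savitzky-Golay smoothing (window of 5 pixels, 3rd-order polynomial):
# raises the S/N while keeping the spectral structures intact.

lbda = np.arange(3700.0, 9300.0, 26.0)  # SEDM-like spectral sampling
rng = np.random.default_rng(1)
flux = 1.0 + 0.3 * np.sin(lbda / 300.0) + rng.normal(0.0, 0.05, lbda.size)

smoothed = savgol_filter(flux, window_length=5, polyorder=3)
print(float(np.std(flux - smoothed)))  # scatter removed by the filter
```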
A marginalization on the phase of the SNe Ia was applied, based on the DR1 statistics from the ZTF SN Ia group (Dhawan et al. 2022).Knowing the phase of the 70 SN Ia input spectra used for the simulation, we composed the SN templates to follow the observed distribution of phases, modeled as a Gaussian distribution centered on −3 days with a standard deviation of 4 days.Concerning the PSF, the profile is assumed to follow the model presented in Sect.2.3.2.To faithfully represent the seeing diversity of the observations, the chromatic radial profile parameters were drawn from the joint distribution built from ∼2000 standard stars, thus taking into account the latent correlations between parameters.Finally, two extra parameters -which we consider the most likely to impact the HyperGal robustness -were introduced in the simulations: the contrast, c, between the transient and the local background and the distance, d, between the target and the host. The latter step aims to cover all observed cases, from the exact overlapping between the point source and the host (d ≈ 0) to the limit of an unstructured background (d ≫ host core size).The host center is identified by matching the WCS solution from the SEDM cube and the underlying photometric images from PS1.The distance, d, is drawn from a uniform distribution between 0 and 5. ′′ 6 ≡ 10 spx.As the SEDM mostly observes well-centered point sources, the simulated SN is placed within 12 spx from the center of the FoV, or at least toward the MLA center if the host is on the edge. The contrast, c, is defined by c = S /(S + B) ∈ [0, 1], where S is the transient signal and B is the total (sky and host) background, both spectrally integrated over the equivalent r band of ZTF.For a random c drawn from a uniform distribution in [0, 1], the background signal B is first estimated at the simulated SN location, by successively integrating spatially the pure host cube weighted by the chromatic PSF profile, then spectrally over the ZTF r-band.Once B is known, the SN spectrum is scaled so that the r-band integral S = cB/(1 − c).Finally, the simulated SN contribution to the cube variance is added to the one from the host galaxy, under the hypothesis of pure photon noise, using the flux solution of the host cube. Ultimately, the 5000 simulated cubes were built, covering a large range of observation conditions, host galaxy morphologies and positions in the FoV, transient locations and spectral types, and S/N values.The HyperGal pipeline and the standard point source extraction were then used to estimate the resulting SN spectra. Extraction accuracy The SEDM is designed for and used in the spectral classification of transient.Thus, beyond pure absolute spectro-photometric flux accuracy, what is important is the capacity of HyperGal to extract the spectral features allowing for a proper classification, independently of the absolute flux level or even the large-scale continuum shape.Consequently, the HyperGal performances are evaluated on continuum-normalized transient spectra in the [4000, 8000] Å wavelength range, as in SNID. The continuum was fit as a fifth-order polynomial over the wavelength range slightly extended by 100 Å at each extreme, to avoid some unwanted boundary effects.The spectral comparison between simulation input and HyperGal/standard method output spectra was then systematically performed on continuumnormalized spectra and quantified using a wavelength-averaged relative rms similar to Eq. 
(5): rms = [ (1/N) Σ_λ ((f_λ − f̂_λ)/f_λ)^2 ]^(1/2), where N refers to the number of monochromatic slices between [4000, 8000] Å, f_λ denotes the data, and f̂_λ the predicted value. The distance, d, is found to have no influence on the spectral accuracy of HyperGal, with an absolute correlation coefficient lower than 0.2. On the other hand, Fig. 14 shows the correlation between the spectral relative rms and the contrast, c, for both extraction methods on continuum-normalized spectra. The results are marginalized over all SN types, as the extraction accuracy is expected to be independent of the spectral shape.

Both methods obtain an rms greater than 20% for c < 0.2, suggesting that spectral classification at such low contrast will be difficult. Yet the standard method seems to be more accurate than HyperGal at extremely low contrast (c < 0.1); this actually appears to be an artifact of the continuum normalization. At very low contrast, neither method can reasonably disentangle the SN from the background; however, by effectively mixing the SN and host signals, the standard point-source-extracted spectrum has a higher S/N (albeit less accurate) and its continuum normalization is less prone to fail catastrophically, in contrast to the spectrum consistent with 0 extracted by HyperGal.

HyperGal starts to stand out for 0.2 < c < 0.3, with a median rms around 10%, and the rms decreases steadily below 10% at c > 0.3, 5% for c > 0.5, and 1% for c > 0.8. Compared to the standard extraction method, HyperGal shows a median improvement of ∼50% for 0.2 < c < 0.6, and gradually returns to a median improvement of ∼20% up to the highest contrasts. Since the continuum normalization removes the effects of absolute scaling and color terms on the spectral rms, the improvement exclusively relates to the contamination of the SN spectrum by host galaxy spectral features. This demonstrates the effectiveness of HyperGal in drastically reducing this host contamination.

Fig. 14 (spectral rms distribution in contrast bins, continuum divided). Distribution, as a function of the contrast, of the spectral relative rms between simulation input spectra and extracted spectra, averaged over the [4000, 8000] Å domain. In the boxes, the three levels represent the three quartiles (25%, median, and 75%). Each bin includes the same number of simulations, as the contrast c is uniformly distributed in [0, 1].

Distribution of contrast in the observations

Before turning to the classification efficiency, the contrast distribution in the SEDM observations is estimated as a reference for comparison with our results. Rather than running HyperGal on SEDM observations (as was actually done for the ZTF Cosmology SN Ia Data Release 2 by Rigault et al., in prep.), which would amount to evaluating the pipeline with itself, the contrast c = S/(S + B) was estimated from photometric images of the same DR2 sample, made up of about 3000 SNe Ia.

For each SN, its signal S in the PS1 r band at the date of the SEDM observation was estimated from the SALT2 fit (Guy et al. 2005, 2007; Betoule et al.
2014) of its light curve.We chose the PS1 r band which, in practice, is very similar to the ZTF one and because only these images from the survey were available at the time of the study.On the other hand, the host contribution to the background, B gal , was estimated from the integrated flux within a radius of 2 ′′ around the SN.As the PS1 images are already sky-subtracted, an additional sky background, B sky , had to be added for a fair comparison with simulations.Two different values were used: a fiducial value of m sky = 20 mag, approximately corresponding to the magnitude depth of the SEDM, and a more conservative value, m sky = 21 mag.Given the sky background is largely negligible in front of a galactic one, its exact value essentially alters the high contrast values: for a SN isolated from its host galaxy, the contrast would systematically increases as the sky background tends toward 0. Figure 15 sky levels, less than 1% of observations have a contrast c < 0.1, and only 7% with c < 0.2.At the high-contrast end, 2-5% of the observations have a c > 0.9 depending on the adopted sky magnitude.Almost 95% of observations have a contrast 0.1 ≤ c ≤ 0.9, and a slightly less than 90% with 0.2 ≤ c ≤ 0.9.According to the results of Sect.3.2, we can therefore assess the spectral accuracy of HyperGal on the DR2 sample (using the spectral relative rms (Eq.( 14)) as an indicator) to be on the order of 10%, 5%, and 2% for 80%, 60%, and 20% of the observations, respectively.In comparison, the standard extraction method reaches these levels for 60%, 45%, and 15% of the observations. Typing efficiency As mentioned earlier, the most important validation result in the context of the SEDM is the efficiency of HyperGal to spectrally classify the target SN.The test on the simulated cubes was performed using the same classifier as in ZTF, namely, SNID; the confidence criteria given for the classification are however slightly stricter, as we regularly identified false positives (i.e., SN erroneously classified as Ia) in the current pysedm pipeline.The minimum rlap is set to rlap min = 6 (rather than 5) for the best-fit model; furthermore, at least 50% of the top-10 models have to be of the same type as the best one to confirm a classification.If one of these criteria is not met, the spectrum is classified as "uncertain". Figure 16 shows the typing efficiency from HyperGal and the improvement with respect to the standard extraction method without host modeling.Contrary to the previous rms analysis, results are presented for each SN type, since the spectral signatures are different in all SN types. As anticipated in Sect.3.2, both methods are definitely not reliable for contrasts below 0.1.SNe Ia are more easily classified, due to the quantity and strength of features in their spectra: the typing success is 71% for SNe Ia for 0.1 ≤ c ≤ 0.2 (∼7% of real observations); types Ibc and II, on the other hand, are correctly classified with a success rate of 23% and 35%, respectively. 
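The stricter acceptance rule described above can be sketched as a small helper before going through the per-type results; the (type, rlap) match list below is a stand-in for SNID output and is purely illustrative.

```python
from collections import Counter

# Sketch of the acceptance rule used here: confirm a classification only if the
# best SNID match has rlap >= 6 AND at least 50% of the top-10 matches share its
# type; otherwise flag the spectrum as "uncertain".

def classify(matches, rlap_min=6.0, top_n=10, min_agreement=0.5):
    """matches: list of (sn_type, rlap) pairs, sorted by decreasing rlap."""
    if not matches or matches[0][1] < rlap_min:
        return "uncertain"
    best_type = matches[0][0]
    top_types = [sn_type for sn_type, _ in matches[:top_n]]
    if Counter(top_types)[best_type] / len(top_types) < min_agreement:
        return "uncertain"
    return best_type

matches = [("Ia", 12.3), ("Ia", 10.1), ("Ia", 9.8), ("Ic", 8.0), ("Ia", 7.7),
           ("Ia", 7.2), ("II", 6.9), ("Ia", 6.5), ("Ia", 6.1), ("Ia", 5.9)]
print(classify(matches))  # -> "Ia" (8 of the top 10 matches agree with the best one)
```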
For 0.2 < c < 0.3, the typing success reaches more than 96% for SNe Ia, 77% for Ibc and 51% for SNe II.More than 99% of SNe Ia are correctly classified with c > 0.3, and more than 95% of all SNe for c > 0.4.With ∼84% of observations having a contrast c > 0.3, ∼9% with 0.2 < c < 0.3, and ∼7% with 0.1 < c < 0.2, we can conclude that HyperGal is able to successfully classify nearly 95% of all SNe Ia observed by SEDM.For a contrast of c ≳ 0.2 (which represents more than 90% of the real observations), nearly 99% of SNe Ia are properly classified.The improvement brought by HyperGal over the standard extraction method is obvious, with a sweet spot in 0.1 < c < 0.6: this will results in more than 30% of additional SNe correctly classified. The main spectral feature of SNe II being the Hα emission line, usually highly contaminated by the host galaxy, HyperGal allows a significant improvement for this particular type, from 15% to 37% of additional correctly classified SNe II in the 0.1 < c < 0.6 range; for SNe Ibc, the difference only appears from c > 0.2, with similar gains between 13% and 31%.SNe Ia exhibits a lot of strong and easily identified spectral features, the boost from the standard method is slightly less obvious, but remains, in fact, highly significant, from 30% of additional correctly classified SNe Ia for 0.1 < c < 0.2 to 5% when 0.5 < c < 0.6.For c > 0.6, when the SN ostensibly stands out of the galaxy, the difference between the two methods becomes marginal whatever the SN type. Taking into account the contrast distribution of the observations, HyperGal should significantly improve the classification of SNe Ia in nearly 50% of the observations (the other half being also properly classified by the standard extraction method).As 50% of the observations have 0.1 < c < 0.6, HyperGal will allow the correct classification of almost 20% more SNe Ia in this interval, corresponding to 10% of all SNe Ia classifiable with the SEDM.Assuming a similar contrast distribution for all SN types, HyperGal is expected to classify 14% additional SNe II and 11% SNe Ibc. To probe the critical contamination of the SN Ia sample by core-collapse SNe, the false-positive rate (FPR) for SN Ia is examined.Figure 17 shows that HyperGal has a significantly lower FPR than for the standard method.Excluding the unrealistically low contrast cases (c < 0.1), HyperGal shows a progressive decrease in FPR from 8% to 1% for contrast rising from 0.1-0.6 (FPR is null beyond that); in comparison, the standard method oscillates between 6 and 9% in same contrast range.As a conclusion, the HyperGal FPR is on average less than 5% for contrasts between 0.1 and 0.6 (∼50% of the observations), and less than 2% for c > 0.1 (more than 99% of all observations); this is half the result of the standard extraction method. Discussion Here, we discuss some limitations of the current HyperGal implementation and possible future developments.Regarding the validation methodology, we acknowledge some simplifications with respect to actual observations.For instance, the true distance distribution between the SN and its host was not explicitly modeled, for instance, this parameter was marginalized uniformly between 0 and 5. ′′ 6.As a full-scene modeler which properly handles this parameter and therefore shows little sensitivity to it (Sect.3.2), this approximation does not impact the HyperGal results; this is not true for the single point-source method which critically depends on the transient-host distance. 
Overall, we think the validation approximations actually tend to minimize the improvement of HyperGal with respect to the standard method.Undoubtedly, the most limiting constraint from HyperGal is the need for an external redshift measurement of the host galaxy, a priori needed by the SED fitter used as a physically motivated host galaxy spectral interpolator and of critical importance for the treatment of emission lines.In practice, this is not so much of an issue: in the current ZTF sample, about 50% of SN hosts already have a spectral redshift, mostly from SDSS surveys (Fremling et al. 2020), with a precision of σ z ∼ 10 −5 for z < 0.1 (Bolton et al. 2012); the remaining 50% of SNe have a redshift deduced from a preliminary extraction of the SN spectrum, either from low-resolution spectral features in the SN spectrum (∼40%) or emission lines of the host galaxy having contaminated the SN spectrum (∼10%).In both cases, the redshift is estimated by SNID with a precision of σ z ∼ 5 × 10 −3 (Fremling et al. 2020).Furthermore, 95% of ZTF SN hosts are brighter than 20 mag, paving the way for other surveys such as the Dark Energy Spectroscopic Instrument (DESI) Bright Galaxy Survey (DESI Collaboration 2016) to systematically provide a large fraction of spectral redshifts in the future. A slightly incorrect input redshift (encoded as a wavelength offset of the emission line position in the hyperspectral galaxy model), as well as an approximate SED fit of the emission line fluxes (marginally constrained by broadband photometric observations) is corrected to first order by the monochromatic galaxy amplitudes G(λ) during the ultimate 3D fit.Primarily introduced to recover flux calibration mismatch between PS1 and SEDM, this normalization parameter actually interferes in a nontrivial way with the position and intensity of emission lines in the brightest parts of the scene to minimize residuals between fixed (at this stage of the procedure) hyperspectral model and SEDM observations.This particular effect, which depends on the relative distribution of stellar and gaseous components in the host, has not been studied extensively for HyperGal, but we note it is efficient to disentangle host spectral features from SN spectrum even with sub-optimal input redshift or emission line fluxes.However, it effectively precludes the use of the residual host component for any a posteriori measurements, for instance, redshift or local measurement of Hα flux, yet crucial for local environment studies mentioned earlier (e.g., Rigault et al. 2020). It is possible to think of including a consistent redshift estimate directly in the HyperGal procedure, at the level of the hyperspectral model (to minimize artificial fluctuations of G(λ)), but also at the level of the SN spectral typing (to reach a redshift consensus between the host and the SN).This would imply to include the intensive SED fit or the SN typing procedure in the minimization loop, which is computationally costly in either case.Another major HyperGal development would be to use the SEDM cube, a rich and faithful observation of the host galaxy at the position of the transient, as additional hyperspectral constraints in the SED fitting process.Both developments would push the concept of an SED fitter merely used as a spectral interpolator to its limit.It would then probably be preferable to switch to other more efficient methods, such as physics-enabled deep learning (Boone 2021). 
Conclusion This paper presents HyperGal, a fully automated scene modeler for the transient typing with the SEDM (Blagorodnova et al. 2018).The core of this pipeline is based on the use of archival photometric observations of the host galaxy, taken before the SN explosion.Knowing the physical processes in place within galaxies, as encoded in the SED fitter cigale, the spectral properties of the host are modeled, adjusted, and scaled appropriately to create a hyperspectral model of the host galaxy.This 3D intrinsic model is then convolved with the spectro-spatial instrumental responses of the SEDM, and projected in the space of the observations.A full scene model, including the structured host galaxy, the point source transient and a smooth background, is finally produced to match the SEDM observations, allowing for the extraction of the SN spectrum from a highly contaminated environment. The pipeline is validated on a large set of realistic simulated SEDM observations, covering a wide variety of observation conditions (airmass, seeing, and PSF parameters), scene details (host morphology, distance to the host, host/SN contrast), and transient types.The contrast distribution is estimated from about 3000 observed SNe Ia of the upcoming ZTF Cosmology SN Ia DR2 paper (Rigault et al., in prep.).The transient spectra in the 5000 simulations are then extracted with HyperGal and compared to the historical point-source method, which ignores the structured host component. The most important results concern HyperGal efficiency in spectroscopically typing SNe, a key objective of the SEDM instrument.The full scene modeler shows an ability to correctly classify ∼95% of the observed SNe Ia under a realistic contrast distribution.For a contrast c ≳ 0.2 (more than 90% of the observations), nearly 99% of the SNe Ia are correctly classified.Compared to the standard extraction method, HyperGal correctly classifies nearly 20% more SNe Ia between 0.1 < c < 0.6, representing ∼50% of the observation conditions.A43, page 13 of 14 A&A 668, A43 (2022) The false positive rate for HyperGal is less than 5% for contrasts between 0.1 and 0.6, and less than 2% for c > 0.1 (>99% of the observations); this is half as much as the standard extraction method.HyperGal has demonstrated its ability to extract and classify the spectrum of an SN even in the presence of strong contamination from its host galaxy.The improvement compared to the standard method is significant: this will noticeably improve the statistics of the SNe Ia sample for the ZTF survey, while reducing a potential environmental bias, ultimately impacting the precision of cosmological analyses. A43, page 1 of 14 Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0),which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.This article is published in open access under the Subscribe-to-Open model.Subscribe to A&A to support open access publication. Fig. 1 . Fig. 1.Main processing steps of the HyperGal pipeline and sections where they are detailed. Fig. 2 . Fig. 2. SEDM cube from the observation of ZTF20aamifit.Left panel shows the spectra, whose color corresponds to the selected spaxels in the right panel (white image of the spectrally integrated cube).Red cross shows the SN position. Fig. 3 . Fig. 3. 
RGB image of the host galaxy of SN ZTF20aamifit, constructed from the PS1 grz cutouts.The red cross shows the position of the SN detected by ZTF.The x-and y-axes are in native PS1 pixels, 0. ′′ 25 aside.White dashed box is used as the boundaries in Figs. 4 and 6. Fig. 4 . Fig. 4. Map of the pull for the grizy broadband images from cigale outputs, and spectral relative rms over the five reference host images, shown from left to right and top to bottom.Only pixels with S /N > 3 for all grizy bands are considered (see Sect. 2.2). Fig. 5 . Fig. 5. LSF standard deviation, σ LSF , as a function of wavelength, from the wavelength calibration of 65 nights between 2018 and 2022.Each violin corresponds to an emission line in the arc-lamp spectra (color legend). Fig.6.Hyperspectral galaxy model of ZTF20aamifit host galaxy, after projection in the SEDM observation space (including LSF).Green circles correspond to the spatially integrated flux from PS1 cutouts, while black diamonds refer to the same quantities as fit by cigale.The five shaded curves show the transmission of the grizy PS1 filters.Red and blue spectra on the left correspond to the spectra integrated in selected regions of same color in the model cube on the right.The black spectrum is the spectrum integrated over the full FoV. 8 Fig. 8. SN positions as a function of wavelength, and the effective ADR fit.Top panel: relative offsets with respect to reference position at reference wavelength along each axis; filled points correspond to the observed offsets, and open circles to the predictions of the ADR model.Bottom panel: relative offsets in the (x, y) plane.Color codes refer to the central wavelength of the metaslices. 9 Fig. 9. Full scene model for ZTF20aamifit.Top panel: integrated SEDM and HyperGal-modeled cubes; the red cross indicates the adjusted point source position at 6000 Å. Bottom panel: spectral pull and spectral relative rms.No galaxy-or SN-related structured residual is visible in the pull map and the spectral rms indicates an accuracy of ∼4% at the host and SN locations. Fig. 11 . Fig. 11.ZTF20aamifit host galaxy, isolated from the SEDM data cube.Left panel: isolated host galaxy component in the SEDM cube, after subtraction of both the SN and background models.Right panel: host spectrum integrated over the selected spaxels; the main spectral features are marked for the input redshift z = 0.045. Fig. 12 . Fig. 12. SN ZTF20aamifit, isolated from the SEDM data cube.Left panel: isolated SN component in the SEDM [6167, 6755] Å metaslice, after subtraction of both the host and background models; the red cross indicates the fitted SN location, and contours show the elliptical isoradius at 3 and 5 spx for observations (black solid lines) and model (red dashed lines).Right panel: PSF profile for the same metaslice, as a function of the elliptical radius.The data points refer to the isolated SN on the left panel, the red curve corresponds to the PSF profile (without the background), the blue and the green curves to the Moffat and the Gaussian components, respectively.The Gaussian component is particularly weak because of the poor seeing conditions. Fig. 13 . Fig. 
13.SNID typing of the ZTF20aamifit HyperGal spectrum.Left panel: input spectrum (in grey) and best model from SNID (in blue).Right panel: distribution in the (redshift, phase) plane of the 30 best matches with an rlap > 5 (all being normal SNe Ia in this case).The input redshift of the galaxy (z = 0.045) is indicated with the horizontal grey line.The best model, with a very high rlap = 27, classifies ZTF20aamifit as an SN Ia at redshift z = 0.046 and phase p = +5.6 days. Fig. 15 . Fig. 15.Cumulative contrast distribution estimated from ∼3000 SN Ia observed with the SEDM.Since only B gal is estimated from PS1 images, an additional B sky is estimated using two different sky levels, m sky = 20 (blue) for a realistic value and m sky = 21 (red) for a conservative value. Table 1 . Modules and input parameters used with cigale. Typing efficiency on the validation simulations.Top panel: rate of successful classification with HyperGal for each type of SN at different contrast levels.Results for c > 0.6 are aggregated as the results vary very little.Bottom panel: improvement in typing compared to the standard extraction method.
12,847.8
2022-09-22T00:00:00.000
[ "Physics" ]
NEARCTIC SPECIES OF THE NEW WOLF SPIDER GENUS GLADICOSA ( ARANEAE : LYCOSIDAE ) * BY This is the second paper in a projected series of systematic studies of the Nearctic Lycosidae described primarily in the genus Lycosa. Over 50 species of medium to large size wolf spiders from the Nearctic Region have been placed in this genus. However, recent studies indicate that several distinct genera are included under Lycosa. Matters have been complicated at the generic level by C. F. Roewer (1954) who listed 44 new genera of Lycosinae in the Katalog der Araneae. They are nomina nuda, lacking descriptions. Later Roewer (1959, 1960) defined these 44 genera, thus validating the names, and added seven more new ones to the Lycosinae as well. These genera were established primarily on the basis of differences in the number of posterior cheliceral teeth and eye arrangement (particularly eyes of the anterior row). Investigations of North American Lycosidae (Brady 1962, 1972, 1979) indicate that the number of posterior cheliceral teeth is an unreliable character in delimiting genera. Recent studies indicate that color patterns on the dorsal surface of the carapace, length of legs relative to body size, and particularly the structure of the male and female genitalia are most reliable in determining generic relationships. Certain features of the eye arrangement, as well as information about habitat, behavior, and life history are also useful. In the final analysis, it is the unique combination of all these features that should be employed to distinguish genera. Figures 1-5 and described below.Description.Total length 7.8 to 18.8 mm.Carapace length 4.2 to 8.3 mm; width 3.1 to 6.4 mm.Carapace viewed dorsally, narrow- ing at level of PLE row, smoothly convex along lateral margins, with posterior margin concave; viewed laterally essentially the same height from eye region to posterior declivity (highest point is poste- rior cephalic region in front of dorsal groove with the carapace sloping very slightly anteriorly).Dorsal groove long and distinct.Dorsal color pattern with light uneven submarginal stripes and wide median light colored stripe, narrow between ALE, widening until just anterior to dorsal groove (where it is usually constricted), becoming wider again parallel to groove, and then narrowing as it 1986] Brady--Nearctic Gladicosa 287 follows thoracic declivity to posterior edge of carapace.Black mark- ings framing median stripe at posterior declivity.Dark areas of carapace brown to dark brown and black.Light stripes pale yellow to yellow-orange (Figs.1-5). Anterior median eyes (AME) slightly larger than anterior lateral eyes (ALE).Anterior eye row much narrower than posterior median eye row (PME), with dorsal tangent slightly procurved.Posterior lateral eye row (PLE) much the widest (see Tables 1-6). Chelicerae dark reddish brown to black; anterior and posterior margin each with three teeth, the anterior triad crowded more closely together. 
Male palpus with stridulatory file situated retrolaterally at tip of tibia.Cymbium with cluster of macrosetae at tip, and with stridula- tory scraper retrolaterally at base.Male palpal sclerites as seen in ventral view" Palea (pa) concave, largely hidden by embolus, visible along retrolateral margin.Embolus (em) blade-like, tapering to a point, with clockwise orientation (from left to right) in left palpus, which is opposite to that of most Lycosinae.Conductor (co) con- cave, with cuplike portion containing tips of the terminal apophysis (ta) and the embolus.Terminal apophysis large, flattened and paral- leling embolus, with its tip serving partly as a conductor.Median apophysis (ma) with a flattened ridge extending retrolaterally and Psyche [Vol. 93 coming to a point near margin of cymbium (cy); heavily sclerotized spur directed medially (Figs. 30,33,34). METHODS The techniques and methods employed in the study of Gladicosa were essentially the same as for Trochosa (Brady 1979) and are described there.Color descriptions are based upon appearance of specimens in alcohol illuminated by microscope lamp.Measurements are listed in millimeters, but for Gladicosa the mean and standard error (SEM) are listed instead of the mean and range as in the previous paper.Methods and techniques of measurement are described in the paper on Trochosa (Brady 1979).Under Records specific localities are given for uncommon species and the peripheral range for common species, otherwise localities of specimens exam- ined are indicated by counties. ACKNOWLEDGMENTS This study was made possible by the loan of large numbers of specimens from the Museum of Comparative Zoology, Cambridge, Massachusetts, the American Museum of Natural History, New York City, and the Canadian National Collection, Ottawa, Canada.I wish to thank sincerely the curators of those collections, Dr. H. W. Levi, Dr. N. J. Platnick, and Dr. C. D. Dondale respectively for the use of these materials.The loan of type specimens from the Museum of Comparative Zoology, the American Museum and the Phila- delphia Academy of Natural History is gratefully acknowledged. Thanks are offered to Mr. Donald Azum for loan of the latter. I am indebted to the following individuals and institutions for making available regional collections that provided a much better picture of geographical distribution and clarified the relationships Transverse piece (tp) of scape of epigynum irregular in shape (Figs.15-17) or, if rectangular, much wider than long (Figs.18-26) 3 Transverse piece entirely pearlescent in appearance.Longitudi- nal piece (lp) lacking indentations where it joins transverse piece (Figs.6-9) gulosa Transverse piece only partly pearlescent white.Longitudinal piece (lp) with indentations at posterior end where it joins transverse piece (Figs.10-14) pulchra Transverse piece irregular in shape and broadly joined by longi- tudinal piece (Figs.15-17) euepigynata Transverse piece somewhat rectangular, much wider than long and narrowly joined by longitudinal piece 4 Width of transverse piece greater than length of longitudinal piece.Longitudinal piece about the same width throughout its length (Figs.18-20) huberti Width of transverse piece equal to or less than length of trans- verse piece. 
diagnostic for this species.The locality given is North America, and that doesn't help.To complicate matters, Emerton (1885) misidenti- fled this species as Tarentula kochi Keyserling and transferred it to the genus Lycosa.Gertsch and Wallace (1935) discussed the syste- matic and nomenclatural problems associated with G. gulosa and suggested using the name Lycosa kochi Emerton for this species since Emerton (1885) had placed the species in a different genus.However, according to Article 49 of the International Code of Zoo- logical Nomenclature (1985): "A previously established species- group name wrongly used to denote a species-group taxon because of misidentification cannot be used for that taxon even if it and the taxon to which the name correctly applies are in, or are later assigned to, different genera, except when a previous misidentifca- tion is deliberately used in fixing the type species of a new nominal genus."Bonnet (1955) points out that the name nigraurata or pure- celli of Montgomery should have been used for the species.Mont- gomery (1904) himself synonymized nigraurata with purcelli and the name purcelli has been used only by Montgomery (1902Montgomery ( , 1904)). The name gulosa, on the other hand, has been employed numerous times since Gertsch and Wallace's (1935) invocation of kochi, and even by Gertsch (1949) in his book American Spiders.It therefore seems best to retain the name gulosa for this species to promote stability of nomenclature by preserving a long accepted name in its accustomed meaning. Color.Females.Face yellow or yellow-orange, to pale golden brown.Eye region darker with nacelles black.Chelicerae yellowish brown to dark reddish brown, almost black at distal ends.Condyles yellow or orange, to golden brown. Legs yellow or pale yellow-orange to yellowish brown, darker distally.Femora with dusky bands on dorsal and lateral surfaces. Ventral surface lighter yellow. Labium and endites brownish orange to brown with distal ends yellow to cream.Sternum yellow to light golden brown. Color.Males.Face yellow to yellow-orange, darker brownish in eye region.Chelicerae with basal areas yellow to orange-yellow, darker brown to reddish brown distally.Condyles orange-yellow to orange.Cymbia of palpi dark brown. Carapace brown with a broad median yellow stripe and irregular yellowish submarginal stripes obscured by thicker clothing of white hair. Dorsum of abdomen beige to light brown with black markings along sides beginning anteriorly and continuing posteriorly.Black markings often more prominent than in female.Posterior of dorsum without distinct chevrons as in other species.Venter of abdomen pale yellow to beige, clothed with white hair which is more abun- dant laterally. Legs yellow to brownish yellow.Darker dorsally without dusky markings on femora as in female. Labium and endites orange-yellow to orange-brown with distal ends lighter yellow to beige.Sternum orange to orange-brown. Measurements.Ten females and ten males from Allegan Co., Michigan.See Table 1. Diagnosis.Gladicosa gulosa is closest to G. pulchra in size and coloration.The markings of pulchra offer greater contrast, and chevrons are usually visible on the dorsum of the female abdomen (compare Fig. 5 with Fig. 
4).The epigyna of the females and the palpi of the males also resemble one another in appearance, but are distinctly different when compared in detail.The epigynum of gulosa has the transverse piece entirely pearlescent white, whereas pulchra has some white, but nearly always shows darker brown sclerotized areas on the transverse piece (compare Figs. 6, 8, 9 with Figs. 10, 11, 13, 14).In gulosa the embolus is pointed at the end, whereas that ofpulchra is somewhat spatulate in shape (compare Figs. 35, 36 with Figs.37, 38).Kaston (1948) reports gulosa running over dead leaves on forest floors in Connecticut.I have found it in leaf litter of deciduous woods in Michigan.Here it is found in more open Oak woodlands as opposed to the shaded floor of Beech- Maple forests.In Michigan and New England gulosa usually matures late in the fall, overwinters as an adult, and mates in early spring.Kaston (1936) made the following observations of courtship behavior in the species: Immediately upon coming in contact with the female, or within 3 minutes thereof, the male begins to drum his palps rapidly against the floor of the cage.These drumming move- ments are made so rapidly that a distinct purring or humming sound can be heard.The palps are used alternately and are raised only a very short distance during the process.The body is held at an angle so that the posterior end of the abdomen almost touches the floor.As a consequence when the male begins to twitch his abdomen in a vertical plane the tip strikes the floor.However, I could not detect any sounds made by this part of the body.It is highly probable that the vibrations set up in the substratum by the tapping movements of the palps and abdomen are perceived by the female.This may exert an excit- ing influence on her in a manner analogous to that which occurs in web-building species, where the male tweaks the threads of the female's snare. The male now moves slowly toward the female without courting.When near her he reaches over to touch her.At first she may jump at him and chase him away.Later, if she is receptive she allows him to stroke her legs or abdomen.After this contact with the female the male resumes his courtship movements.Later on, if the male gets more excited he begins to raise his forelegs off the floor about or 2 mm, and lower them quickly.During this process the legs quiver violently. After 13 minutes of this courting one male began to mount the female, but before he could get into the final copulatory position, she ran away from him.Another male had courted only seven minutes when the female allowed him to mount.The position is the usual one for Lycosids, the male using his palps Psyche [Vol. 
93 alternately during the 10 minutes the act lasted.This duration time may not be the usual one for the species, however, for one pair were observed in the field, when collected, which were already in copula and remained so for about another half hour.The sound produced during courtship was also reported by Allard (1936).Observations were made on a collecting trip in the Bull Run Mountains of Virginia during late April.He described the sound as a distinct purring produced by drumming rapidly upon dry leaf surfaces.He reports: The creatures were very wary, but with care I was able to examine their movements critically from a distance of only a few inches.When the spider moved and made its sounds, the fore part of the body quivered perceptibly and the palpi, too, executed gentle up and down movements.The quivering movements brought the chelicerae directly in contact with the dry leaf surface, and the latter alone appeared to be responsible for the rather loud sounds I had heard.According to Allard these tapping sounds could be heard a distance of 10 feet or more.Rovner (1975) investigated sound production in three species of Schizocosa and six species of Lycosa, including gulosa.Previous investigators, as with gulosa above, had regarded such sounds as being solely percussive, generally produced by a tapping or scraping of the palps or the chelicerae against the substratum.High-speed Figs.6-9.Gladicosa gulosa (Walckenaer) 6-7.Female from 4 mi.S of New Richmond, Allegan Co., Michigan, 16 Sept. 1974. 6. Epigynum.7. Internal geni- talia.8. Epigynum of female from Pepperell, Middlesex Co., Massachusetts, Apr. 1973. 9. Epigynum of female from Cove Creek Valley, 15 mi.S of Prairie Grove, Washington Co., Arkansas.Figs.10-14.Gladicosa pulchra (Keyserling). 10.Epigynum of female from Stone Co., Mississippi, 21 Dec. 1964. 11.Epigynum of syntype from North Ameri- ca.12-13.Female from Gainesville, Alachua Co., Florida, 14 June 1935. 12. Internal genitalia.13.Epigynum.14. Epigynum of holotype of Lycosa inso- pita Montgomery [--Gladicosa pulchra (Keyserling)] from Austin, Travis Co., Texas.Figs.15-17.Gladicosaeuepigynata (Montgomery).15-16.Female from Camp Verde, Kerr Co., Texas, Dec. 1939.15 Rovner (1975) revealed the prescence of a stridulatory organ at the tibio-tarsal joint.This apparatus consists of a file on the distal end of the tibia and a scraper at the base of the palpal cymbium.Further examination revealed a group of stout spines or macrosetae at the tip of the palpal tarsus.These spines apparently aid in coupling the tarsus to the substratum.Thus, the sound produced by gulosa (C)and other lycosids is not generated simply by drumming, but involves a rapid oscillation at the tibio-tarsal joint facilitated by macrosetae that anchor the palpus to the substratum.Kaston (1948) reports seeing mature females of gulosa from Sep- tember, through winter, to June suggesting that some may live for two years.Egg sacs appear in early April and are produced until late May.Egg sacs vary from 6-10 mm in diameter and egg counts range Figures 4, 10-14, 37-42.Map 2. Color.The range of color in G. pulchra is greater than that of G. gulosa.I have noted light forms and dark forms of pulchra. These do not represent a genetic polymorphism but are the extremes in a color continuum.There is no discernible correlation between geographic locality and color pattern among the specimens exam- ined.The darker forms are much more numerous than the light colored ones.The range of color is indicated in the following descriptions. 
Color.Female.Face orange-brown to dark reddish brown.Chelicerae dark reddish brown to black with condyles lighter orange-brown. Carapace dark brown to a dark reddish brown with a broad median yellow stripe suffused with white hair.Irregular lighter submarginal yellow stripes similarly clothed with white hair.Pattern as in Figure 4. Dorsum of abdomen brown to brown mottled with black. Anterio-lateral areas black, blending with similar black areas on cephalothorax.Five pairs of white spots (in well-marked specimens) beginning in cardiac area and continuing posteriad.White spots connected by dark brown chevrons as in Figure 4. Cardiac area darker brown, outlined by lighter brown or yellowish. Venter of abdomen dark brown to almost black posterior to epi- gastric furrow.Yellowish anterior to furrow. Legs light brown with darker black annulations on femora to dark reddish brown without distinct annulations. Labium and endites light brown to black with pale yellowish distal ends.Sternum yellow brown (golden), dark reddish brown to black. Color.Male.Face yellow-orange to orange-brown.Dark in ocular area.Chelicerae brownish orange to dark reddish brown.Cymbia of palpi yellow-orange to dark reddish brown. Carapace orange-brown to dark orange-brown with broad yellow to pale orange median stripe overlaid with white hair.Irregular submarginal stripes of same color, sometimes indistinct.Dorsum of abdomen with median area light to medium brown, bordered by black.Five pairs of white spots beginning in cardiac area and continuing posteriad.Spots joined by black chevrons. Cardiac area brown, enclosed by lighter pale brown to yellowbrown.Pattern similar to female.Venter of abdomen brown to black posterior to epigastric furrow.Light brown to pale yellow or cream anterior to furrow.Labium and endites yellow-orange to orange with distal ends cream.Sternum yellow-orange to orange. Measurements.Ten females and ten males from Florida.See Brady--Nearctic Gladicosa 303 than gulosa (compare 2) and is usually darker in color with a more distinct pattern (compare Fig. 4 with Fig. 3).In most specimens ofpulchra the venter of the abdomen is dark brown to black behind the epigastric furrow, while that of gulosa is yellowish to light brown.Differences between female and male genitalia of these two species are noted under gulosa and in the keys.Natural History.Little is known of the habitat or behavior of pulchra.I've collected this species in Florida from the trunks of deciduous trees where their color blends well with the bark substrate.G. B. Edwards (personal communication) has collected spec- imens from similar microhabitats in Florida.Pat Miller (personal communication) reported collecting both male and female pulchra from the trunks of pine trees at night in Perry, Florida, on December 5, 1982.Montgomery (1904) reported finding pulchra near Austin, Texas, in drier habitats than gulosa and less abundantly.He noted that the females live under stones where they make a shallow horizontal burrow lined with silk.Whether this behavior is consistent throughout the life cycle or represents a temporary adjustment to molting or egg laying is a question to be answered.Gladicosa pulchra is not the abundant inhabitant of deciduous leaf litter, as are gulosa and huberti.Of the species investigated pulchra is the most variable in coloration of the body and structure of the epigynum.It is possible that more than one species is represented in this complex. 
Roble (1986) reported rearing Mantispa viridis from a Gladicosa pulchra egg sac.It is the first record of a lycosid spider serving as a host of M. viridis.When the spider died, its egg sac was opened and a mantispid cocoon and 95 surviving spiderlings were found.This corroborates an earlier observation of high spiderling survival within a mantispid-infested egg sac of Lycosa rabida. Distribution.From Long Island, New York, along the East Coast to Texas in the southwest.Limited in its northern range inland to the southern parts of Kansas and Missouri and northern Kentucky.More abundant in the southeastern United States (Map 2). Color.Females.Face orange-brown to reddish brown with eye nacelles black.Chelicerae dark reddish brown (mahogany) to black.Condyles orange-brown. Carapace orange-brown to reddish brown with broad median pale orange stripe from PME to posterior edge.Lighter irregular submarginal stripes less distinct than median.Pattern as in Figure 1. Dorsum of abdomen brown to dark brown with cardiac area outlined in black.Chevrons faintly indicated along posterior half with white spots marking their lateral edges.Anterior lateral edges of dorsum darker as in Figure 1.Venter pale yellow-orange to darker brown.Lateral areas darker in pale-colored individuals, con- colorous brown in others. Legs yellow-orange to orange-brown, without darker annulations. Labium and endites orange-brown to dark reddish brown, with distal ends yellowish to cream.Sternum yellow-orange to light orange-brown. Color.Males.Face dark orange-brown to very dark reddish brown, eye region black.Chelicerae dark reddish brown to black.Condyles lighter.Cymbia of palpi dark red-brown. Carapace orange-brown to darker reddish brown with light orange broad median stripe from eye region to posterior edge.Light- er, irregular submarginal stripes, not so distinct as median one.Dorsum of abdomen medium to dark brown with cardiac area lighter, outlined by black line which is enclosed in turn with lighter color extending laterally.Anterior lateral areas marked by black color, which extends more posteriad than in female.Venter of BradymNearctic Gladicosa 307 abdomen orange-brown to dark brown.Central area somewhat lighter. Legs yellow-orange to orange-brown, somewhat lighter ventrally, without darker bands. Labium and endites yellow-orange to dark reddish brown, with distal ends pale yellow to cream.Sternum yellow to reddish orange-brown. Measurements.Ten females and ten males from Georgia and Florida. Natural History.Nothing concerning the natural history of this species is reported in the literature.I have collected it in leaf litter near the edge of woods in Georgia and in a marshy area near the edge of a pond beneath a pine tree canopy in Florida.The great majority of the adult specimens were collected from February through April (see Records below). Discussion.Gladicosa bellamyi was placed in the new genus Avicosa by Roewer (1954) with Avicosa avida (Walckenaer) [--Schizocosa] as the type species.Two other North American species now placed in Schizocosa (minnesotensis and wasatchensis mccooki) as well as Lycosa ceratiola and Tarentula pictilis (now Alopecosa pictilis) were also included in this new genus.Avicosa is certainly an artificial conglomeration without systematic foundation.Color.Females.Face orange-brown to dark reddish brown. Chelicerae dark reddish brown to black.Condyles lighter yellowish. 
Carapace dark brown to dark reddish brown with broad median yellow-orange to pale brownish orange stripe from PME to posterior declivity as in Figure 2. Indistinct submarginal stripes of same color. Dorsum of abdomen pale yellow-brown to medium brown, often with darker brown cardiac mark and darker chevrons posteriorly as in Figure 2. Slight indication of black counter-shading anterio-laterally. Venter of abdomen dark brown posterior to epigastric furrow; median area sometimes mottled with light orange-brown. Lighter yellowish anterior to furrow. Legs brown to dark brown dorsally. Pale yellowish brown to golden brown ventrally. Legs without distinct bands. Labium and endites dark reddish brown to orange-brown with distal ends lighter golden to yellow.
Color. Males. Face dark red-brown. Eye region black. Chelicerae dark brown to black with inner distal margins lighter orange-brown. Condyles lighter orange to yellow. Cymbia of palpi brown to dark brown. Carapace dark reddish brown overlaid with fine black hair. Broad median pale yellow-orange to orange-brown stripe from PME to posterior edge. Dorsum of abdomen beige to light brown. Black countershading in anterio-lateral areas, extending posteriorly farther than in female. Indistinct chevrons posteriorly. In some specimens the median longitudinal area of the dorsum is pale yellow to cream with darker brown at edges and along sides. Venter of abdomen dark brown to black posterior to epigastric furrow, lighter yellowish brown anteriorly. Lateral areas often somewhat lighter in color. Legs orange-brown to dark brown dorsally, paler golden to yellowish brown ventrally. Without darker bands. Tibia and metatarsus black, tarsus yellow. Labium and endites orange-brown to dark brown with distal ends lighter yellow to golden. Sternum light orange-brown to darker reddish brown.
of certain populations: Dr. Richard Brown and Ms. Pat Miller of the Entomological Museum, Mississippi State University; Mr. Tim helped with general sorting, compilation of locality data, and preparation of distribution maps. National Science Foundation grant DEB-7803561 assisted in defraying expenses of the investigation. A summer grant from the faculty development program at Hope College (1980) helped to initiate this project.
Table 1. Measurements of ten females and ten males of Gladicosa gulosa from Allegan Co., Michigan.
Table 2. Measurements of ten females and ten males of Gladicosa pulchra from Florida.
Table 3. Measurements of ten females and ten males of Gladicosa huberti from
Table 4. Measurements of ten females and ten males of Gladicosa bellamyi from
Table 5. Measurements of ten females of Gladicosa bellamyi from Mississippi.
Table 6. Measurements of ten females and ten males of Gladicosa euepigynata from Texas.
Application of Two-Photon-Absorption Pulsed Laser for Single-Event-Effects Sensitivity Mapping Technology
Single-event effects (SEEs) in integrated circuits and devices can be studied with an ultrafast pulsed laser through the two-photon absorption process. This paper presents practical methods for characterizing the key factors of a laser-based SEE mapping test system: the output power of the laser source, the spot size focused by the objective lens, the opening window of the Pockels cell, and the calibration of the injected laser energy. With these methods the laser-based SEE mapping system can be operated in a stable and controllable state. Furthermore, a sensitivity map of a static random access memory (SRAM) cell fabricated in a 65 nm technology node was created with the established laser system. The sensitivity map of the SRAM cell was compared with a map generated by a commercial simulation tool (TFIT), and the two matched well. The experiments also provided the energy distribution profile along the Z axis (the direction of pulsed-laser injection) and the threshold energy for different SRAM structures.
Introduction
Single-event effects (SEEs) are electrical disturbances in integrated circuits (ICs) or analogue circuits that occur when ionizing particles pass through or near sensitive nodes (small areas inside an IC that are sensitive to external electronic stimulation) [1,2]. When an ionizing particle passes through the semiconductor material of an IC, a number of electron-hole pairs (EHPs) are generated, and these are the cause of the circuit disturbance. The traditional SEE testing method uses heavy ions (HI), protons, or neutrons: an accelerator provides particles of a given energy, and the devices are exposed to the ion beam to reproduce the SEE phenomena that occur in space [3,4]. Although HI testing is a direct way to study SEEs, it does not provide enough temporal and spatial information to understand the mechanisms responsible for them [5]. Pulsed laser systems have therefore been introduced and are now widely used to emulate SEEs by generating EHPs through photon absorption [4-6]. For SRAM and flip-flop cells, laser systems can be used to locate the sensitive area [7-10]. Because the focal position can be changed, a 3-D SEE profile can also be obtained [11,12]. Thanks to their fast scanning speed and repeatability, laser systems can further be used to locate sensitive areas across an entire IC or analogue circuit [13,14]. As a complement to particle accelerators, the injected laser energy can be correlated with the linear energy transfer (LET) value used in accelerator testing, which establishes a solid relationship between the two methods [15,16].
From basic physics, to generate an EHP in a semiconductor, an electron in the valence band must receive energy no less than the bandgap (E_g) of the material so that it can be excited into the conduction band. Based on Planck's equation (E = hν) and the speed-of-light relation (c = λν), setting the energy of a single photon equal to the bandgap of silicon (1.12 eV) gives a threshold wavelength of about 1108 nm. When the wavelength of the injected laser light is shorter than this threshold, a single photon can generate an EHP.
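As a quick check of the arithmetic in the preceding paragraph, the short sketch below (not from the paper) recomputes the single-photon threshold wavelength from the silicon bandgap; standard physical constants are assumed.

```python
# Sketch: SPA threshold wavelength from E = h*nu and c = lambda*nu.
H_PLANCK = 6.62607015e-34   # J*s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # J per eV

def photon_wavelength_nm(energy_ev: float) -> float:
    """Wavelength (nm) of a photon carrying `energy_ev` electron-volts."""
    return H_PLANCK * C_LIGHT / (energy_ev * EV) * 1e9

si_bandgap_ev = 1.12                          # silicon bandgap used in the text
spa_threshold = photon_wavelength_nm(si_bandgap_ev)
print(f"SPA threshold: {spa_threshold:.0f} nm")            # ~1107 nm (text rounds to 1108 nm)
print(f"TPA window: {spa_threshold:.0f}-{2*spa_threshold:.0f} nm")  # 1250 nm falls inside
```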
This kind of absorption is called single-photon absorption (SPA). If the wavelength of the injected laser is longer than the threshold value, an EHP cannot be generated by a single photon. However, if the energy of a single photon is restricted to less than E_g but greater than E_g/2, then, in the domain of non-linear optics, two photons can be absorbed simultaneously to generate one EHP in the material. This process is called two-photon absorption (TPA). According to non-linear optical theory, the occurrence of TPA depends strongly on the input laser irradiance and can therefore only take place in the focused zone [17-19]. The following equation is most commonly used to describe EHP generation in a semiconductor material [20], where N is the density of generated free carriers, I is the pulse irradiance, hν is the photon energy, z is the distance from the focus point along the beam axis (Z axis), r is the radial distance from the beam axis, h is the Planck constant, ν is the frequency, and α and β_2 are the linear (SPA) absorption coefficient and the non-linear (TPA) absorption coefficient, respectively.
The first term on the right-hand side of Equation (1) describes linear absorption (single-photon absorption). The irradiance is attenuated exponentially with penetration depth, which is inconvenient when the beam needs to penetrate deep into the device [5,21]. In modern ICs there are many metal layers above the transistors, so most of a laser beam incident from the front would be reflected by these layers. A solution is to apply the laser from the back (bottom) side of the chip, so that the beam reaches the transistor layer before the metal layers. The price of this solution is that the laser has to travel through the substrate of the chip, which is normally much thicker unless it has been thinned. Consequently, the TPA process, expressed by the second term on the right-hand side of Equation (1), is required for this purpose. TPA requires an ultra-short pulse laser with a pulse duration on the femtosecond scale; Equation (1) can then be rewritten with only the TPA term, and solving this equation gives the expression for N. Assuming a Gaussian beam, the final expression is obtained using three further relations [22], where w is half the beam diameter, w_0 is the radius of the focused spot, z is the distance from the focus, z_0 is the Rayleigh range, and I_0 and P_0 are the maximum beam irradiance and power. The resulting expression for N(r, z) is Equation (7). In real testing we mainly consider the free charge generated around the focus point, which means z is close to 0 and w ≈ w_0; hence Equation (7) can be rewritten as Equation (8). In Equation (8), β_2 is about 1.75 cm/GW at a wavelength of 1250 nm [17], hν is very close to 1 eV at 1250 nm, w_0 can be calculated from the numerical aperture (NA), r can be measured in testing, and P_0 is a pulse-duration-dependent, measurable power value. Equation (8) therefore allows the number of free charges generated through the TPA process to be calculated theoretically.
Obtaining SEE sensitivity maps with a laser system is an interesting research area, and several studies have addressed SEE mapping for commercial SRAM chips [7-9].
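The numbered equations referenced in this section did not survive extraction. The forms below are the ones commonly used in the pulsed-laser SEE literature and are consistent with the variable definitions given in the text; treat them as an assumed reconstruction, not a verbatim restoration of Equations (1)-(8).

```latex
% Carrier generation with linear (SPA) and two-photon (TPA) absorption:
\frac{\partial N(r,z,t)}{\partial t}
  = \frac{\alpha\, I(r,z,t)}{h\nu}
  + \frac{\beta_2\, I^{2}(r,z,t)}{2 h\nu}

% For a femtosecond pulse at a TPA-only wavelength the linear term is dropped:
\frac{\partial N}{\partial t} \approx \frac{\beta_2\, I^{2}}{2 h\nu}

% Gaussian-beam relations used to express I(r,z):
I(r,z) = I_0 \left(\frac{w_0}{w(z)}\right)^{2} \exp\!\left(-\frac{2r^{2}}{w(z)^{2}}\right),
\qquad
w(z) = w_0 \sqrt{1+\left(\frac{z}{z_0}\right)^{2}},
\qquad
I_0 = \frac{2 P_0}{\pi w_0^{2}}
```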
However, the commercial SRAM cells in those studies are mostly several micrometers in size, and their structures are not hardened against radiation [23]. It is therefore worthwhile to perform mapping tests on SRAM cells at a smaller scale and with different structures. This study involves two main pieces of work. First, a laser system was built for TPA SEE mapping tests; based on the derived Equation (8), appropriate methods and tools were used to measure or monitor the key parameters that determine the generated free-charge density N. Second, once the laser system was prepared, the threshold energy for SRAM cells with different structures was determined; the energy distribution profile along the Z axis was then obtained, and finally a sensitivity map was produced for an SEE-hardened SRAM cell. In addition, a comparison between the laser mapping results and a TFIT simulation is presented.
Ultrafast Laser Testing System
The laser beam used for TPA SEE testing in our laboratory has a wavelength of 1250 nm and a pulse duration of about 200 fs. Because it is difficult to obtain the desired pulsed laser directly, our system uses two stages. Stage one contains three devices: a seed laser (a Verdi-pumped ultrafast mode-locked Vitesse laser with a Ti:sapphire gain medium, a fixed 80 MHz repetition rate, and an 800 nm wavelength); a pump laser (a continuous-wave Nd:vanadate Verdi laser with a 532 nm wavelength and up to 18 W of power); and an amplifier (RegA 9000) that combines the seed and pump lasers, with a repetition rate tunable from 10 kHz to 300 kHz and a fixed 800 nm wavelength. The pulsed output of the RegA 9000 has a power of about 50 mW, a repetition rate of 10 kHz, a wavelength of 800 nm, and a pulse duration of less than 160 fs. As mentioned earlier, the wavelength for TPA SEE testing must be longer than 1108 nm, so a second stage is added to the system. This stage consists of a single device, an Optical Parametric Amplifier 9800 (OPA 9800), which converts the wavelength into the range from 1200 nm to 1600 nm with a pulse duration of less than 225 fs. All four devices are manufactured by Coherent, Inc. (Santa Clara, CA, USA). A diagram of the laser system is shown in Figure 1.
After the pulsed laser leaves the OPA 9800, the SEE testing system provides the following capabilities: (1) monitoring the laser waveform in real time; (2) controlling the laser power precisely and repeatably; (3) measuring the laser power in real time; (4) imaging the device under test (DUT); (5) injecting the pulsed laser into a chosen region of the DUT; and (6) controlling the number of laser pulses injected into the DUT. Figure 2 illustrates the optical setup. As shown in the schematic, a 2:1 beam reducer (which could be replaced by a beam expander) controls the beam size in the far field. A fast photodiode (D1) monitors the waveform of the pulsed laser. The attenuator controls the power transmitted through it. The Pockels cell temporally gates the number of laser pulses injected into the DUT. A power meter (P1) measures the laser power entering the microscope. The scanner module steers the laser beam to a chosen region of the DUT.
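A minimal sketch of the per-pulse energy implied by the quoted RegA 9000 output, using the energy = average power / repetition rate relation stated in the laser-energy section below. The OPA conversion efficiency is not given in the paper, so only the 800 nm stage is computed here.

```python
# Sketch: per-pulse energy of the 800 nm amplifier output.
rega_avg_power_w = 50e-3      # ~50 mW out of the RegA 9000 (from the text)
repetition_hz = 10e3          # 10 kHz repetition rate
pulse_energy_j = rega_avg_power_w / repetition_hz
print(f"RegA 9000 output: {pulse_energy_j*1e6:.1f} uJ per 800 nm pulse")   # 5.0 uJ
# The OPA 9800 then converts these pulses to 1200-1600 nm (1250 nm here); the
# conversion is lossy, so the energy actually delivered to the DUT is lower and
# is what the P1/P2 power-meter calibration described below is used to track.
```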
The imaging laser is used to obtain an image of the DUT.
Verification of Pulse Duration
From Equation (8), the TPA process depends strongly on the time-dependent power of the pulse, so it is critical to verify the pulse duration. The instrument used for this measurement is an NT&C Micro$cor autocorrelator from Germany. The measured pulse durations are listed in Table 1. With a pulse duration of less than 200 fs, the TPA process can occur in the focused zone of the laser beam.
Verification of Spot Size
The spot size is a key parameter for laser SEE testing. Because the laser beam has a Gaussian profile [22], different spot sizes give different irradiances and therefore affect the parameters measured in SEE testing [24,25]. Theoretically, for a Gaussian beam the minimum waist w_0 is related to the divergence angle θ by θ = 2λ/(πw_0). If D is the diameter of the objective lens and f is the focal length, then θ ≈ D/f, and NA = D/2f, from which the expression for w_0 follows. In our system λ = 1.25 µm and NA = 0.65, giving an ideal spot size of 1.225 µm. If the diffraction limit at infrared wavelengths is considered instead, Airy-disk theory gives w_0 = 1.22λ/NA, which yields 2.346 µm. In practice there are several ways to measure the spot size, for example with a sharp blade edge or a calibrated grid [26,27]. Using the blade-edge method, the spot size was measured as 2.7 µm, which is closer to the Airy-disk value. Because the beam profile is Gaussian, mathematical analysis [28] shows that the spatial resolution of the injected beam can be adjusted by choosing different laser energies [25].
Verification of Laser Energy
The input energy of the pulsed laser, equal to the average power divided by the repetition rate, is very important for SEE testing. Unfortunately, it is difficult to measure the power under the objective lens continuously during testing. To solve this problem, a power meter (P1, FieldMaxII, Coherent) placed behind the Pockels cell continuously records the power entering the microscope. At the same time, another power meter (P2, PM100USB, Thorlabs, Inc., Newton, NJ, USA) is placed under the objective lens to record the power that will be injected into the DUT. There is a solid linear relationship between P1 and P2, so the power under the objective lens (P2) can be obtained by reading P1 in real time. To verify this relationship, a look-up table of corresponding P1 and P2 readings is recorded, and both meters are then monitored throughout the day. The data show that the output power of the laser source varies over a range, but if the laser source is tuned so that the P1 reading returns to a given value, the corresponding P2 reading returns to the value stored in the look-up table. To verify the stability of the laser energy, different amounts of energy were injected into the DUT and the corresponding numbers of generated errors were recorded. In Figure 3, the DUT is a flip-flop (FF) chain fabricated in a bulk process. Two energies (100 nJ and 50 nJ) were injected into a given FF area of the test chip, and the numbers of generated errors followed the same trend, halving as the input energy was halved.
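The following sketch reproduces the two spot-size estimates quoted above and illustrates the P1-to-P2 look-up calibration; the calibration pairs are placeholders, not measured values from the paper.

```python
import numpy as np

WAVELENGTH_UM = 1.25   # 1250 nm beam
NA = 0.65              # numerical aperture of the objective

# Gaussian-beam estimate used in the text (w0 = 2*lambda/(pi*NA)) and the
# Airy-disk estimate (w0 = 1.22*lambda/NA).
w0_gauss = 2 * WAVELENGTH_UM / (np.pi * NA)   # ~1.225 um
w0_airy = 1.22 * WAVELENGTH_UM / NA           # ~2.346 um
print(f"Gaussian estimate: {w0_gauss:.3f} um, Airy estimate: {w0_airy:.3f} um")
# The knife-edge measurement quoted in the text (2.7 um) is closest to the Airy value.

# P1 -> P2 calibration: fit a line through a recorded look-up table so the
# power under the objective (P2) can be inferred from the in-line meter (P1).
# The numbers below are placeholders for the recorded pairs.
p1_mw = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
p2_mw = np.array([0.21, 0.43, 0.85, 1.70, 3.38])
slope, intercept = np.polyfit(p1_mw, p2_mw, 1)

def p2_from_p1(p1: float) -> float:
    """Estimate the power delivered under the objective from the P1 reading."""
    return slope * p1 + intercept

print(f"P1 = 5.0 mW -> estimated P2 = {p2_from_p1(5.0):.2f} mW")
```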
The percentages in brackets represent the waveform noise level of the laser pulses. Comparing the error counts with the noise level also reveals how the error count changes with noise, which helps to identify a stable working condition for the laser system. Figure 3 contains five groups of data with the noise level increasing from 12% to 62%. In practice, if the noise level is kept below 20%, the error count at 50 pJ is close to half the error count at 100 pJ, which means the laser system is working in a stable state; as the noise level increases, the total number of errors generated at both energies declines, and the error count at 50 pJ deviates from half the error count at 100 pJ.
Verification of Temporal Control by the Pockels Cell
The Pockels cell is a voltage-controlled, high-speed electro-optical device. Since the repetition rate of the pulsed laser in our testing is 10 kHz, the opening window of the Pockels cell is set to 100 µs. In real SEE testing, the beam scanner module can inject a single laser pulse at each step inside a region of interest (ROI) of the DUT. With this Pockels-cell setup, a defined number of laser pulses can be injected into the ROI. As a result, the injected laser energy can be calculated precisely and then compared with the LET value used in heavy-ion testing.
Verification of the Beam Scanner Module (BSM)
One remarkable feature of our testing system is the beam scanner module, which injects the laser pulse into a chosen ROI of the DUT instead of keeping the beam still and moving the stage to scan the ROI. The BSM consists of a pair of highly reflective mirrors, each controlled by a voltage signal. Compared with a stage-moving system, the BSM has the following advantages: (1) it moves the laser beam across the ROI at high speed; (2) it positions the beam precisely inside the ROI; and (3) because the stage is stationary, no mechanical vibration is applied to the DUT.
An SRAM chip was used to verify the performance of the BSM. As shown in Figure 4, an SRAM block named "Regular_11T" contains 256 SRAM words (each with 8 SRAM cells) arranged as 64 rows by 4 columns. Each SRAM word has a unique physical address, and if an error is generated inside a word, the address of that word is recorded automatically. In the verification, laser pulses were injected into the words at (1,0), (1,1), (2,0), and (2,1). The corresponding addresses and generated errors are shown in Table 2. The results show that the address difference between words (1,0) and (2,0) is 1, whereas the address difference between words (1,0) and (1,1) is 64. This verifies that the BSM can inject the laser beam into the DUT precisely.
SRAM Chip Used in the Testing
The SRAM test chip contains five SRAM blocks with different structures: traditional_6T, Layout Design through Error-Aware Transistor Positioning_11T (LEAP_11T), Quatro_10T, Regular_11T, and Proposed_6T. The block diagram and die image are shown in Figure 5.
Threshold Energy for Generating Errors in Different SRAM Structures
The threshold energy was measured for four SRAM structures: traditional 6T, LEAP 11T, Quatro 10T, and Regular 11T.
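A small sketch of the pulse gating and the word-address check described above. The linear address formula is an assumption chosen to reproduce the reported differences of 1 (adjacent rows) and 64 (adjacent columns); the real decoder may differ.

```python
# Pockels-cell gating: how many pulses pass through one opening window.
REPETITION_HZ = 10e3
pulse_period_s = 1.0 / REPETITION_HZ            # 100 us between pulses at 10 kHz
window_s = 100e-6                               # Pockels cell opening window
pulses_passed = int(window_s / pulse_period_s)  # -> 1 pulse per opening
print(f"{pulses_passed} pulse passes per {window_s*1e6:.0f} us window")

ROWS = 64  # 64 (row) x 4 (column) words in the Regular_11T block

def word_address(row: int, col: int) -> int:
    """Assumed linear address: step of 1 along rows, 64 between columns."""
    return col * ROWS + row

print(word_address(2, 0) - word_address(1, 0))  # 1
print(word_address(1, 1) - word_address(1, 0))  # 64
```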
From the viewpoint of circuit structure, the traditional 6T is the most vulnerable to radiation because it uses only two cross-coupled inverters to store the bit. The Regular 11T uses a cascode voltage switch logic (CVSL) structure with four complementary nodes representing the two logic states, which gives it an error-correction capability. The LEAP 11T is a modified version of the Regular 11T: it keeps the original structure but relocates the PMOS and NMOS transistors so that the collected charge cancels internally. Finally, the Quatro 10T has a symmetrical structure with the best radiation tolerance, maintaining its original state by turning off certain transistors [29].
In the threshold-energy testing, with an all-0 data pattern stored in the SRAM, laser pulses with energies from 10 pJ to 100 pJ, in steps of 10 pJ, were injected into a single SRAM cell of each structure. Once the threshold energy for generating an error was found, the measurement was repeated three times for verification. The results are shown in Table 3: the Quatro structure is the least sensitive design, whereas the traditional 6T structure is the most sensitive. For comparison, results from earlier alpha-particle testing of the same SRAM chip are also listed in Table 3 [30]. The results clearly show the sensitivity of each structure. At the same supply voltage (chip drive voltage), the traditional 6T is more sensitive than the other three, the Quatro 10T is always the least sensitive, and the sensitivities of the Regular 11T and LEAP 11T lie between these two and are equal to each other. Finally, Equation (8) can be used to calculate a theoretical number of free charges generated by each threshold energy for a single laser pulse. For this calculation the pulse duration T ≈ 200 × 10⁻¹⁵ s, r = 2.7 µm, and w_0 = 1.225 µm, and the value of P_0 is obtained from each threshold energy. Column 4 of Table 3 lists the resulting theoretical numbers of free charges. The alpha-particle results are consistent with the laser results: the most sensitive design, the traditional 6T, has the lowest threshold energy (30 pJ), whereas the least sensitive design, the Quatro 10T, has the highest (70 pJ). The Regular 11T and LEAP 11T have the same threshold energy of 50 pJ, which is reasonable because they share the same structure, and the alpha-particle testing also showed the same sensitivity for these two designs.
Energy Distribution along the Z Axis
As discussed in Part 1, Equation (8) shows that charge generation depends strongly on the injected laser power. It is therefore important to study the distribution of the laser energy along the Z axis so that the active zone of the focused beam can be determined. From Equation (4), if the position is within the Rayleigh range z_0, the beam radius w can be taken as equal to the focused beam radius w_0, and Equation (4) can be rewritten as Equation (10), where z is a point on the beam axis, I_0 is the focused beam irradiance, and w_0 is the focused beam radius. Here we let I = I_0/e, with w_0 measured as 2.7 µm; solving Equation (10) gives r = 1.91 µm, which means the focused zone along the Z axis is taken to be ±1.91 µm. In the testing, the laser beam was first focused at the best position with the threshold energy.
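A hedged sketch of the two calculations described above. Part (a) uses an assumed TPA-only form of Equation (8), N ≈ β₂I₀²τ/(2hν) with I₀ = 2P₀/(πw₀²) and P₀ ≈ E/τ, since the paper's exact expression was lost in extraction; part (b) solves exp(−2r²/w²) = 1/e with the measured w = 2.7 µm, which reproduces the quoted ±1.91 µm figure.

```python
import math

H_NU_J = 1.602e-19        # photon energy ~1 eV at 1250 nm, in joules
BETA2_CM_PER_W = 1.75e-9  # 1.75 cm/GW
TAU_S = 200e-15           # pulse duration
W0_CM = 1.225e-4          # focused beam radius, 1.225 um

def carrier_density_cm3(pulse_energy_j: float) -> float:
    """(a) Assumed Eq. (8): on-axis TPA carrier density for one pulse."""
    peak_power_w = pulse_energy_j / TAU_S             # crude rectangular-pulse estimate
    i0 = 2.0 * peak_power_w / (math.pi * W0_CM ** 2)  # W/cm^2 on axis
    return BETA2_CM_PER_W * i0 ** 2 * TAU_S / (2.0 * H_NU_J)

for e_pj in (30, 50, 70):                             # threshold energies from Table 3
    print(f"{e_pj} pJ -> ~{carrier_density_cm3(e_pj * 1e-12):.2e} carriers/cm^3")

# (b) radial distance at which a Gaussian irradiance exp(-2r^2/w^2) falls to 1/e:
w_um = 2.7
print(f"1/e radius: {w_um / math.sqrt(2):.2f} um")    # ~1.91 um
```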
The beam was then moved upwards in steps of 1 µm, and at each position the threshold energy was recorded to monitor the change in energy. After that, the beam was moved downwards from the best focus position and the same procedure was repeated. The data are shown in Table 4. From the results, the active zone along the Z axis extends from −3 µm to +2 µm, which is quite close to the calculated value of ±1.91 µm. If the relationship between energy and generated errors is converted, a laser beam profile along the Z axis is obtained, as shown in Figure 6.
Sensitivity Map and Simulation for the Quatro SRAM Structure
A dedicated system was developed for the mapping test. Three pieces of hardware are involved: a desktop computer controls the laser and generates the sensitivity map; a Raspberry Pi controls the FPGA and generates the SEE error information; and an FPGA board with an SRAM daughter card generates the output data. The overall data flow is shown in Figure 7. The Quatro SRAM structure was first proposed in 2009; it contains 10 transistors (6 NMOS and 4 PMOS) and has a self-correcting capability against soft errors [29]. The schematic and layout of this structure are shown in Figure 8.
First, the stored data pattern was set to all '0'. From Figure 8a, the nodes 'A', 'B', 'C', and 'D' are then '0', '1', '1', and '0'. If transistor N3 is struck by a laser pulse, the voltage of its drain (node 'B') drops, and as a result transistors N1 and N4 may be turned off. This can trigger a cascade that pulls up and flips the voltage of node 'D', then pulls down and flips the voltage of node 'C', eventually changing the voltage of node 'A' from '0' to '1' and generating an error. The corresponding sensitivity map is shown in Figure 9. The whole SRAM Quatro cell measures 1.10 µm by 1.80 µm, and the scanning step size is 0.18 µm. Consistent with the circuit analysis, the area around node 'B' appears red in this sensitivity map, identifying it as the sensitive area for this setup.
With the stored data changed to all '1', if transistor N1 is struck by a laser pulse, the voltage of its drain (node 'A') drops, and transistors N2 and N3 may be turned off. This can trigger a cascade that pulls down and flips the voltage of node 'D' and correspondingly pulls up and flips the voltage of node 'C', eventually generating an error in the circuit. The corresponding sensitivity map is shown in Figure 10, with the same dimensions as before. Because the stored data are all '1', the sensitive area switches to the right part of the cell and covers the node 'A' area, which is also consistent with the analysis.
TFIT Simulation of the Quatro Structure
TFIT is a commonly used commercial simulation tool based on a SPICE netlist and the chip layout. It can simulate the sensitive area of a given device for different energy levels (LET values). To examine the validity of the laser SEU mapping results, the 10T Quatro cell was simulated with the IROC TFIT tool using a 65 nm CMOS process model. A similar device model was created and calibrated in previous work [31-33]. As shown in Figure 11, TFIT simulation results were obtained with LET values from 0.01 to 0.50 pC/µm for the "00H" and "FFH" data patterns, respectively. The background of the SEE map is the active region of the memory cell, and the locations of upsets induced by ion strikes are identified by colored squares.
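A schematic sketch of the raster-scan loop behind the sensitivity maps described above; fire_pulse_at and read_error_count are hypothetical stand-ins for the laser and FPGA control calls, which the paper does not describe at code level.

```python
import numpy as np

# Step the beam scanner over the Quatro cell footprint (1.10 um x 1.80 um,
# 0.18 um steps), fire one gated pulse per point, and accumulate the error
# count read back from the FPGA / Raspberry Pi.
CELL_X_UM, CELL_Y_UM, STEP_UM = 1.10, 1.80, 0.18

def fire_pulse_at(x_um: float, y_um: float) -> None:
    pass  # placeholder: command the beam scanner module and open the Pockels cell

def read_error_count() -> int:
    return 0  # placeholder: query the FPGA for upsets recorded since the last read

xs = np.arange(0.0, CELL_X_UM + 1e-9, STEP_UM)
ys = np.arange(0.0, CELL_Y_UM + 1e-9, STEP_UM)
error_map = np.zeros((len(ys), len(xs)), dtype=int)

for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        fire_pulse_at(x, y)
        error_map[iy, ix] = read_error_count()

# error_map can then be rendered as the color-coded sensitivity map (cf. Figures 9-10).
```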
The color corresponds to the LET value. The TFIT simulation shows that the drain areas of the OFF driver NMOS (node A/node B) and the OFF pull-up PMOS (node C/node D) are sensitive to SEU for the "FFH" and "00H" data patterns, respectively. These SEU-sensitive regions are consistent with the laser SEU sensitivity mapping results and with previous work. Note that the sensitive area obtained in the mapping results at the threshold laser energy is much larger than that obtained in the simulation, owing to the laser spot size and charge-carrier diffusion [24]. Figures 9 and 10 show that the resolution of the laser SEU sensitivity mapping technique is about 0.5 µm, indicating that laser mapping is reliable for this deep-submicron technology. The simulation shows that the most sensitive area (red) is located at node 'B' with the all-'0' data pattern and at node 'A' with the all-'1' data pattern, consistent with our test results.
Conclusions
TPA laser facilities have been widely used for SEE testing in recent years; however, several uncertainties remain in practical testing. This paper derives an equation that predicts the number of charges generated by the TPA process. Several verifications were then performed to establish clear practical methods for keeping the system in an optimal working state. Three sets of measurements followed. First, the threshold laser energies for different SRAM structures were confirmed against data from earlier alpha-particle testing, which demonstrates the energy resolution of the laser system. Second, the energy distribution along the Z axis was obtained and found to be close to the theoretical prediction, which provides useful guidance for practical TPA laser testing. Finally, a sensitivity map of the SRAM Quatro structure was obtained with a real-time mapping system; the resulting sensitive area is consistent with the theoretical analysis and is further verified by the TFIT simulation results. In addition, a mapping resolution of about 500 nm was achieved, which provides a useful way to study the sensitivity inside devices at the micrometer scale. In the future, more work will be carried out to decrease the spot size and to improve the spatial resolution of the BSM in our laser system. A smaller spot size will give finer resolution and help us reach current 28 nm devices or even smaller ones, and an improved BSM will allow the laser beam to be positioned more precisely.
Fruit Fly in a Challenging Environment: Impact of Short-Term Temperature Stress on the Survival, Development, Reproduction, and Trehalose Metabolism of Bactrocera dorsalis (Diptera: Tephritidae)
Simple Summary
Bactrocera dorsalis (Hendel) is a widespread and economically important insect pest that infests various fruits and vegetables. Because of unstable weather in early spring and autumn, extreme cold or heat can develop within a short period of time. Exposure to sudden short-term high or low temperatures may affect the reproduction, development, and physiology of B. dorsalis. In this study, we determined the effects of short-term temperature treatments on the growth, development, fecundity, and trehalose metabolism of B. dorsalis. The results showed that development and reproduction of the flies were negatively affected when the temperature was below 10 °C or above 31 °C, with extreme temperatures even causing permanent sterility. The changes in glucose, glycogen, trehalose, and trehalose-6-phosphate synthase levels correlated with the population dynamics of the fruit fly. The present study provides a scientific basis for population monitoring, prediction, and comprehensive management of this pest.
Abstract
An understanding of the physiological damage and population effects caused by unfavorable temperatures plays an important role in pest control. To clarify the adaptability of B. dorsalis to different temperatures and its physiological response mechanisms, we examined the pest's ability to adapt to environmental stress from physiological and ecological viewpoints. In this study, we explored the relationship between population parameters and the responses of glucose, glycogen, trehalose, and trehalose-6-phosphate synthase (TPS) to high and low temperatures. Compared with the control group, temperature stress delayed development at all stages, and survival rates and longevity decreased gradually as the temperature decreased to 0 °C or increased to 36 °C. Furthermore, as the low temperature decreased from 10 °C to 0 °C, the average fecundity per female increased at 10 °C but decreased thereafter. Reproduction was negatively affected by high-temperature stress, reaching its lowest value at 36 °C. In addition to significantly affecting these biological characteristics, temperature stress influenced the physiological cold and heat tolerance of B. dorsalis. When the temperature deviated markedly from the norm, the levels of substances associated with temperature resistance changed: glucose, trehalose, and TPS levels increased, whereas glycogen levels decreased. These results suggest that temperature stress has a detrimental effect on population survival, but that the metabolism of trehalose and glycogen may enhance the pest's temperature tolerance.
Introduction
The oriental fruit fly, Bactrocera dorsalis (Hendel), has a wide host range and primarily infests over 250 fruit and vegetable species belonging to 46 families [1,2]. This species has established stable populations in tropical and subtropical areas and is a destructive pest in Fujian, Hainan, Guangxi, Guizhou, Yunnan, and other provinces of China, where it has spread rapidly and rampantly in the south and southwest [3]. Owing to its wide geographical distribution, broad host range, and economic impact, B. dorsalis is considered a major threat to global agriculture [4].
Temperature is considered a key factor that influences insect biology and behavior [5,6], affecting them directly by controlling their physiological tolerance limits or indirectly by causing phenological changes and nutrient levels in adjacent environments [7,8]. Several studies have been conducted over the last few decades focusing on the survival and adaptation strategies of ectotherm animals under temperature stress [9]. Although many insects have evolved mechanisms that enable them to respond effectively to normal temperature changes, they continue to be affected by abnormal climate change in nature, which sometimes exceeds their physiological limit and results in the death [10,11]. For instance, a high temperature of 45 • C and a low temperature of −10 • C significantly affected the development and feathering rhythm of Sarcophaga crassipalpis [12]. Al-Behadili reported that pupation and adult emergence of Ceratitis capitata is significantly reduced after low temperature exposure [13]. In addition, Frankliniella occidentalis exhibits altered reproductive adaptation and levels of small molecular compounds such as carbohydrates after exposure to 45 • C [14]. Insects have evolved various strategies to adapt to harsh environmental conditions and respond to external temperature stress through various physiological and metabolic adaptations [15]. For example, heat shock proteins are known to be one important mechanism for resisting the stress caused by an external temperature change [16]. In addition, trehalose is reported to be an important stress metabolite associated with insect tolerance, protects cells from dehydration, drought, heat stress, freezing, and high osmotic pressure [17][18][19]. This sugar can be converted to glucose via hydrolysis, and glucose can be interconverted to glycogen during carbohydrate metabolism in insects. Trehalose contributes to energy metabolism to maintain insects' life activities such as growth, development, molting, flight, and chitin biosynthesis [20,21]. For example, Drosophila melanogaster has a significantly higher level of trehalose, which protects the structure of biological macromolecules from degradation in a dry and water-stressed environment [22]. After 3 h of low-temperature treatment, the trehalose level increased five-or sixfold in Heterorhabditis bacteriophora compared with that in the control group [23]. Furthermore, the levels of low-molecular-weight sugars in Myzus persicae gradually reduced at repeated high-temperature exposure to resist temperature-related damage [24]. At present, some studies regarding B. dorsalis biology focus on constant and lethal temperatures [36][37][38][39], but the temperatures often deviate far from the normal temperature in natural environment. In order to identify the physiological responses and developmental parameters of oriental fruit fly to temperature stress, we detected the effects of various short-term high-and low-temperature treatments on the survival rate, fecundity, levels of trehalose, and other resistant substances in B. dorsalis. This study aims to provide a better understanding of tolerance strategy in the fly, and predict its development, survival, and reproduction in environments with different temperature stress to develop better management of the pest. Test Insects Larvae or adult B. dorsalis were collected in 2018 from infected kiwifruits in Shuicheng County, Guizhou, China (26 • 33 6.264 N, 104 • 57 55.404 E; alt. 1796 m). 
They were reared in a growth chamber (26 ± 1 • C, 70 ± 10% RH, and 14/10 h light/dark cycle). Colonies of B. dorsalis were regularly rejuvenated with wild ones. Adults were housed in screen cages (30 cm × 30 cm × 30 cm) and supplied with artificial food and water. Eggs and larvae were given a diet mixture with 7.2 g wheat bran, 8.4 g bean dregs, 10.8 g yeast, 14.4 g sugar, 73.2 mL H 2 O, and other micronutrient antibiotics [40]. Adult flies were cultured using 60 g sugar, 20 g yeast powder, 6 g agar powder, 5 g peptone, 0.6 g methyl 4-hydroxybenzoate, 0.5 g sorbic acid, and 500 mL H 2 O [41]. The mature larvae were placed in 3 or 4 cm thick loose, moist sand soil to pupate. Healthy adults at 3-6 d of emergence were provided with bananas to lay eggs. The bananas were pierced with a toothpick before placing in cages [42]. Females readily laid eggs in the holes created by the toothpick. The areas with eggs were then washed and collected in water. Fresh eggs, larvae, pupae, and adults were selected as test insects and were assigned to each of 5 temperature regimens. Temperature Treatments and Materials The experimental group were subjected to short-term high-and low-temperature treatments at 0 • C, 5 • C, 10 • C, 31 • C, and 36 • C, corresponding to the temperature fluctuations that are common in agroecosystems. After 12 h in temperature-controlled incubators set to one of the above temperatures, the flies were removed and placed in a climatic chamber (26 • C) for further processing and observation. The control group was subjected to a constant temperature of 26 • C. A disposable plastic cup with a lid (top diameter: 4 cm, bottom diameter: 3 cm, and height: 3 cm) was used as the oviposition device, which was punctured evenly to make approximately 80 holes (<2 mm in diameter) from the bottom [39]. Next, banana pieces (approximately 3 cm 3 ) were placed in egg collection cups for adult oviposition. The cups were covered tightly. The bananas were purchased from a supermarket, washed, and refrigerated at 4 • C. Short-Term Temperature Stress on Egg, Larval, and Pupal Developments One day old eggs were used for this experiment. With a fine brush, 20 fruit fly eggs were placed on glass dishes (d = 10 cm) containing a 1.5 cm thick artificial diet. Exposure in an incubator that set at one of the 5 temperature stresses mentioned above for 12 h, and then the egg containing dishes were transferred to a climate chamber (26 • C) for rearing the insects until they hatched. The development and survival of the eggs were observed under a stereoscopic microscope every 24 h until the eggs turned black or died. Four replicates were used per treatment. For this study, 1-day-old larvae were captured and divided in groups of 20 larvae each. They were fed with an artificial diet of nearly 1.5 cm thickness placed on glass dishes. The larvae containing dishes were placed in the incubator that had been preset at a predetermined temperature for 12 h. The dishes were then moved to another chamber (26 • C) to continue rearing and ensuring adequate food supply. After the larvae attained maturity (exhibited bouncing behavior), they were placed in boxes (length: 17 cm, width: 11 cm, and height: 7 cm) filled with 3 or 4 cm sand. The larvae were observed under a stereoscopic microscope every day until pupation or death (larvae that died were counted as a straight body and no response to a mild stimulation with a fine brush was recorded). Each treatment was performed four times. 
Next, 1-day-old pupae were divided in groups of 20 each and were placed on a glass dish lined with aqueous filter paper. The dishes were placed in a climatic chamber set at a predetermined temperature for 12 h. Then, they were transferred and allowed to incubate (26 • C). The moisture of filter paper was maintained until emergence. The development and survival of pupae were recorded every 24 h until all pupae emerged or died (black or dried-out pupae were considered dead). Each different temperature was considered a treatment, each treatment was repeated four times. Short-Term Temperature Stress on the Survival, Longevity, and Fecundity of Adults Adults were selected within 24 h of emergence. Each pair of male and female insects were placed in a rearing box (length: 17 cm, width: 11 cm, and height: 7 cm). Boxes containing a dish (3.5 cm in diameter) with a moistened cotton ball and a 1 cm 3 artificial diet were subjected to temperature treatment for 12 h. After exposure to temperature stress, the adult insects were moved to 26 • C and supplied with fresh food. After 5 d of rearing, homemade oviposition cups containing fresh banana pieces were placed in the boxes to follow female insects' oviposition. Food and oviposition cups were replaced every day. The preoviposition, reproduction, and longevity of adult insects were observed daily until the death of the last fly. Individual survival was recorded every day until 4 d. Eight pairs of male and female insects were used for each treatment, and four replicates were used per treatment. Short-Term Temperature Stress on the Glucose, Glycogen, and Trehalose Levels of Adults Glucose level was measured using the glucose Assay Kit (Suzhou Michy Biomedical Technology Co., Ltd., Suzhou, China). Adult flies subjected to various temperature treatments were separately weighed (0.1 g fresh body weigh) in 1.5 mL tubes. They were rapidly frozen in liquid nitrogen. Then, 1 mL of distilled water was added to the tubes, and the contents were homogenized using a freezing grinder, followed by soaking them for 10 min in a 95 • C water bath. After cooling, the homogenates were centrifuged at 8000× g for 10 min at room temperature. The supernatant was taken, and extracts were spiked following the manufacturer's instructions. The absorbances of the obtained suspensions were recorded at 505 nm using a microplate reader. There were five replicates in each treatment. Glycogen level was determined using the glycogen Reagent Kit (Suzhou Michy Biomedical Technology Co., Ltd., Suzhou, China). In brief, 0.1 g of adult insects was weighed in 1.5 mL tubes, which were rapidly frozen in liquid nitrogen. Next, 0.75 mL of extraction solution was added to the tubes. The contents were ground using a freezing grinder, then homogenates were transferred to 10 mL tubes and shaken once every 5 min for 20 min in a 95 • C water bath. After the tissues were dissolved, distilled water was added to the tubes to adjust the volume to 5 mL. Afterwards, the samples were cooled and stirred. The mixture was centrifuged at 8000× g for 10 min at 25 • C to obtain the supernatant. A sample of each tube was spiked in strict accordance with the manufacturer's instructions. Absorbances were measured at 620 nm using a microplate reader. Each different temperature was considered a treatment, and five replicates were used per treatment. Trehalose level was measured using a trehalose Reagent Kit (Suzhou Michy Biomedical Technology Co., Ltd., Suzhou, China). 
In brief, 0.1 g of adult insects subjected to various temperature treatments was rapidly frozen in liquid nitrogen and weighed in 1.5 mL Eppendorf tubes. Then, 1 mL of the extraction solution was added, the tube contents were thoroughly ground using a freezing grinder. After 45 min, the samples were left at room temperature and shaken 3-5 times. The samples were centrifuged at 10,000× g for 10 min at 25 • C, and the supernatant was used for further analysis. The sample was spiked in accordance with the manufacturer's instructions. Absorbances were measured at 620 nm using a microplate reader. Each treatment was repeated five times. Short-Term Temperature Stress on Adult TPS Activity TPS activity was determined using the TPS Reagent Kit method (Suzhou Michy Biomedical Technology Co., Ltd., Suzhou, China). Samples were rapidly frozen in liquid nitrogen. Next, 1 mL of extraction solution was added, and then the mixture was ground using a freezing grinder. Samples were centrifuged at 8000× g for 10 min at 25 • C. The supernatant was chilled and subjected to TPS activity assessment in accordance with the kit manufacturer's instructions. Absorbances were measured at 505 nm using a microplate reader. Five replicates (six adults per replicate) were used per treatment. Protein level of the adult insects was measured via bicinchoninic acid method [43] with the Protein Assay Kit (Suzhou Kemin Biotechnology Co., Ltd., Suzhou, China). Absorbances were measured at 562 nm using a microplate reader. Statistical Analysis One-way analysis of variance (ANOVA) followed by Tukey' s multiple range test (p < 0.05) using IBM SPSS Statistics 19.0 was used to analyze the developmental periods, sex ratio, preoviposition period, fecundity, and adult longevity data. Paired t-test was used to determine the difference in longevity between male and female at the same temperature [44]. Before performing ANOVA, the data were subjected to exploratory analyses to verify normality and homogeneity respectively using the Shapiro-Wilk and Levene's tests. All values were expressed as means ± standard errors in the study. The percentage of sex ratio datasets was arcsine-transformed to improve normality and homoscedasticity. The bar graphs were created using Origin 2019 in the text. The estimated survival rate was analyzed using log-rank (Mantel-Cox) test with Kaplan-Meier analysis (p < 0.05). The figures were created using GraphPad Prism 8. When the factors varied significantly from homogeneous distribution, paired tests were performed on preordered means [45]. Effects of Short-Term Temperature Stress on Survival Rates The survival rates of all stages of B. dorsalis were significantly affected by different short-term temperature treatments ( Figure 1). After low temperature treatments, egg and larval stages had the lowest survival. When treated at 0 • C for 12 h, the survival rates of egg and larva were 47.50% at 3 d and 21.25% at 10 d, respectively. As the pupae and female adults were exposed to a short-term high-temperature treatment at 36 • C, their survival rates at 3 d still reached to 86.25% and 84.37%, respectively. In addition, the percent survival of male adults after exposure to 0 • C was significantly lower than that of the control, was 53.13% (χ 2 = 9.021; p < 0.05). After temperature stress, the survival rate of all stages reached to the lowest at 0 • C. The survival rates of B. dorsalis with the above-mentioned treatments except pupae decreased sharply on the first day after treatments. 
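A minimal sketch of the statistical workflow laid out in the Statistical Analysis paragraph above, run on made-up numbers (three temperature groups, four replicates each). The actual analysis was performed in IBM SPSS, so this is only an illustration of the same sequence of checks.

```python
import numpy as np
from scipy import stats

# Hypothetical fecundity replicates per treatment (eggs per female).
fecundity = {
    "0C": [510, 540, 500, 528],
    "10C": [760, 790, 770, 781],
    "26C": [700, 730, 710, 724],
}
groups = [np.array(v, dtype=float) for v in fecundity.values()]

# Normality and homogeneity checks before the ANOVA.
for name, g in zip(fecundity, groups):
    w_stat, p_norm = stats.shapiro(g)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")
lev_stat, p_lev = stats.levene(*groups)
print(f"Levene p = {p_lev:.3f}")

# One-way ANOVA across treatments (Tukey's post hoc test would follow,
# e.g. with statsmodels' pairwise_tukeyhsd).
f_stat, p_anova = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Proportion data (e.g., female ratio) are arcsine-square-root transformed first:
female_ratio = np.array([0.42, 0.48, 0.51])
transformed = np.arcsin(np.sqrt(female_ratio))
```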
Effects of Short-Term Temperature Stress on Developmental Periods
As shown in Table 1, significant differences in the egg, larval, and pupal stages were noted after the various temperature treatments. At low temperatures, both the egg and larval stages developed for significantly longer than in the control group, with the egg stage prolonged by more than twofold at 0 °C. The developmental durations of eggs, larvae, and pupae were shortest at 26 °C and were prolonged after exposure to 0 °C, 5 °C, and 36 °C. The developmental periods of pupae were significantly longer after exposure to ≤5 °C than after exposure to other temperatures. Overall, short-term temperature treatments exerted a greater effect on the egg and larval stages than on the pupal developmental period, implying that pupae are more tolerant of temperature stress.
Effects of Short-Term Temperature Stress on the Female Ratio in the Emerged Pupae and the Preoviposition Period
No significant differences were noted in the female ratio among emerged pupae of B. dorsalis subjected to the various short-term temperature treatments (Figure 2). The proportion of females among emerged pupae nevertheless decreased gradually as the temperature increased or decreased, reaching a minimum of 42.30 ± 3.33% at 0 °C.
Figure 2. Female ratio in the emerged pupae (a) and preoviposition period (b) of adults after 12 h of high- and low-temperature treatments. Different lowercase letters denote significant differences among insects subjected to various temperature treatments (p < 0.05, Tukey's test). Data are expressed as means ± standard errors of four replicates.
The preoviposition duration of B. dorsalis was significantly affected by 12 h of exposure to the various temperature treatments (F5,18 = 10.11; p < 0.05). Compared with the control group, preoviposition was significantly prolonged at a treatment temperature of 0 °C, reaching 12.97 ± 0.69 d. Preoviposition was shortest at 26 °C (9.37 ± 0.19 d). With increasing temperature, the preoviposition period of B. dorsalis first shortened and then lengthened. No significant difference was noted between control insects and those exposed to 36 °C (10.91 ± 0.23 d).
Effects of Short-Term Temperature Stress on Fecundity
Subjecting adults to short-term high- and low-temperature treatments significantly affected their fecundity (F5,18 = 8.805; p < 0.05) (Figure 3). Under short-term high-temperature treatment, the egg number per female decreased gradually with increasing temperature; when the temperature was increased to 36 °C, the number of eggs laid per female was significantly lower than in the control group. Under low-temperature treatments, the egg number per female first increased slightly at 10 °C and then decreased as the temperature fell further. The maximum fecundity was noted at 10 °C (775.53 ± 31.10 eggs), slightly higher than in the control group (716.05 ± 44.38 eggs). Egg production at 5 °C and 0 °C was altered significantly, with the lowest number of eggs (519.63 ± 40.70) produced at 0 °C.
Effects of Short-Term Temperature Stress on Longevity
Exposure to the various short-term temperatures significantly affected the longevity of adults (Figure 4). When the treatment temperature was decreased to 0 °C, female longevity fell to 60.53 ± 1.92 d. In general, the longevity of the fruit flies decreased gradually with short-term temperature treatments.
Figure 3. Fecundity per female exposed to various short-term high- and low-temperature treatments. Different lowercase letters indicate significant differences among insects subjected to various temperature treatments (p < 0.05, Tukey's test). Data are expressed as means ± standard errors of four replicates.
Effects of Short-Term Temperature Stress on Low-Molecular-Weight Carbohydrate Levels
As shown in Figure 5, significant differences in the levels of glucose (F5,24 = 10.757; p < 0.05), glycogen (F5,24 = 14.173; p < 0.05), and trehalose (F5,24 = 20.369; p < 0.05) were detected after the various short-term temperature treatments. The glucose level increased from 12.71 ± 0.87 mg/g of fresh body weight at 26 °C to 21.70 ± 1.02 mg/g of fresh body weight at 5 °C. However, it decreased as the temperature dropped to 0 °C, and no significant difference was noted between the results at 0 °C and in the control group. The glucose level also increased at high temperature, but the difference between the levels at 31 °C and 36 °C was not significant.
Figure 5. Glucose (a), glycogen (b), and trehalose (c) levels of adult B. dorsalis exposed to various short-term high- and low-temperature treatments. Data are presented as means ± standard errors of five replicates. Different lowercase letters indicate significant differences (p < 0.05, Tukey's test).
Compared with the high- and low-temperature treatments, the glycogen level was highest in the control group, at 12.77 ± 0.72 mg/g of fresh body weight, and lowest at 0 °C, at 5.71 ± 0.75 mg/g of fresh body weight. When the treatment temperature dropped to 0 °C, there was a significant difference between the glycogen content at 0 °C and that of the control group. Furthermore, the glycogen level at 36 °C was significantly lower than at the control temperature. Overall, as the temperature increased or decreased, the glycogen level decreased.
Trehalose was the major carbohydrate found in B. dorsalis adults under the various short-term temperature stress conditions. The trehalose level was 16.47 ± 1.78 mg/g of fresh body weight in the control group. It increased with decreasing temperature and peaked at approximately 39.06 ± 1.40 mg/g of fresh body weight at 5 °C, after which the level decreased slightly under further temperature stress. A significant difference was noted between the results at 0 °C and in the control group. Similar trends were observed during the high-temperature treatments.
Effects of Short-Term Temperature Stress on TPS Activity of Adults
The activity of TPS in B. dorsalis depended significantly on the short-term temperature treatment (Figure 6) (F5,24 = 19.076; p < 0.05). Significant differences in TPS activity were observed at all low temperatures, with the highest activity detected at 5 °C (0.51 ± 0.02 nmol/min/mg prot). The activity decreased as the temperature dropped below 5 °C, but at 0 °C it was still more than twice that of the control group (0.17 ± 0.03 nmol/min/mg prot). TPS activity was also altered in response to increases in the treatment temperature. Treatment at 36 °C resulted in an activity of 0.12 ± 0.02 nmol/min/mg prot, but the difference from the control group was not significant; thus, short-term high-temperature treatments exerted no significant effect on this enzyme's activity in B. dorsalis.
Figure 6. Trehalose-6-phosphate synthase activity of adult B. dorsalis exposed to various high- and low-temperature treatments. Different lowercase letters indicate significant differences (p < 0.05, Tukey's test).
Data are expressed as means ± standard errors of five replicates. Discussion Insects exposed to heat or cold stresses may exhibit altered behavior, morphology, life history, and physiological characteristics, which negatively or positively affected their population [12,15,46]. In this study, we examined how short-term high-and low-temperature treatments affect the growth, development, fecundity, and sugar metabolism of B. dorsalis. In addition to having a harmful effect on phenology of the fly, our findings are indicative of B. dorsalis being more tolerant to high-temperature stresses compared to low-temperature stresses. The similar results have been showed in previous study [39,47]. This might be an explanation for its widespread damage in tropical and subtropical regions. In our experiment, the survival rates of the insect at all stages decreased gradually after short-term temperature treatments. It is worth noting that our results were not totally the same with the previous ones. For instance, as the temperature decreased or increased, the survival rates of female adults were at least 72%, which was higher than those of other developmental stages. This is similar to the finding that adults are more resilient [12,47], but the larvae stages are more tolerant due to the lower supercooling capacity and poor mobility [48][49][50]. These differences observed are common when working on the temperature stress response of insects from different instar, insect orders, or the experimental setup and insect treatment. For example, old instar larvae of the species were used by Manrakhan, while ours were newly produced within 24 h [48]. Several studies have reported a decreased survival rate of insects at various developmental stages under short-term temperature stress, including those of B. carambolae [51], B. cucurbitae [47], and Ceratitis capitata [13]. At 31 • C, no significant correlation was noted between the survival rates of all developmental stages and the control group. This may be because the optimal temperature for the growth and development of B. dorsalis is between 20 • C and 30 • C [39,48]. Although a suboptimal temperature of 31 • C exerted some detrimental effects on the insects, it did not act as a stressor. Furthermore, our results are consistent with the findings of Huang who reported that short-term temperature stress significantly prolongs the developmental periods of flies [47]. The chill coma temperature threshold is 5 • C for B. dorsalis [52], which may be exceeded, resulting in the cessation or slowing of insect development. In the present study, adult insects were subjected to a short-term temperature treatment, which exerted a significant effect on their preoviposition period, fecundity, longevity, and sex ratio after pupal emergence. However, no significant correlation was identified in female ratio between the insect groups subjected to high-and low-temperature treatments. The overall male-to-female ratio showed a tendency to be nearly 1:1, and this ratio is largely determined by genes. The preoviposition period of B. dorsalis decreases with increasing temperature [3]. Our results support their conclusion-when the temperature exceeded the optimal temperature, preoviposition was delayed. In addition, decreasing temperature reduces longevity and fecundity, but a higher fecundity is noted in insects subjected to short-term treatments at moderately low temperatures [53]. 
The present study obtained similar findings, in that fecundity per female increased after a 12 h exposure to 10 °C followed by a return to the constant temperature of 26 °C. This difference in reproduction may reflect the fact that 10.21 °C is the developmental threshold temperature of this species [3], so that treatment at 10 °C may promote acclimation of the adults under laboratory conditions rather than acting as a severe stress. As evident from our results, the fecundity per female adult of B. dorsalis was reduced after the various temperature treatments. We also found that the response of this fly to a single temperature stress event differed markedly from its response to constant temperature stress, although both forms of high-temperature exposure reduced fecundity. For instance, adults in our experiment still laid eggs after a single 12 h event of 36 °C stress, whereas no egg laying was observed at a constant temperature of 35 °C in the study conducted by Michel [39].

Insects frequently encounter unfavorable temperature environments and devote a large share of their energy to metabolism, reproduction, and other life activities, which results in decreased adult longevity [54]. We observed that male adults had a shorter life span at 0 °C than female adults, indicating that females are more resistant than males; the shortest average life spans were 56.06 d for males and 61.31 d for females. These values differ from reported adult longevity in the wild, where a generation may survive for 26–61 d (average: 39.2 d) [55]; this discrepancy may be due to the more complex natural environment. Insects use environmental cues such as temperature changes to initiate adaptive mechanisms. Although temperature fluctuations within a certain range can be beneficial, unfavorable temperatures exert significant effects on physiological and biochemical indicators of resistance and on enzymatic responses in most insects [26,56]. The processes underlying temperature tolerance include lowering of the supercooling point and the use of carbohydrate cryoprotectants [14,57]. Our findings show that short-term high- and low-temperature treatments of B. dorsalis adults are strongly associated with changes in glucose, glycogen, and trehalose levels, suggesting that the resistance of B. dorsalis to unfavorable temperatures is a complex process involving the coordinated action of multiple stress-related substances. Our measurements of these carbohydrates revealed that the largest increases in glucose and trehalose occurred at 5 °C and the smallest at 0 °C. Furthermore, because glycogen is degraded into monosaccharides and sorbitol, a decrease in glycogen level may produce an increase in glucose and trehalose levels. The production of these carbohydrates is most likely a response to the cue of extreme temperatures, as they act as protective agents against stress [17]. In general, the trehalose level increased initially and then decreased in response to the various temperature treatments. One possible explanation is that the trehalose that accumulates in adult B. dorsalis is subsequently broken down into glucose, which acts as a protective substance and plays a crucial role when this pest is exposed to temperature stress [30,58].
This may explain why B. dorsalis has a lower survival rate at extreme temperatures: energy is diverted from normal life activities to resisting the temperature stress, which impairs insect fitness and even survival. This is consistent with previous findings that trehalose is a major energy-storage component and that its pattern of utilization varies among insects subjected to different temperature treatments [14,21,32]. Moreover, Gampsocleis gratiosa subjected to various temperature treatments for 24 h and 48 h exhibited changes in its trehalose level [58], indicating that the duration of exposure also affects the trehalose level. Changes in the activity of insects' protective enzymes are considered an important indicator of their stress tolerance [29,59]. The accumulation of trehalose has been associated with the induction of TPS in certain insects under extreme environmental stress [20,22,28]. As demonstrated by our results, the short-term high- and low-temperature treatments produced significant changes in TPS activity, with enhanced activity at the low temperatures and at 31 °C. Previous studies have likewise established a relationship between TPS and temperature stress [32–35]. The present study demonstrated that exposure to the various temperature treatments for 12 h significantly affected the levels of glucose, glycogen, and trehalose, as well as TPS activity, in B. dorsalis; whether the treatment duration exceeded the insects' maximum capacity for resistance adaptation remains to be determined.

In conclusion, short-term treatments at extreme temperatures not only affected the survival of eggs and larvae most strongly, but also exerted detrimental effects on the development, adult longevity, and fecundity of B. dorsalis. We confirmed that temperature stress is a major driver of changes in trehalose metabolism, and that these low-molecular-weight carbohydrates and TPS activity may contribute to temperature tolerance. These findings are important for predicting the effects of climate change on this pest's population growth and its resistance to heat and cold stress. In this study, however, adults had access to food and water before and after treatment, whereas resources in the natural environment may be more limiting and subject to many interacting variables; this constrains the generality of our experimental results. Similar research conducted under natural conditions would therefore help to clarify the population dynamics of the species. In addition, to control B. dorsalis infestations more effectively, further studies on this fruit fly's trehalose metabolism in response to temperature stress are warranted.
7,394.8
2022-08-01T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
Microemulsion Delivery System Improves Cellular Uptake of Genipin and Its Protective Effect against Aβ1-42-Induced PC12 Cell Cytotoxicity Genipin has attracted much attention for its hepatoprotective, anti-inflammatory, and neuroprotection activities. However, poor water solubility and active chemical properties limit its application in food and pharmaceutical industries. This article aimed to develop a lipid-based microemulsion delivery system to improve the stability and bioavailability of genipin. The excipients for a genipin microemulsion (GME) preparation were screened and a pseudo-ternary phase diagram was established. The droplet size (DS), zeta potential (ZP), polydispersity index (PDI), physical and simulated gastrointestinal digestion stability, and in vitro drug release properties were characterized. Finally, the effect of the microemulsion on its cellular uptake by Caco-2 cells and the protective effect on PC12 cells were investigated. The prepared GME had a transparent appearance with a DS of 16.17 ± 0.27 nm, ZP of −8.11 ± 0.77 mV, and PDI of 0.183 ± 0.013. It exhibited good temperature, pH, ionic strength, and simulated gastrointestinal digestion stability. The in vitro release and cellular uptake data showed that the GME had a lower release rate and better bioavailability compared with that of free genipin. Interestingly, the GME showed a significantly better protective effect against amyloid-β (Aβ1-42)-induced PC12 cell cytotoxicity than that of the unencapsulated genipin. These findings suggest that the lipid-based microemulsion delivery system could serve as a promising approach to improve the application of genipin. Introduction Gardenia jasminoides J. Ellis (also named Zhi-Zi in China), an evergreen shrub that belongs to the Rubiaceae family, is widely distributed in China and Eastern Asia. Its ripe fruit has been used as yellow natural colorants and in Chinese traditional medicine for thousands of years [1,2]. As an important Chinese traditional medicine, G. jasminoides' fruit has been used to treat many different diseases due to its hepatoprotective, cholagogic, sedative, anti-hypertension, hemostasis, and detumescence properties [2]. Geniposide, an iridoid glucoside, is one of the major bioactive compounds in this medicinal plant and its content is used as the quality control marker of crude G. jasminoides fruit in Chinese Pharmacopeia [1]. Recently, multiple pharmacological activities of geniposide and its aglycone genipin have been reported, including its hepatoprotective effect [3], hypoglycemic [4] and antiinflammatory activity [5], protection of cerebral ischemic injury [6], neuroprotection [7], and anti-depressant effects [8]. Although geniposide has been widely considered as the main active component, some studies suggested that genipin was a more active ingredient [9][10][11]. The pharmacokinetic studies showed that geniposide is converted into its aglycone genipin fetal calf serum, and phosphate buffered solution (PBS) were obtained from Gibco (Carlsbab, CA, USA). All other chemicals were analytical reagent grade. Screening the Oils, Surfactants and Co-Surfactants on the Solubility of Genipin The solubility of genipin in different oils, surfactants, or co-surfactants was investigated using the method described by Pangeni et al. [27] with some modifications. 
Briefly, excess genipin was mixed with 1 g of oils (soybean oil, olive oil, IPM, MCT, and EO), surfactants (Tween 20, Tween 80, Labrasol, EL-35, CO-40, and Span 80), and co-surfactants (ethanol, ethylene glycol, glycerol, and PEG 400). The mixtures were vortexed and then held at 25 ± 1.0 • C in an isothermal shaker for 24 h to allow attainment of equilibrium. After being centrifuged (10,000 rpm, 15 min), the supernatants were filtered through a 0.45 µm membrane. The concentration of genipin was analyzed using the HPLC method as we previously described [31], with some modifications. HPLC was carried out on an Essentia LC-16 liquid chromatography system using an Ultimate XB-C18 column (250 × 4.6 mm, 10 µm, Welch Materials, Inc., Shanghai, China). After injecting 10 µL of sample, genipin was eluted isocratically with a mobile phase containing 0.15% phosphoric acid, 60% methanol, and 40% water (v/v) at a constant flow rate of 1 mL/min and detected at 238 nm. The content of genipin was calculated according to the calibration curve (peak area concentration). Construction of Pseudo-Ternary Phase Diagrams and Formulation of Microemulsions To investigate the effect of each component and the concentration on the formation of the microemulsion, the pseudo-ternary phase diagrams were constructed using the water titration method [32] at ambient temperature (25 ± 1 • C). The surfactants were blended with the co-surfactant with the ratios of 1:1, 2:1, 3:1, and 4:1 (w/w) to form the surfactant/co-surfactant mixtures (S mix ). In each ternary phase diagram, the ratios of oil phase and surfactant/co-surfactant mixture used were 1:9, 2:8, 3:7, 4:6, 5:5, 6: 4, 7:3, 8:2, and 9:1. The total amount of oil phase and surfactant/co-surfactant mixture was 10 g. The resultant solutions were mixed by a magnetic stirrer (450 rpm) and distilled water was added in a dropwise manner until the mixtures turned transparent. The mass fraction of each component was calculated at the critical point. The oil phase, water phase, and mixed surfactant were taken as the three vertices, and a pseudo-ternary phase diagram was drawn using Origin software (version 2018) (OriginLab, Northampton, MA, USA) [27], and the area of the microemulsion area was used as the inspection index to investigate the influence of each component and ratio on the formation of the microemulsion, and determined the composition. Entrapment Efficiency (EE) and Drug Loading Efficiency (DL) The encapsulation efficiency (EE) (%) and drug loading capacity (DL) (%) of the GME were studied by centrifugation [33]. The GMEs were transferred to an ultrafiltration centrifuge tube (MWCO 10 kDa, Millipore, Burlington, MA, USA) and centrifuged at 10,000 rpm for 15 min. The centrifuge (1 mL) was taken from the outer tube, diluted with 60% methanol aqueous solution, passed through a 0.45 µm organic filter membrane, and the free genipin measured according to the chromatographic conditions of Section 2.2. Another GME (1 mL) without centrifugation was taken, diluted with 60% methanol aqueous solution, and the total genipin content was measured according to the chromatographic conditions in Section 2.2. The EE (%) and DL (%) were calculated using the following equations, respectively: where W total drug is the total genipin content in GME, W free drug is the free genipin content after centrifugation, and W Total amount of GME is the total amount of GME, including carrier and genipin (2 mL). 2.4.2. 
Particle Size, Polydispersity Index (PDI), Zeta Potential, and TEM Analysis The microemulsions of genipin were analyzed in terms of mean particle size, particle size distribution (polydispersity index), and surface charge (zeta potential) by a Zetasizer (Malvern Panalytical Technologies, Malvern, UK). To avoid multiple scattering effects, the samples were diluted with 9 volume of deionized water (DI). All measurements were carried out in triplicate at a temperature of 25 ± 1 • C. The morphology of the GME was determined by TEM. A drop of the GME was placed on a holey carbon 400 mesh copper grid. After negative staining with 2% phosphotungstic acid solution, the copper grids were dried overnight and the morphology of the GME was observed by TEM (FEI Tecnai F20, Hillsboro, OR, USA) at an operating voltage of 80 kV. Differential Scanning Calorimeter (DSC) Analysis The DSC experiments were carried out using a DSC25 (TA Instruments, New Castle, DE, USA), as per the method described by Hart et al. [34], with modifications. The empty microemulsion and GME (3 mg) were added to the aluminum pans and sealed immediately. The sample was rapidly cooled to −20 • C by liquid nitrogen, and then heated to 80 • C at 10 • C/min. A blank aluminum pot was used as a reference. Evaluation the Effect of Ionic Strength, Temperature and pH on GME Stability To evaluate the effects of ionic strength on GME stability, genipin microemulsions were prepared using the method described by Chen et al. [35]. A total of 1 mL GME was mixed with 9 mL NaCl solutions to obtain the mixtures with different final NaCl concentrations (0, 100, 200, 300, 400, and 500 mM). The effect of temperature on GME stability was examined using the method described by Shi et al. [36], with a slight modification. The GMEs were placed in a water bath at different temperatures (20,30,40,50,60, and 70 • C) for 2 h. The pH stability of the GMEs was evaluated using the method previously described by Mohammed et al. [37]. The pH of the GME dispersions was adjusted with HCl or NaOH (0.1 M) to final values of 2.0, 4.0, 6.0, 8.0, 10.0, and 12.0. Ultrapure water was then added to the GME dispersions to obtain a tenfold dilution. After treatment, all samples were stored at room temperature for 24 h and the particle size, PDI, and zeta potential were measured by a Zetasizer (Malvern Panalytical Technologies, Malvern, UK), as described in Section 2.4.2. Characteristics of GME during Simulated Gastrointestinal Digestion The GME were sequentially digested in the simulated gastro fluid (SGF) and simulated intestinal fluid (SIF) to explore the fate of genipin encapsulated in microemulsions in gastrointestinal digestion, according to a protocol described previously [38]. All experiments were carried out at 37 • C in a shaker at 120 rpm and all solutions were preheated to this temperature prior to use. For the gastric phase, simulated gastric fluid (SGF) was firstly prepared by dissolving 2 g NaCl in 1 L ultrapure water and adjusting the pH value to 2.0 ± 0.1 with 5 M HCl. Porcine pepsin was dispersed in the SGF to a final concentration of 3.2 mg/mL. The GME (2 mL) was mixed with 20 mL SGF and the mixture was incubated in the incubator shaker for 1 h at 37 • C to mimic stomach digestion. For intestinal digestion, 20 mL of the stomach phase sample was withdrawn and the pH was adjusted to 7.0 with 1.0 M NaOH. 
Then, the resultant samples were mixed with simulated intestinal fluid consisting of 120 mM NaCl, 10 mM CaCl2, 20 mg/mL bile salts mixture, and 2 mg pancreatin, and the pH of the mixtures was readjusted to 7.0 with 0.1 M NaOH. The digestion mixtures were incubated in the incubator shaker at 37 °C for 2 h. At the end of each digestion stage, a sample was withdrawn and filtered through a 0.45 µm organic filter membrane, and the particle size, PDI, and zeta potential of the digested GMEs were then measured as described in Section 2.4.2.

In Vitro Release Studies

The in vitro release behaviors of genipin from the GMEs and from an unencapsulated genipin solution were studied using the dialysis bag method, as described by Subongkot et al. [39]. Briefly, 2 mL of each sample was transferred to a dialysis bag (molecular weight cut-off 8000 Da) and dialyzed against 100 mL of aqueous hydrochloric acid (pH 1.2), ultrapure water, or PBS (pH 7.4) [40–42] in a beaker. To facilitate the diffusion of genipin, Tween 80 was added to all dialysis media to a final concentration of 0.5%. The test was maintained at 37 °C under stirring at 450 rpm. At predetermined time intervals (0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, and 24 h), 1 mL of sample was taken from the dialysis medium and an equal volume of fresh medium was added to maintain a constant volume. All samples were then diluted with 60% methanol, filtered, and analyzed by the HPLC method. Each measurement was performed in triplicate. The cumulative release content of genipin (Qn) was calculated as per the following equation, where Cn is the drug mass concentration measured at point n and W is the total amount of drug administered. In order to study the release kinetics of genipin, the zero-order, first-order, Higuchi, and Weibull models were employed to fit the release profiles, where Q represents the cumulative fraction of genipin released at time t: (1) zero-order model, Q = a + K0·t, where K0 is the zero-order release rate constant; (2) first-order model, Q = a·(1 − exp(−K1·t)), where K1 is the first-order release rate constant; (3) Higuchi model, Q = KH·t^(1/2) + a, where KH represents the Higuchi release rate constant; and (4) Weibull model, where KW represents the Weibull release rate constant.

Cell Culture and Cell Cytotoxic Studies

A Caco-2 human colorectal adenocarcinoma cell line and a PC12 rat adrenal pheochromocytoma cell line were purchased from the Cell Bank of the Chinese Academy of Sciences (Kunming, China). Caco-2 and PC12 cells were cultured in DMEM high-glucose medium supplemented with 10% FBS, 100 U/mL penicillin, 100 µg/mL streptomycin, and 1% L-glutamine at 37 °C in an atmosphere containing 5% CO2. The cytotoxicity of the empty microemulsion, unencapsulated genipin, and GME was examined using an MTT cell viability assay before the cellular uptake experiments. Caco-2 and PC12 cells (5000 cells per well) were incubated in 96-well plates for 24 h to allow attachment. The cells were then cultured in 100 µL fresh medium containing free genipin or GMEs at various genipin concentrations (1.25-100 µg/mL). For the empty microemulsion group, the cells were cultured in 100 µL of fresh medium containing empty microemulsion at various concentrations (50-6000 µg/mL). After 24 h of incubation, 10 µL MTT solution (5 mg/mL in PBS) was added to each well and incubated for a further 4 h at 37 °C. The MTT solution was then removed and the insoluble purple formazan crystals that had formed were dissolved in 150 µL DMSO.
The absorbance was measured at 490 nm using a microplate reader (SpectraMax, Molecular Devices, CA, USA). The cell viability was expressed as the percent of living cells compared with the control wells. Cellular Uptake Studies Confocal laser scanning microscopy (CLSM) and HPLC were conducted to evaluate the influence of encapsulated genipin by microemulsion on the cellular uptake of genipin by Caco-2 cells. To prepare the coumarin 6-labeled GME, both coumarin 6 and genipin were dissolved in EO, and then the oil phase was mixed with surfactant/co-surfactant mixtures and the GME was formed as per the method described in Section 2.3. For the analysis of the cellular uptake of genipin by CLSM, Caco-2 cells (2 × 10 5 cells/well) were incubated in 12-well plates for 24 h. Subsequently, the culture media were supplemented with free coumarin 6 (C6 group), coumarin 6, and GME mixture (mix group), and coumarin 6-labeled GME (GME group), respectively. After incubation at 37 • C for 0.5, 1, 2, and 4 h, the cells were gently washed three times using PBS and fixed with 4% paraformaldehyde (w/v in PBS pH 7.2) for 15 min. The cell nuclei were stained with DAPI. The cellular uptake was examined using CLSM 510 Meta (Olympus, FV3000, Tokyo, Japan) with an oil immersion objective (40×) [43]. For the quantitative analysis of the cellular uptake of genipin, Caco-2 cells (2 × 10 5 cells/well) were incubated in 12-well plates for 24 h. The culture media were then replaced by fresh medium supplemented with free genipin or GMEs to the final concentration of genipin 10 µg/mL. The cells were collected at 0.5, 1, 2, and 4 h and gently washed three times using PBS. The harvested cells were lysed with cell lysate for Western and IP (Beyotime Biotechnology, Shanghai, China) on ice for 15 min. After being centrifuged at 12,000 rpm for 15 min, the supernatants of cell lysate were filtered and genipin were measured by HLPC. The total protein content of the cell lysate was determined using a BCA protein assay kit. The cellular uptake of genipin was calculated and expressed as the amount of genipin (µg) per mg cell protein (µg/mg protein) [44,45]. Protective Effect on Aβ1-42-Induced PC12 Cell Cytotoxicity Beta amyloid (Aβ1-42) was dissolved in DMSO to obtain a 2 mM stock solution. PC12 cells were cultured in 96-well plates at a density of 5000 cells per well. After incubation for 24 h, the cells were injured by various Aβ1-42 concentrations (2.5, 5, 10, 20, and 40 µM) for 24 h and the cell viabilities were determined using the MTT method. The proper concentration of Aβ1-42 to induce PC12 cell cytotoxicity was chosen according to the cell viability. To determine the influence of encapsulated genipin by microemulsion on its neuroprotection effects, the protective effects of free genipin and GMEs on Aβ1-42-induced PC12 cell cytotoxicity were compared. The PC12 cells were cultured in 96-well plates at a density of 5000 cells per well for 24 h. The cultured medium was replaced with 100 µL fresh medium containing free genipin or GMEs with various genipin concentrations (1.25, 2.5, 5.0, and 10 µg/mL) for 2 h. The cells were further subjected to treatment with 20 µM Aβ1-42 for 24 h and the cell viability was examined by MTT assay. The cell viability of cells without the treatment with 20 µM Aβ1-42 was defined as 100%. Statistical Analysis The experimental results were analyzed by one-way ANOVA with post hoc Tukey's HSD test or Student's t-test (SPSS version 24.0, IBM, Armonk, USA). 
All experiments were carried out in triplicate (n = 3), with data expressed as mean ± standard deviation (SD). The significance levels were determined using p values as indicated in the legends.

Solubility of Genipin in Oils, Surfactants, and Co-Surfactants

Excipients play a major role in microemulsion formation; it has been shown that excipients with the best solubilizing capability for the drug ensure maximum drug loading and stability of the final formulation [46]. To select the excipients, the solubility of genipin in different oils, surfactants, and co-surfactants was investigated. As presented in Table 1, genipin was more soluble in MCT, IPM, and EO than in soybean oil and olive oil. MCT, IPM, and EO were therefore chosen as the oil phases for the construction of the pseudo-ternary phase diagrams. To select the surfactant phases, the equilibrium solubility of genipin in Tween 20, Tween 80, Labrasol, EL-35, CO-40, Span 80, and Tween 80: CO-40 (1:1, w/w) was compared (Table 1). The dissolved amount of genipin in Tween 80, CO-40, Labrasol, and Tween 80: CO-40 (1:1, w/w) was higher than in the other surfactants; these were therefore selected as the surfactant phases. As for the co-surfactant, the solubility of genipin in ethanol was higher than in ethylene glycol, glycerol, and PEG 400 (Table 1). During microemulsion formation, a co-surfactant of lower molecular weight can more easily open the tight packing of the surfactant and bring the interfacial film closer together [25,32,35]. In addition, ethanol is a co-surfactant suitable for oral preparations, and it was therefore selected as the co-surfactant.

Construction of Pseudo-Ternary Phase Diagrams and Formulation of Microemulsions

Pseudo-ternary phase diagrams were constructed to determine the excipient ratios for genipin microemulsion formation. Using Tween 80/ethanol as the surfactant/co-surfactant mixture (surfactant/co-surfactant ratio 3:1), the influence of MCT, IPM, and EO on genipin microemulsion formation was compared. As shown in Figure 1A, the microemulsion area formed by EO was the largest, indicating that EO is the most suitable oil phase for preparing the genipin microemulsion. It should be noted that, as one of the harmless oil phases designated by the US Food and Drug Administration, EO is well-tolerated in terms of digestion and has been widely used in topical medicines [47,48]. The effects of Tween 80, CO-40, Labrasol, and Tween 80: CO-40 (1:1, w/w) on genipin microemulsion formation were then compared using EO as the oil phase. As shown in Figure 1B, the pseudo-ternary phase diagram area of the Tween 80/CO-40 composite surfactant was larger than that of the single surfactants, demonstrating a synergistic effect between Tween 80 and CO-40.
The composite surfactant was proposed to improve the emulsification efficiency and stability of the microemulsion by reducing the molecular steric hindrance of the surfactant and increasing the flexibility of the oil-water interface [49], which may explain the better emulsification efficiency of Tween 80: CO-40 composite surfactant. The surfactant and co-surfactant contribute to the reduction in the interfacial tension between water and oil and the proper surfactant/co-surfactant ratio is important for the stability of the microemulsion. To evaluate the influence of the Tween 80:CO-40/ethanol ratio on microemulsion formation, S mix with surfactant/co-surfactant ratios of 1:1, 2:1, 3:1, and 4:1 was prepared and the microemulsion areas were measured. As shown in Figure 1C, the area of the microemulsion gradually increased along with the surfactant/co-surfactant ratio from 1:1 to 3:1. However, the microemulsion area decreased when the surfactant/cosurfactant ratio reached 4:1. Excessive ethanol and surfactants would reduce the strength and stability of the interface film by the attractive force between the surfactant head groups, which are not conducive to the formation of microemulsion [35,50]. This may be the reason for the surfactant/co-surfactant ratio of 3:1 being the best Tween 80: CO-40/ethanol ratio for microemulsion formation. While excessive surfactant may cause safety problems and increase the viscosity of the sample, the surfactant/co-surfactant ratio of 2:1 was selected here for microemulsion formation. Based on the above pseudo-ternary phase diagram results, the GMEs were eventually developed using EO as the oil phase, Tween 80: CO-40 (surfactant)/ethanol (co-surfactant) (surfactant/co-surfactant ratio 2:1) as S mix , with oil: S mix ratio of 9:1. Characterization of Genipin-Containing Microemulsions (GME) The encapsulation efficiency (EE) (%) and drug loading capacity (DL) (%) are the crucial parameters to evaluate the performance of microemulsion formulation. The EE and DL of the GMEs were 64.11 ± 0.58% and 3.21 ± 0.03%, respectively. The EE (%) of the GMEs was higher than that of the geniposide liposomes (44.87%) prepared with lecithin/cholesterol, but its DL (%) was lower than that of the geniposide liposomes (5.16%) [51]. Microemulsions developed by EO, Tween 80: CO-40, and ethanol can effectively embed genipin. The particle size and PDI are critical to drug release and oral absorption. The particle size and PDI of the GME were measured by a laser particle size analyzer. As shown in Figure 2A, the mean particle size was 16.17 ± 0.27 nm and the PDI was 0.183 ± 0.013. The results indicated that the GME had small droplets with homogeneous dispersibility. Its zeta potential was −8.11 ± 0.77 mV. Particles with higher absolute value of charge (negative or positive) repulse each other, which contributes to the microemulsion stability in solution [52,53]. The GME had negative zeta potentials, indicating its stability in low ionic-strength aqueous solutions. The morphology of the GME was observed by TEM. As shown in Figure 2B, the surface of the GME was smooth, quasi-spherical, and rounded in appearance. The GME was uniformly dispersed and had a narrow size distribution with an average diameter of 16.48 nm, which was consistent with the results obtained by the laser particle size analyzer. 
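The EE and DL values quoted above follow directly from the quantities defined in the methods (total genipin in the GME, free genipin in the ultrafiltrate, and the total mass of the 2 mL formulation). The sketch below illustrates that arithmetic with the standard formulas implied by those symbol definitions; the numeric inputs are placeholders rather than measured values.

```python
# Minimal sketch of the EE (%) and DL (%) calculation defined in the methods.
# The formulas are the standard ones implied by the symbol definitions
# (W_total_drug, W_free_drug, W_total_GME); the numbers are placeholders only.

def encapsulation_efficiency(w_total_drug: float, w_free_drug: float) -> float:
    """EE (%) = (total genipin - free genipin after ultrafiltration) / total genipin * 100."""
    return (w_total_drug - w_free_drug) / w_total_drug * 100.0

def drug_loading(w_total_drug: float, w_free_drug: float, w_total_gme: float) -> float:
    """DL (%) = encapsulated genipin / total mass of the GME (carrier + genipin) * 100."""
    return (w_total_drug - w_free_drug) / w_total_gme * 100.0

if __name__ == "__main__":
    # Placeholder masses in mg, chosen only to illustrate the arithmetic.
    w_total, w_free, w_gme = 100.0, 35.9, 2000.0
    print(f"EE = {encapsulation_efficiency(w_total, w_free):.2f} %")
    print(f"DL = {drug_loading(w_total, w_free, w_gme):.2f} %")
```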
DSC experiments were conducted to investigate the physical state of the empty microemulsion and the GME. As shown in Figure 2C, neither the empty microemulsion nor the GME showed an absorption peak over the temperature range of −20 °C to 80 °C, indicating that the microemulsions were in an amorphous state in the formulation. A similar observation was reported by Hart et al. [34]. Here, ethanol was used as the co-surfactant for microemulsion formation. Owing to their relative polarity, co-surfactants are usually distributed between the oil and the surfactant tails in the microemulsion; this can reduce the interfacial tension and break up liquid crystalline structures [54,55].

Effect of Environmental Stresses on GME Stability

Microemulsion-based delivery systems may experience a variety of environmental stresses (e.g., pH, ionic strength, or temperature changes) during storage, transportation, and utilization. Therefore, the influence of pH, ionic strength, and temperature on the physical stability of the GMEs was examined and the results are shown in Table 2. The GMEs were placed in a water bath at different temperatures (20, 30, 40, 50, 60, and 70 °C) for 2 h and the particle size, PDI, and zeta potential were measured. Temperature did not have a significant effect on the particle size, PDI, or zeta potential of the GME (p > 0.05) over the range of 20-70 °C. In fact, the surfactants are expected to be stable at high temperature. Moreover, heat treatment can enhance the interaction between Tween 80 and CO-40 and form a strong protective barrier [36]. Tween 80 and CO-40 may form a stable mixture with water and ethanol in the GMEs, which maintains a close fit at high temperatures and keeps the entire system in a balanced state. In contrast, the particle size and zeta potential of the GME varied with increasing pH (p < 0.01) (Table 2). The mean particle size increased from 17.62 ± 0.07 nm at pH 2.0 to 28.43 ± 0.41 nm, and the zeta potential decreased from −1.50 ± 1.33 mV at pH 2.0 to −12.40 ± 0.72 mV. This behavior is consistent with the results reported for stearic acid-based lipid nanoparticles by Ife et al. [56]. Because Tween 80 is a non-ionic surfactant, its surface charge is not affected by the H+ concentration, which helps keep the system stable under acidic conditions. However, both the surfactant (Tween 80) and the oil phase (EO) of the microemulsion are esters, which undergo hydrolysis under alkaline conditions. As hydrolysis progressed, the surface charge of the microemulsion changed and the distance between the surfactant molecules increased, which reduced the repulsive force between the microemulsion particles. Therefore, as the pH increased, the microemulsion droplets aggregated and the particle size increased [37,41,56]. The PDI of the GMEs did not vary significantly within the pH range of 2-12, indicating good pH stability. To investigate the effect of ionic strength on stability, the GMEs were diluted with NaCl solutions to different final concentrations (0, 100, 200, 300, 400, and 500 mM) and the particle size, PDI, and zeta potential were measured. As illustrated in Table 2, the average particle size and zeta potential of the GMEs increased in a NaCl concentration-dependent manner.
The average particle size of the GME under 500 mM NaCl was 41.16 ± 1.21 nm, which was more than twice the size of the GME without NaCl. The zeta potential of the GMEs increased from −7.25 ± 0.69 mV to −1.66 ± 1.78 mV with the increase of NaCl concentration from 0 to 500 mM. The influence of ionic strength on the PDI was relatively weak. The PDI increased significantly until concentrations of NaCl up to 500 mM. This may be attributed to the fact that, with the increase of NaCl concentration, the ion gradient between the inner and extra microemulsion membranes is increased, and the surface charge is reduced by the electrostatic shielding effect, which compromises the solubility of the surfactants and causes the microemulsion particles to coagulate [36,57,58]. In addition, the absorbency of electrolytes on the surface of the microemulsion can affect the hydration of the surfactant head group and cause a low surface tension, thereby exerting influence on the stability of the microemulsion [59]. In summary, although the zeta potential of the GMEs varied with pH and ionic strength, the PDI remained stable under most of the test conditions. The results indicated that the GMEs had good stability under different pH, temperature, and ionic strength environments. Stability of GME in Simulated Gastrointestinal Digestion In order to examine whether microemulsion could improve the stability of genipin, the GMEs were digested with simulated gastrointestinal digestive juice and its particle size, PDI, and zeta potential were measured. The particle size of the GME increased from 17.01 ± 0.53 nm untreated, to 32.44 ± 3.07 nm under digestion with simulated gastric juice, and finally to 62.93 ± 4.56 nm under intestinal digestion ( Figure 3A). Accordingly, the simulated gastrointestinal digestion also exerted a significant impact on the PDI of the GME, which increased from 0.143 ± 0.018 to 0.577 ± 0.034 under digestion ( Figure 3B). In contrast, the zeta potential of the GME increased from −9.13± 1.22 mV to 1.49 ± 0.68 mV after simulated digestion in gastric juice, while the zeta potential was reversed to −2.38 ± 1.14 mV under simulated intestinal conditions ( Figure 3C). The results showed that the microemulsion system could significantly improve the stability of genipin under simulated gastrointestinal digestion. Genipin can directly react with primary ammonia compounds [13]. The encapsulation of genipin in microemulsion prevents the reaction with amino acids or proteins, which might enhance its bioavailability [60]. EO was reported to resist lipolysis in simulated gastrointestinal conditions [61], and the non-ionic surfactant Tween 80 remained stable in gastric conditions [62], which may contribute to the stability of the GME in simulated gastrointestinal digestion. The results showed that the microemulsion system could significantly improve the stability of genipin under simulated gastrointestinal digestion. Genipin can directly react with primary ammonia compounds [13]. The encapsulation of genipin in microemulsion prevents the reaction with amino acids or proteins, which might enhance its bioavailability [60]. EO was reported to resist lipolysis in simulated gastrointestinal conditions [61], and the non-ionic surfactant Tween 80 remained stable in gastric conditions [62], which may contribute to the stability of the GME in simulated gastrointestinal digestion. 
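The cumulative-release calculation defined in the in vitro release methods, and the fitting of the zero-order, first-order, Higuchi, and Weibull models reported in the next section, can be sketched as follows. This is an illustrative Python sketch only: the Weibull parameterization is an assumption (the source gives only its rate constant KW), the sampling-volume correction is the standard one implied by replacing 1 mL of the 100 mL medium at each time point, and the data are synthetic placeholders.

```python
# Minimal sketch (not the authors' script) of the calculations described for the
# in vitro release study: cumulative release with the sampling-volume correction,
# followed by fitting the zero-order, first-order, Higuchi, and Weibull models.
# Assumptions are flagged in comments: the Weibull form is one common
# parameterization (the source defines only K_W), and the correction assumes
# 1 mL withdrawn from 100 mL of medium and replaced at each time point.
import numpy as np
from scipy.optimize import curve_fit

def cumulative_release(conc, w_total, v_medium=100.0, v_sample=1.0):
    """Percent cumulative release Q_n from concentrations (mg/mL) measured at each time point."""
    conc = np.asarray(conc, dtype=float)
    released = conc * v_medium + v_sample * np.concatenate(([0.0], np.cumsum(conc)[:-1]))
    return released / w_total * 100.0

# Candidate models; Q is percent released, t is time in hours.
zero_order  = lambda t, a, k0: a + k0 * t
first_order = lambda t, a, k1: a * (1.0 - np.exp(-k1 * t))
higuchi     = lambda t, a, kh: a + kh * np.sqrt(t)
weibull     = lambda t, a, kw, b: a * (1.0 - np.exp(-(t ** b) / kw))   # assumed form

t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, 24])   # sampling times (h)
q = 80.0 * (1.0 - np.exp(-0.6 * t))                     # synthetic placeholder profile

models = [("zero-order", zero_order, [0.0, 5.0]),
          ("first-order", first_order, [80.0, 0.5]),
          ("Higuchi", higuchi, [0.0, 20.0]),
          ("Weibull", weibull, [80.0, 1.5, 1.0])]
for name, model, p0 in models:
    params, _ = curve_fit(model, t, q, p0=p0, maxfev=10000)
    resid = q - model(t, *params)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((q - q.mean()) ** 2)
    print(f"{name}: parameters = {np.round(params, 3)}, R^2 = {r2:.3f}")
```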
In Vitro Release Kinetics of GME To investigate the in vitro release behaviors of genipin from GME, the GME and unencapsulated genipin were dialyzed with pH 1.2 hydrochloric acid aqueous solution, ultrapure water, and PBS (pH 7.4), respectively, and the release of genipin was measured by HPLC. Microemulsion demonstrated a significant effect on delaying the release of genipin. As shown in Figure 4A-C, the cumulative released fraction of unencapsulated genipin reached the equilibrium state at approximately 2 h, while the genipin cumulative released equilibrium state of the GME in hydrochloric acid solution, ultrapure water, and PBS was 8 h, 6 h, and 6 h, respectively. The release rate of the GME in hydrochloric acid solution ( Figure 3A) showed a slow-release behavior compared with that in ultrapure water ( Figure 4B) and PBS ( Figure 4C). Non-ionic surfactants, like Tween 80 and CO-40, were barely affected by pH, which may render it stable under acidic conditions. On the other hand, the salt ions compete for water molecules in the solution and reduce the solubilization ability of the surfactant, which may alter the microemulsion structure [59]. This may also contribute to a delay of genipin release in PBS. The kinetics models can be used to reveal the release mechanism of the drug from the microemulsion [63]. To gain an insight into the release mechanism of GME in different media, the zero-order, first-order, Higuchi, and Weibull models were employed to fit the GME release profiles ( Figure 4D-G). The equations and correlation coefficient (R 2 ) of the different models are shown in Table 3. The model with the highest correlation coefficient (R 2 ) was generally considered the most suitable [64]. As can be seen in Table 3, except free genipin dialyzed with PBS, the release profiles of the other samples fitted well with the Weibull model, with the R 2 values above 0.97942. In contrast, the first-order model fitted well with all of the in vitro release profiles of genipin, with the R 2 values above 0.92898, indicating that the first-order model accurately describes the release mechanism as the drug release from liposomes [65]. According to the first order model, the value of the release rate constant K 1 represented the release rate [66]. The K 1 of the GME for dialysis with hydrochloric acid aqueous solution, ultrapure water, and PBS (pH 7.4) was 0.41, 0.70, and 0.75, respectively, which were significantly lower than that of free genipin, suggesting that encapsulation could dramatically reduce its release rate. Table 3. The equations and correlation coefficients (R 2 ) of different release models. In Vitro Cellular Uptake of GME Study To evaluate the effect of lipid encapsulation on the genipin bioavailability, the cellular uptake was carried out using Caco-2 cells. An MTT assay was conducted first to establish the potential toxicity of the empty microemulsion, free genipin, and GME on the Caco-2 cells. The empty microemulsion did not significantly influence the cell viability within concentrations below 1500 µg/mL ( Figure 5A). As shown in Figure 5B, unencapsulated genipin and GME had no significant influence on the cell viability of Caco-2 cells within the concentration range of 0-10 µg/mL. Compared with genipin, the GME did indeed have an appreciable impact on the Caco-2 cell viability when the concentrations increased up to 25 µg/mL. 
The cell viability of the control group was approximately 89.97% at a genipin concentration of 25 µg/mL, but for the GME treatment the cell viability decreased to 77.31% at the same concentration. It has been reported that microemulsions show dose-dependent cytotoxicity [67], which may lead to reduced cell viability at high GME concentrations. For this reason, we selected a genipin concentration of 10 µg/mL to evaluate cellular uptake, ensuring that the GME did not exert a significant effect on the viability of the Caco-2 cells.

The cellular uptake of unencapsulated genipin and GME was evaluated in Caco-2 cells. It was first investigated indirectly using CLSM. The cells were treated with free coumarin 6 (C6 group), a coumarin 6 and genipin mixture (mix group), or coumarin 6-labeled GME (GME group) for 0.5 h, 1.0 h, 2.0 h, and 4.0 h. As shown in Figure 6A, the coumarin 6-labeled GME group showed an obvious fluorescence signal at 1.0 h, whereas the fluorescence signal of the other two groups appeared only at 2 h. Moreover, the fluorescence intensity of the GME group was much stronger than that of the other groups. These results indicated that the microemulsion could improve both the rate and the quantity of cellular uptake of genipin. The cellular quantities of genipin were further analyzed using HPLC. As demonstrated in Figure 6B, the intracellular accumulation of genipin in the drug-loaded microemulsion group was much higher than that in the free genipin group (p < 0.05) at all test times. Specifically, the Caco-2 intracellular accumulation of genipin increased with incubation time within 2 h (Figure 6B), whereas the quantity of genipin decreased at 4 h. Genipin can react spontaneously with amino acids, and this chemical instability may be the reason for the decrease in its concentration [13].

The findings showed that, with the microemulsion system, genipin was effectively internalized into the Caco-2 cells and accumulated in the cytoplasm. This enhanced penetration may be due to the presence of surfactants, which increased the permeability of the cell membrane and favored the entry of genipin. In addition, the small particle size of the GME promoted better hydrophobic interaction with the Caco-2 cell membrane, and the formation of a small-particle-size emulsion enhanced the uptake of genipin, thereby increasing its bioavailability. The GME was an anionic nanoparticle, which can be endocytosed by interacting with positively charged sites of proteins in the cell membrane [68,69], and, owing to the repulsive interaction with the negatively charged cell surface, genipin can be readily captured by the cell [70]. For the cell lines studied, the internalization of nanoparticles is highly dependent on size, with particles passing through the cell membrane only when their size is between 10 and 30 nm. The small droplet size and the amphiphilic nature of the surfactant in the microemulsion therefore promoted genipin diffusion and receptor-mediated endocytosis; in this study, the GME droplets were smaller than 30 nm, and a similar mechanism may account for the increased cellular uptake of genipin [71].
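The uptake values compared above (µg genipin per mg cell protein) follow from the normalization described in the methods, and group comparisons of this kind can be made with a two-sample t-test, as stated in the statistical analysis section. The sketch below illustrates both steps with placeholder numbers; it is not the authors' code.

```python
# Minimal sketch with placeholder numbers: normalize HPLC-measured genipin in a
# cell lysate to the BCA protein content (ug genipin per mg protein), then
# compare the free-genipin and GME groups with a two-sample t-test (n = 3).
import numpy as np
from scipy import stats

def uptake_per_mg_protein(genipin_ug_per_ml, protein_mg_per_ml):
    """Uptake expressed as ug genipin per mg cell protein (the lysate volume cancels)."""
    return genipin_ug_per_ml / protein_mg_per_ml

free_genipin = np.array([uptake_per_mg_protein(c, p)
                         for c, p in [(0.30, 1.5), (0.36, 1.6), (0.33, 1.4)]])
gme          = np.array([uptake_per_mg_protein(c, p)
                         for c, p in [(0.75, 1.5), (0.82, 1.6), (0.70, 1.4)]])

t_stat, p_val = stats.ttest_ind(gme, free_genipin)
print(f"free: {free_genipin.round(3)}, GME: {gme.round(3)}")
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```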
Protective Effect of GME on Aβ-Induced Cytotoxicity of PC12 Cells

Accumulated research data show that genipin possesses therapeutic potential for central neurodegenerative diseases, such as Alzheimer's disease (AD) and Parkinson's disease (PD) [72]. To evaluate the influence of the microemulsion on its biological activity, the protective effect of the GME against Aβ-induced PC12 cell cytotoxicity was investigated. First, the potential toxicity of the GME to PC12 cells was examined using an MTT assay. Although the cell viability declined with increasing GME concentration, it remained above 85% at genipin concentrations within the range of 0-10 µg/mL (Figure 7). Therefore, we selected genipin concentrations within the range of 0-10 µg/mL for our PC12 cell protection studies.
To select an appropriate Aβ1-42 concentration for inducing cell damage, the PC12 cells were exposed to different concentrations of Aβ1-42 for 24 h and the cell viability was examined. As shown in Figure 8A, the survival rate of the PC12 cells declined in an Aβ1-42 dose-dependent manner, and at 40 µM the cell survival rate dropped below 50%. Considering the cell viability results, 20 µM Aβ1-42 was used to induce cell damage in the subsequent experiments.

Figure 7. Cytotoxicity of GME on PC12 cells. Error bars are SD (n = 6); ** p ≤ 0.01 compared with the control group.

To evaluate the protective effect of GMEs against Aβ-induced PC12 cell cytotoxicity, the PC12 cells were pre-treated with different concentrations of free genipin or GME (final genipin concentrations of 1.25, 2.5, 5, and 10 µg/mL) for 2 h and were then treated with β-amyloid (Aβ1-42, 20 µM) for 24 h. As shown in Figure 8B, both free genipin and the GMEs protected against Aβ1-42-induced PC12 damage in a dose-dependent manner. As expected, the PC12 cells pre-treated with GME (2.5, 5.0, and 10 µg/mL of genipin) had a significantly higher cell viability than those treated with free genipin (p < 0.05). These results indicate that the GME better protected the PC12 cells from the toxicity of Aβ1-42, and they suggest that GMEs may significantly increase the cellular uptake of drugs and serve as an efficient delivery method for the drug treatment of CNS disorders [73].

Conclusions

In this study, genipin microemulsions (GMEs) were developed using EO as the oil phase and Tween 80: CO-40/ethanol (surfactant/co-surfactant ratio 2:1) as Smix, with an oil: Smix ratio of 9:1. The GMEs had a small droplet size (16.17 ± 0.27 nm) and an encapsulation efficiency (EE) of 64.11 ± 0.58%, and demonstrated relatively high environmental (temperature, pH, and ionic strength) and simulated gastrointestinal digestion stability. The GMEs significantly improved the cellular uptake of genipin and its protective effect against Aβ1-42-induced PC12 cell damage. These results indicate that the lipid-based microemulsion delivery system could serve as a promising approach to improve the application of genipin in the food and pharmaceutical industries.
9,835
2022-03-01T00:00:00.000
[ "Biology", "Chemistry" ]
The epidemiology and aetiology of diarrhoeal disease in infancy in southern Vietnam: a birth cohort study Highlights • The diarrhoeal disease burden in a large, prospective infant cohort in Vietnam is defined.• Minimum incidence of clinic-based diarrhoea in infants: 271/1000 infant-years.• Rotavirus was most commonly identified, followed by norovirus and bacterial pathogens.• Frequent repeat infections with the same pathogen within 1 year.• Inclusion of rotavirus in the immunization schedule for Vietnam is warranted. Shigella spp, Salmonella spp, and Campylobacter spp as the aetiological agents of diarrhoea in hospitalized children under 5 years of age in Ho Chi Minh City (HCMC). 6 How well these data represent the community level burden of diarrhoeal disease is unclear. Further, these data suggest that the majority of hospitalized diarrhoea cases are in children <12 months of age, 6 which is the pivotal age group at which rotavirus vaccine should be targeted. Longitudinal community cohort studies provide an opportunity to evaluate the epidemiology and disease burden of diarrhoea to a fuller scale than hospital-based research. However, few studies have evaluated the incidence of diarrhoea in Vietnam, 3,7 and to date none have focused exclusively on the tropical south of the country. To address this knowledge gap, we sought to define the burden, aetiology, and risk factors for diarrhoeal disease through community cohorts of infants in two distinct settings in this densely populated, rapidly industrializing region. A better understanding of the epidemiology and aetiologies of diarrhoeal disease in southern Vietnam will inform rational public health interventions. Description of the cohort The cohort structure and methodology have been described previously. 8 Briefly, pregnant women were enrolled from 2009 to 2013 in southern Vietnam in two locations: women resident in central HCMC, the largest city in southern Vietnam, were enrolled at Hung Vuong Obstetric Hospital in HCMC; women resident in Cao Lanh District, Dong Thap Province, which is 120 km southwest of HCMC and situated in a semi-rural setting, were enrolled at Dong Thap Provincial Hospital. After delivery, infants were enrolled and followed up for the first 12 months of life with routine visits at 2, 4, 6, 9, and 12 months of age. A brief questionnaire detailing growth and illness in the preceding period since the last visit was administered, and a series of samples (blood, throat swab, nasopharyngeal swab) was collected at each routine visit. Diarrhoeal episode detection During the 12 months of follow-up, passive detection of diarrhoeal illness was performed, in which families were asked to take their child to a designated study clinic if the infant was unwell. At presentation, a brief clinical report was collected, as well as a stool sample. If the child was admitted, a detailed clinical evaluation was recorded. Blood samples were collected at the discretion of the treating physician. A new episode of diarrhoea was defined by !7 days between the onset dates of symptoms. Diarrhoea was defined as three watery loose stools or at least one bloody/mucoid diarrhoeal stool within 24 h, 9 or an increase in stool frequency as determined by the parent's judgement. A secondary source of data on diarrhoeal episodes were selfreports by the mother of diarrhoeal illness in their infant for the period prior to each study visit. 
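The episode definition above (a new episode requires at least 7 days between onset dates) can be applied programmatically when collapsing repeat presentations into discrete episodes. The sketch below is illustrative only, with a hypothetical data layout; it is not the study's analysis code.

```python
# Minimal sketch (not the study's code) of grouping stool presentations into
# discrete diarrhoeal episodes: a presentation starts a new episode when its
# onset date is at least 7 days after the onset of the previous episode for the
# same child. The data layout is hypothetical.
from datetime import date

def assign_episodes(onset_dates, min_gap_days=7):
    """Return an episode index for each onset date (dates for one child)."""
    episodes = []
    episode_id = 0
    last_onset = None
    for d in sorted(onset_dates):
        if last_onset is None or (d - last_onset).days >= min_gap_days:
            episode_id += 1          # gap of >= 7 days -> new episode
            last_onset = d
        episodes.append(episode_id)
    return episodes

# Example: three presentations, the second within a week of the first.
onsets = [date(2012, 3, 1), date(2012, 3, 5), date(2012, 3, 20)]
print(assign_episodes(onsets))   # -> [1, 1, 2]
```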
Laboratory analysis Stool samples collected from diarrhoeal episodes were stored at 4 °C until transport within 24 h and were then stored at −80 °C until further testing. One-step reverse transcriptase (RT) PCRs for rotavirus and norovirus genogroups I and II (GI and GII) were performed using RNA Master Hydrolysis Probes (Roche Applied Sciences, UK) on a LightCycler 480 (Roche Applied Sciences, UK) with the primers and probe sequences and PCR cycling conditions described previously. 10 Real-time PCR cycling conditions for Shigella (target ipaH) and Campylobacter (Campylobacter jejuni target: hipO; Campylobacter coli target: glyA) were as follows: 95 °C for 15 min, followed by 40 cycles of 95 °C for 5 s, 60 °C for 30 s, 72 °C for 30 s, as described previously. 11,12 Salmonella was detected using an in-house assay targeting the invA gene, which is conserved across the eight Salmonella subspecies, with cycling conditions as follows: 95 °C for 15 min, followed by 45 cycles of 95 °C for 5 s, 60 °C for 60 s. The sequences of the primers and probe for the invA gene were as follows: forward 5′-TCATCGCACCGTCAAARGA-3′, reverse 5′-CGATTTGAARGCCGGTATTATT-3′, probe: 5′-FAM-ACGCTTCGCCGTTCRCGYGC-BHQ1-3′. The limit of detection was 5 copies/reaction. Stool samples were not available from self-reported diarrhoea episodes. Statistical analyses Two separate incidence measurements were calculated: one evaluating diarrhoeal presentations at a study clinic and/or admitted to hospital, and the other based solely on self-reported diarrhoeal illness derived from information collected at the routine follow-up visits. These data were not merged. Infant-years of observation (IYO) for each infant were derived from the date of birth and date of exit from the study due to either completion of follow-up, documented early withdrawal, or loss to follow-up, defined by the last routine visit or illness presentation, whichever was later, if the full 12-month follow-up period was not completed. Pathogen-specific incidence estimates were not calculated due to low counts, but the incidence of aetiological groups (bacterial, viral, or mixed infection) was evaluated. Comparisons between groups were made using the Kruskal-Wallis test for continuous variables with non-normal distributions and the Chi-square test for categorical variables. Multivariable negative binomial regression was used to identify risk factors associated with severe diarrhoea presenting to a study clinic and/or admitted to hospital. Regression was performed independently for each study site due to the heterogeneity in risk profiles between HCMC and Dong Thap. Factors were included in the multivariable model according to hypothesized associations determined a priori (maternal characteristics, socioeconomic indicators, household elevation), as well as those found to be significantly associated in the univariable analysis (p < 0.05). All analyses were performed in Stata v. 13 (StataCorp, College Station, TX, USA). Spatial clustering analyses To investigate the presence of spatial clustering of diarrhoeal illness, we used a Bernoulli model with all diagnosed episodes of diarrhoea as cases, and children without any reported history of diarrhoeal episodes as the background population, using SaTScan v. 9.1.1 (http://www.satscan.org/). Each pathogen in turn was also considered as a case, with the control group remaining all children in the cohort with no reported episode.
For the analyses, the upper limit for cluster detection was specified as 50% of the study population. The significance of the detected clusters was assessed by a likelihood ratio test, with a p-value obtained by 999 Monte Carlo simulations generated under the null hypothesis of a random spatiotemporal distribution. Ethics Four hospitals in HCMC (Hospital for Tropical Diseases, Hung Vuong Obstetric Hospital, District 8 Hospital, Children's Hospital 1) and Dong Thap Provincial Hospital participated in the study. The protocol was approved by the institutional review boards of all these hospitals, as well as the Oxford Tropical Research Ethics Committee. Written informed consent was obtained from all participants. Baseline characteristics of the cohorts From July 2009 to December 2013, a total of 6706 infants were enrolled in the birth cohort from 6679 mothers (27 sets of twins). A total of 6239.4 infant-years of observation (IYO) were recorded for these children. In Dong Thap, there were 2458 infants enrolled with 2199.4 IYO, and in HCMC there were 4248 infants enrolled with 4040 IYO. The full 12-month follow-up was completed by 87% of the cohort, with 33% (289/884) of early exits occurring after at least 9 months of cohort membership. Slightly over half of enrolled babies were male (52%), with roughly 5% being of low birth weight (<2500 g) ( Table 1). The majority of children (91%) were breastfed after birth; 33% were exclusively breastfed. The use of milk formula after birth was more frequently reported in HCMC (92%) compared with Dong Thap (26%). Households in Dong Thap were more likely to have characteristics of lower socioeconomic status compared to HCMC, with a higher prevalence of household crowding, a lack of flush toilets, use of river water as the primary water source, and lower maternal education level (Table 1). Incidence of diarrhoeal disease During the follow-up period there were 1690 diarrhoeal presentations detected through clinic-based surveillance. The majority of these illnesses were treated on an outpatient basis (91.4%). The minimum incidence of diarrhoeal presentations estimated for the cohort as a whole was 271/1000 IYO. In Dong Thap, this figure was 604.3/1000 IYO and in HCMC was 89.4/1000 IYO. The minimum incidence estimates for hospitalized diarrhoeal illness in each location were 57.3/1000 IYO and 4.5/1000 IYO, respectively. There were 1656 self-reported diarrhoeal episodes at routine follow-up visits, corresponding to an incidence of 265.4/1000 IYO for the entire cohort. The incidence of self-reported diarrhoea was similar between the study sites: in Dong Thap it was 318.3/1000 IYO and in HCMC it was 236.6/1000 IYO. Stool samples were collected from a far greater proportion of diarrhoeal episodes in the Dong Thap cohort compared to the HCMC cohort (86% vs. 47%). For inpatient diarrhoeal episodes in particular, the completeness of stool sample collection was far higher in Dong Thap (103/126; 82%) than in HCMC (5/18; 28%). This was due to difficulties in identifying hospital admissions of cohort members in real time in HCMC. The proportion of samples positive for at least one pathogen did not differ between sites, but was collectively higher among inpatient samples than outpatient samples (69% vs. 56%). The distribution of aetiologies differed significantly between HCMC and Dong Thap (Chi-square p < 0.001), with viral infections more common in HCMC and bacterial and mixed viral/bacterial infections more common in Dong Thap ( Figure 1A). 
Mixed viral/bacterial infections were more common among hospitalized diarrhoeal cases than among outpatients; however, the overall distribution of aetiologies was not significantly different between outpatients and inpatients (Chi-square p = 0.09; Figure 1B). Among all detected diarrhoeal episodes, infections with a mixed viral/bacterial aetiology were the most likely to be admitted to hospital (26%, 35/133), followed by viral infections (17%, 67/391). Repeat infections with the same pathogen were identified in a subset of infants. Rotavirus was identified in 365 infants, 32 of whom (9%) had at least two discrete rotavirus infections separated by at least 7 days. This proportion was similar for norovirus (15/163), Shigella (10/108), and Campylobacter (12/141). Of the 120 infants with Salmonella infection, 15 (13%) had at least two distinct episodes in which Salmonella was detected. Figure 2 shows the distribution of the interval between repeated infections, by pathogen; the numbers below each pathogen label in that figure indicate the total number of secondary or tertiary infections for that pathogen. The median interval between repeated infections ranged from 37 days for Salmonella to 106 days for norovirus.

Clinical characteristics by aetiological group
Amongst all 1690 diarrhoeal episodes detected by clinic-based surveillance, the median age of the affected infants was 6.5 months (interquartile range (IQR) 4.6-8.7 months). A total of 55% (n = 934) of all diarrhoeal cases were male. Amongst episodes with an identified aetiology, infants with mixed infections tended to be slightly older (median 8 months) compared to those in the other aetiological groups (Table 2). The median axillary temperature at hospital admission was 37.8 °C (IQR 37-38.5 °C), which did not differ between the aetiological groups. The average length of stay in hospital for all admitted diarrhoeal episodes was 5 days (IQR 3-7 days).

Risk factors for diarrhoeal disease
Risk factors for diarrhoea were investigated by site. In the unadjusted analysis, increased maternal education was protective against diarrhoea in HCMC, whereas male sex, household crowding, use of a piped water supply, and filtering drinking water were all significant risks (Table 3). In a multivariable analysis, maternal education (incidence rate ratio (IRR) 0.75, 95% confidence interval (CI) 0.56-1.00) remained independently associated with protection, and household crowding (≥2 people/room; IRR 1.45, 95% CI 1.07-1.95) along with filtering drinking water (IRR 1.81, 95% CI 1.17-2.81) remained risk factors in this setting. In Dong Thap, the most important protective factors included maternal age at delivery, maternal education, and filtering of the drinking water supply (Table 4). Male sex and the lack of a flush toilet were risk factors in this setting. After adjusting for confounding, male sex remained the only strongly associated risk factor (IRR 1.20, 95% CI 1.04-1.40), and maternal age (IRR 0.98, 95% CI 0.96-0.99) and education (IRR 0.75, 95% CI 0.62-0.91) remained protective.

Spatial clustering
As shown in Figure 3, in Dong Thap there was evidence of spatial clustering for each detected pathogen. For all-cause diarrhoea, a cluster was identified with a radius of 6.7 km in the northwest region of the study area (relative risk (RR) 1.79, p < 0.001). All of the pathogen-specific clusters centred generally around the same area, in the more rural part of the Dong Thap study area, with radii ranging from 6.6 km (rotavirus) to 12.4 km (Campylobacter) and RRs from 2.3 (rotavirus) to 3.7 (Shigella).
No significant spatial clustering was identified in HCMC (data not shown). Discussion Diarrhoea remains one of the most common yet preventable conditions affecting the poorest children globally. 1 Through a large, longitudinal birth cohort, a substantial burden of diarrhoeal disease in the first year of life was identified in southern Vietnam, with an estimated minimum incidence of 271/1000 IYO. This is an order of magnitude less than an estimate in infants aged <12 months from the late 1990s in rural Hanoi (3.3/child/year), 7 yet it is higher than the incidence estimated in children under 5 years of age in central Vietnam in 2001-2003 (115/1000 child-years). 3 Differences in disease incidence may have arisen from study design, as the study from rural Hanoi included partially active surveillance. Furthermore, although Dong Thap seemingly had a much higher minimum incidence (604/1000 IYO) than HCMC (89/1000 IYO), the large difference is very likely due to underascertainment in HCMC, as the number of healthcare providers in this urban setting is much greater than in semi-rural Dong Thap, 13 and cohort participants therefore had greater opportunity to seek care at non-study clinics. Viral infections represented the largest burden amongst all diagnosed diarrhoeal presentations in this study, confirming an earlier hospital-based study in HCMC. 6 The distribution of aetiologies between the two sites was comparable, with mixed viral infections identified more frequently in HCMC. This may be confounded by under-ascertainment of hospitalized cases, in particular in HCMC, since hospitalized cases were more likely to be bacterial. Campylobacter was the most frequently detected bacterial pathogen in our cohort, which is in contrast to the recently published Global Enteric Multicenter Study (GEMS), which identified Shigella as the third most common cause of disease, behind rotavirus and Cryptosporidium, in moderate to severe diarrhoea in the first year of life across seven different Asian and African countries. 2 As no control specimens were collected from healthy children in the present study, the aetiological role of the detected organisms cannot be determined. However, results from a hospital-based study in HCMC suggest that these organisms are not frequently identified in children without diarrhoea, with only 13% of approximately 600 non-diarrhoeal controls positive for an enteric pathogen. 6 Through this work a large burden of potentially vaccinepreventable rotavirus disease was identified in infants. Over half of all samples with an identified aetiology were positive for rotavirus, with 13% of all rotavirus episodes admitted to hospital. Rotavirus vaccine is available as a 'user pays' product in Vietnam (predominantly the Rotarix monovalent vaccine (GlaxoSmithKline)), but uptake is low due to the prohibitive cost (US$ 70-80) and a lack of vaccine availability in many regions, including Dong Thap. Only 24% of children in HCMC were vaccinated against rotavirus in our cohort. The Vietnamese Ministry of Health has sponsored a locally produced, live-attenuated monovalent rotavirus candidate vaccine, with some success in the early stages of clinical evaluation. 14 Previous work has shown that rotavirus vaccination, if GAVI-subsidized, would be cost-effective in Vietnam, 15 and safe when co-administered within the current expanded programme on immunization (EPI) structure. 
16 Furthermore, immune responses (IgA and serum neutralizing antibody) measured against the pentavalent vaccine (RotaTeq) in Vietnamese children were shown in one study to be comparable to those among children in Latin America and Europe. 17 This suggests that rotavirus vaccination in Vietnam may not suffer from the same level of reduced immunogenicity that has been observed to occur with orally administered enteric vaccines in developing countries. 18 The majority of the identified infections were found to have occurred after 6 months of age, potentially due to the waning of protective maternal antibody and generally high rates of breastfeeding after birth, 19,20 and increased exposure to pathogens with the start of consumption of solid foods. The risk factors for diarrhoeal disease identified through this work, including household crowding, low maternal age, and male sex, are generally consistent with the literature. 4,5 In HCMC, drinking filtered piped water was a significant risk for diarrhoeal disease, although the number of families reporting filtering was relatively low. This may be due to the use of ceramic filters that have pores too large to mechanically prevent viruses from entering the drinking water supply. 21 The absence of a measurable protective effect of rotavirus vaccination in HCMC likely reflects the imperfect case ascertainment, as well as the fact that approximately 50% of diarrhoeal episodes with a known aetiology were associated with pathogens other than rotavirus. The identification of increased spatial risk for diarrhoeal disease in the north-western region of Cao Lanh District in Dong Thap may represent a hotspot of transmission, due potentially to poor sanitation or waste management practices. The most important limitation in this work was the passive nature of diarrhoeal disease episode detection. Although the staff made every effort to ensure disease episodes were recorded, an unknown number of infants with diarrhoeal disease may have attended clinics other than ours, especially in HCMC, and it is acknowledged that the interpretation of the present results is dependent on this limitation. Therefore, the minimum incidence measurements herein likely underestimate the true burden, particularly in HCMC, and the risk factor and spatial analyses may be biased by misclassification of some infants with undetected diarrhoeal illness. This may also have affected the conclusions on diarrhoeal aetiology, if the distribution of pathogens among episodes from which no specimen was available differed from those specimens tested. The overall loss to follow-up rate was low, although such bias may also be present and important to consider. Finally, the number of pathogens screened for was limited and may explain the lack of an identified pathogen in almost half of the cases. In particular, screening was not performed for any viruses beyond norovirus and rotavirus, and parasites and diarrhoeagenic Escherichia coli, which are known to be prevalent amongst children with diarrhoea in industrializing countries, were not investigated. 2 Further work to more fully determine the epidemiology of diarrhoeal disease in this setting is warranted, particularly in the face of emerging antimicrobial resistance. 6,22 Active, communitybased surveillance of high-risk populations would provide a more accurate estimation of the true extent of the burden. 
Furthermore, as roughly 40% of diarrhoeal episodes collected in the present cohort study lacked a final diagnosis, investigation into the prevalence of additional pathogens, particularly Cryptosporidium, 2 would help local clinicians to better understand the range of potential aetiologies and corresponding therapies for their patient population. To explore these questions, enrolment into a cohort study of young children aged 1-5 years, as an extension of this birth cohort study, has recently been completed, which includes active surveillance for diarrhoeal disease and diagnosis of viral and bacterial gastrointestinal pathogens. 23 Through this, it will also be possible to explore the relative pathogenicity of isolated organisms as well as distinguish reinfection from long-term carriage, due to the collection of stool from healthy children as well. In conclusion, the most comprehensive epidemiological description of paediatric diarrhoea in infancy in southern Vietnam, to date, is presented herein. A high burden of diarrhoeal disease in infants under the age of 12 months in both an urban and semi-rural setting is documented, with a large proportion due to vaccinepreventable rotavirus infection. Future efforts to integrate either a GAVI-subsidized or a domestically produced rotavirus vaccine into the national EPI schedule should be pursued.
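As a compact illustration of the incidence and risk-factor calculations described in the Methods above, the following Python sketch computes an incidence per 1000 infant-years and fits a negative binomial model with log observation time as an offset. The data and variable names are hypothetical, the dispersion parameter is fixed for simplicity, and this is not the study's actual Stata code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-infant summary: episode counts, observation time and covariates.
df = pd.DataFrame({
    "episodes": [0, 2, 1, 0, 3, 1, 0, 2],
    "iyo":      [1.0, 0.9, 1.0, 0.5, 1.0, 0.8, 1.0, 0.7],   # infant-years of observation
    "male":     [1, 0, 1, 1, 0, 0, 1, 1],
    "crowding": [0, 1, 0, 1, 1, 0, 0, 1],                    # >=2 people per room
})

# Minimum incidence per 1000 infant-years, as reported in the Results.
incidence = 1000 * df["episodes"].sum() / df["iyo"].sum()
print(f"incidence: {incidence:.1f} episodes per 1000 IYO")

# Negative binomial regression with log(IYO) as an offset; exponentiated
# coefficients are incidence rate ratios (IRRs). alpha is fixed here for
# simplicity; in practice the dispersion parameter would be estimated.
fit = smf.glm("episodes ~ male + crowding", data=df,
              family=sm.families.NegativeBinomial(alpha=1.0),
              offset=np.log(df["iyo"])).fit()
print(np.exp(fit.params))   # IRRs
```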
Heat Transfer Analysis of a Co-Current Heat Exchanger with Two Rectangular Mini-Channels : This paper presents the results of research on heat transfer during fluid flow in a heat exchanger with two rectangular mini-channels. There was Fluorinert FC-72 flow, heated by the plate in the hot mini-channel, and co-current flow of distilled water in the cold mini-channel. Both fluids were separated by the copper plate. A thermal imaging camera was used to measure the temperature distribution of the outer surface of the heated plate. The purpose of the calculations was to determine the heat transfer coefficients at the contact surfaces: the heated plate—FC-72 and FC-72—the copper plate. Two mathematical models have been proposed to describe the heat flow. In the 1D approach, only the heat flow direction perpendicular to the fluid flow direction was assumed. In the 2D model, it was assumed that the temperature of the heated plate and FC-72 and the copper plate meet the appropriate energy equation, supplemented by the boundary conditions system. In this case, the Trefftz functions were used in numerical calculations. In the 1D model, the heat transfer coefficient at the interface between FC-72 and the copper plate was determined by theoretical correlations. The analysis of the results showed that the values and distributions of the heat transfer coefficient determined using both models were similar. Introduction Heat exchangers are currently used in all branches of industry in which heat transfer occurs. They are applied in the computer and IT industry for cooling integrated circuits, computer motherboards, server racks, etc. Recently, compact heat exchangers have also been looking for solutions to improve the work of renewable energy devices, such as PVT photovoltaic cells. The constant improvement and tightening of environmental protection requirements, as well as the growing efficiency of devices, contribute to the search for compact heat exchangers that would be increasingly energy efficient, effective and based on environmental-friendly fluids. The desire to ensure high efficiency of heat exchangers is a part of the common trend towards miniaturization of heating and cooling technology devices. Studies of compact heat exchangers with small gaps of different geometries were described in [1]. The authors demonstrated that the micro-channel with geometry produced in the plane surface jet in comparison to the straight one may dissipate more heat. It was presented that the flux position influences the total performance of the heat exchanger and needs to be optimized for a specific condition and geometry. The work [2] concerns experimental heat transfer data investigated on the plate heat exchanger dedicated for use in hydraulic cooling systems. The heat exchanger consisting of thin metal welded plates of stainless steel was tested. On the basis of the experimental data, a correlation was estimated for Nusselt number as a function of other dimensionless numbers, namely, Reynolds number and Prandtl number. In addition to convective and overall heat transfer coefficients, exchanger effectiveness was determined. Heat transfer in the plate heat exchanger with modified surface was discussed in [3]. In the research, the corrugated plate heat exchanger (PHE) was tested as a commercial model. On the basis of the results of investigation of the water-ethanol system, it was noticed that the heat transfer coefficient on the ethanol side achieved higher values for the modified heat exchanger. 
Furthermore, on the water side, higher values of the coefficient were gained than on the commercial one. The paper [4] describes the gasketed plate heat exchanger (GPHE) as a model commonly used in industries such as chemical processes and refrigeration. It is a type of heat exchanger that is used in condensation or evaporation systems. Due to the complex design of the corrugated surface, the fluid flow during its work should be highly turbulent. The plate-fin heat exchanger (PFHE) is a type of compact heat exchanger that has a lot of applications such as vehicle radiators, air conditioners and gas liquefiers. In paper [5], the three following types of nanofluid were applied as working fluids flowing through a PFHE: SiO2, TiO2 and Al2O3. The authors tested applying the nanofluids in a heat exchanger to influence the heat transfer rate. The effects on the thermophysical properties and heat transfer characteristics were realized in comparison to those obtained for the base fluid. The results showed that the thermal conductivity and the heat transfer coefficient increased with the addition of nanoparticles and TiO2. Today, there is increasing attention to heat exchangers with mini-or micro-channels. Numerous articles have been published concerning heat transfer during flow in minichannel test sections. A literature review on the investigation of flow boiling heat transfer in micro-scale channels, including physical mechanisms, models and correlations, was presented in [6]. The paper [7] presents mini-channel heat exchangers that were applied in small-scale organic Rankine cycle (ORC) installations. The authors have shown four 1-dimensional models of the wall thermal resistance in heat exchangers with rectangular mini-channels. The first model was with a single wall that separated two fluids. In the second model, the total volume of intermediate walls between layers of mini-channels and their side walls were taken into account. Two other models assumed the thermal resistance of the minichannel walls. After analyzing the models, it was indicated that the thermal resistance of the metal walls could be neglected. Moreover, models show that the optimal wall thickness is relatively small taking into account plastic walls. In the study [8], the authors focused on the boiling of deionized water during flow in a horizontally oriented rectangular mini-channel. Six types of flow patterns were noticed. An amendment correlation for the heat transfer coefficient was also proposed. In [9], the effect of channel size on the temperature field of the battery modules was tested. Inlet boundary conditions were also taken into account in the analysis. The channel width of the cooling plates was found to be highly influential on the temperature of the battery module. Compared to the other designed channel, the advantages of low manufacturing cost and low flow resistance of the rectangular flow with the straightshaped channels were underlined. Researchers are examining different cross-sections of mini-channels, because geometries have a strong influence on the flow characteristics. In the works [10][11][12], the problem of boiling heat transfer during flow in an annular mini-gap was investigated. Boiling heat transfer in a small circular and a small rectangular channel with refrigerant, R-12 was explored in [13]. The effects of channel geometry and fluid properties on heat transfer were presented. Furthermore, heat transfer mechanisms in small channels were analyzed. 
The results were compared with the correlations of other authors. In numerical simulations for the heat transfer process, commercial software is often used, such as ANSYS CFX/Fluid [14], ADINA [15] and Simcenter STAR-CCM+ Software [16]. In the paper, a number of well-known heat transfer correlations were used. There are many correlations in the available literature that describe flow boiling in conventional and small diameter channels. Analysis of the correlations of boiling heat transfer was presented in [17]. Dutkowski correlation concerns flow boiling heat transfer of R-134a, R-404a in channels with a diameter of 2.30 mm, circular mini-channels. According to this correlation, the Nusselt number is a function of two dimensionless numbers: the Reynolds number and the boiling number. Cooper correlation was proposed mainly for the description of pool boiling heat transfer [18]. Mikielewicz correlation is a modification of Dittus-Boelter correlation by introducing the vapour quality. The most commonly used correlation to determine the heat transfer coefficient for a fully developed turbulent flow in smooth tubes is the Dittus-Boelter correlation [19]. Mikielewicz correlation [20] was dedicated to determine the heat transfer coefficient in, both subcooled and saturated boiling regions. Previous studies conducted by the authors focused on heat transfer investigations, based on experimental data collected during fluid flow in the test section comprising: a singular mini-channel or a group of mini-channels of rectangular cross section [17,21,22] and with an annular mini-gap [10,12] Experiments covered single-phase forced convection, subcooled and saturated boiling regions during one medium flow in the channel system. Furthermore, mathematical models of heat transfer in the test sections and solution were proposed using methods based on the Trefftz functions helped by the FEM [22] and the hybrid Picard-Trefftz method [23] for time-dependent and stationary conditions. A model for subcooled flow boiling in mini-channels and numerical computations were performed using two commercial programs: ADINA [15] and Simens Simcenter STAR-CCM+ software [16]. Until now, the typical heat exchanger test section with two different working fluids flowing in two additional mini-channels has not been used for testing in the research setup. In general, the main novelties of this work cover are: • Testing a new construction of a mini heat exchanger, which required essential changes compared to previous constructions; • The proposition of a mathematical model to describe heat transfer in a mini heat exchanger with two channels, fixed on the mini-channel with hot fluid flow. In this paper, two-dimensional mathematical models of a co-current heat exchanger with two rectangular mini-channels are described. Solving the proposed system of energy equations with the appropriate set of boundary conditions leads to the solution of inverse identification problems. The parameters to be identified include the temperature of selected elements in the measuring section, temperature gradients at their boundaries and heat transfer coefficients between the working fluid and the walls of the channel with the flowing fluid. Solutions of inverse problems in engineering are highly sensitive to input data uncertainties and have troublesome ill-posedness. This sensitivity intensifies when three consecutive inverse problems are considered. Even advanced commercial software can fail in such cases. 
Thus, stable methods are necessary to solve inverse problems in engineering, including conjugate inverse problems. The Trefftz method [24] meets the requirements mentioned above as confirmed by the results shown in [25,26]. An extensive review of research devoted to solving inverse heat transfer problems in mini-channels using methods based on Trefftz functions, called T-functions for short, is presented in [23]. In the two-dimensional approach discussed in this paper, the Trefftz method allowed determining the temperature distribution in selected elements of the measuring section in the form of continuous and differentiable functions that exactly satisfy the relevant differential equations. Two sets of Trefftz functions were used: Trefftz functions specific to energy equation in fluids and harmonic functions [25]. The results of the calculation based on the experimental data were compared with the one-dimensional approach that used correlations derived from the literature, and the comparison results were consistent. Experimental Stand and Test Section The experimental setup is shown in Figure 1. Its main elements are: the test section with two mini-channels (1), circulating pumps (2,8), pressure meters (3), heat exchangers (4a,4b), filters (5,9), mass flow meter (6a), magnetic mass flow meter (6b), air separators (7,10), an ammeter (11), a voltmeter (12) and an infrared camera (13). The most important circuits realized on the experimental setup are two closed loops of the working fluids, including: one named the hot fluid circuit, in which the working fluid FC-72 circulates, and the other named the cold fluid circuit, in which distilled water flows. A data acquisition station (14), PC computer (15), and a power supply (16) complement the experimental stand equipment. The 3D view of the test section with mini-channels is shown in Figure 2a whereas the individual components of the test section are presented in Figure 2b. Its most important elements are three parallel plates, constituting the main walls of two mini-channels of a rectangular cross-section (each 1.5 mm deep, 24 mm wide and 240 mm long), Figure 3. Additional elements of the module are silicone gaskets. Through the mini-channels, separated by a copper plate, there is flow of FC-72 fluid in the hot mini-channel and distilled water in the cold mini-channel. The outer wall (10) of the hot mini-channel (1) is resistively heated. It is a thin plate (thickness δH = 0.45 mm) made of the Haynes-230 alloy. The electrodes made of Hastelloy X alloy (9) are connected to the power supply system. The thermal imaging camera is used to measure the temperature of the outer surface of the heated plate [27]. Heat transfer between co-current flowing media occurs through a copper plate (6) with a thickness of 0.3 mm. The outer wall of the cold mini-channel is a plate (5) with a thickness of 0.45 mm, also made of Haynes-230 alloy. K-type thermocouples and pressure gauges have been installed at the inlet and outlet of the collectors that supply each mini-channel. Experimental Procedure, Parameters and Errors After deration of the flow circuit installation and the test section, as well as stabilizing the pressure and flow rate of the fluids, the heat flux supplied to the heated plate is gradually increased by fluid adjustment of the current. The co-current flow of working fluids in mini-channels is forced by the operation of pumps. A thermal imaging camera is used to monitor the temperature of the outer heated plate surface. 
The main parameters and errors of the experiments are listed in Table 1.

General Assumptions
Local heat transfer coefficients between the FC-72 working fluid and the two channel walls (the Haynes-230 alloy heated plate and the copper plate) were determined assuming the following:
• steady state in the test section and temperature independence of the physical parameters of the test section's elements,
• negligible heat losses to the environment through the external surfaces of the test section (the system is insulated),
• convective heat transfer in the mini-channels.

Two-Dimensional Approach
Two dimensions are included in the 2D model of heat transfer in the test section: dimension x, parallel to the flow direction, and dimension y, perpendicular to the flow direction and representing the plate thicknesses and channel depths. It is assumed that the temperatures of the heated plate TH, the hot fluid TFC and the copper plate TCu satisfy the appropriate differential equations: the heated plate satisfies Equation (1), the FC-72 fluid satisfies Equation (2), and the copper plate satisfies

∇²T_Cu = 0   (3)

For the Poisson equation (1), the temperature of the outer (insulated) wall is assumed to be known from thermal camera measurements, and both walls perpendicular to it are assumed to be insulated; with these assumptions, the boundary conditions for Equation (1) are formulated accordingly. For Equation (2), the assumptions are as follows: the parabolic velocity profile w_FC(y) is parallel to the heated plate and satisfies the stated condition, and the temperature of the FC-72 fluid (flowing in the hot mini-channel) at the contact area with the heated plate is known at the mini-channel inlet and outlet. For the Laplace equation (3), adequate boundary conditions are adopted, that is, the FC-72 fluid and the copper plate are in perfect thermal contact and the walls perpendicular to them are insulated. Additionally, at the perpendicular walls the temperature TCu satisfies the conditions: (a) T_Cu(0, y) = max(T_FC,in, T_w,in), (b) T_Cu(L, y) = max(T_FC,out, T_w,out).

Solving Equations (1)-(3) with boundary conditions (4)-(12) leads to the solution of three consecutive inverse heat transfer problems within three adjacent areas (the heated plate, the FC-72 fluid, the copper plate) that differ in size and physical parameters. Inverse problems are ill-posed problems [29] that require stable and effective solving methods. This requirement is met by the Trefftz method [24], in which the unknown solution of a linear partial differential equation is approximated by a linear combination of functions (called Trefftz functions or T-functions) that exactly satisfy this equation. In this study, two sets of Trefftz functions were used: harmonic functions for the Laplace equation, and T-functions specific to the energy equation in fluids [25,26]. Two-dimensional temperature distributions are determined as in [26]. The known two-dimensional temperature distributions of the heated plate TH, the fluid TFC and the copper plate TCu allow determining the corresponding local heat transfer coefficients at the boundary between the FC-72 fluid and the heated plate (α1,2D(x)) and at the boundary between the FC-72 fluid and the copper plate (α2,2D(x)) from Equations (13) and (14), respectively. In Equations (13) and (14), the reference temperature of FC-72 is calculated as in [30].

One-Dimensional Approach
The results obtained from the 2D approach were verified with the proposed simplified 1D model, which includes only the dimension perpendicular to the flow.
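Before detailing that 1D model, the idea behind the 2D Trefftz fit described above can be sketched in Python. This is a minimal illustration for the Laplace-type case only (harmonic polynomial basis fitted by least squares to measured temperatures), not the authors' implementation; the wall-flux formula is the generic Newton-cooling form rather than the paper's exact Equations (13)-(15), and the FC-72 conductivity and the synthetic data are assumptions for the demonstration.

```python
import numpy as np

# Harmonic (Trefftz) basis for Laplace's equation: the real and imaginary parts
# of (x + i*y)**n are harmonic polynomials, so any linear combination of them
# satisfies the Laplace equation exactly.
def harmonic_basis(x, y, n_terms=6):
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for n in range(1, n_terms):
        cols += [np.real(z**n), np.imag(z**n)]
    return np.column_stack(cols)

# Fit T(x, y) ~ sum_k c_k v_k(x, y) by least squares to measured temperatures.
def fit_trefftz(x_meas, y_meas, T_meas, n_terms=6):
    A = harmonic_basis(x_meas, y_meas, n_terms)
    c, *_ = np.linalg.lstsq(A, T_meas, rcond=None)
    return c

def eval_T(c, x, y, n_terms=6):
    return harmonic_basis(np.atleast_1d(x), np.atleast_1d(y), n_terms) @ c

# Local heat transfer coefficient from the fitted field: conductive flux at the
# wall divided by the wall-fluid temperature difference (Newton's law of cooling).
# The exact reference temperature used in the paper follows Eqs. (13)-(15).
def alpha_local(c, x, y_wall, T_ref, k_fluid, dy=1e-5, n_terms=6):
    dTdy = (eval_T(c, x, y_wall + dy, n_terms) - eval_T(c, x, y_wall, n_terms)) / dy
    T_wall = eval_T(c, x, y_wall, n_terms)
    return -k_fluid * dTdy / (T_wall - T_ref)

if __name__ == "__main__":
    xs = np.linspace(0.0, 0.24, 40)                   # 240 mm channel length [m]
    # Synthetic "measurements" sampled from a harmonic field T = 330 + 20 x - 2e5 y.
    x_meas = np.concatenate([xs, xs])
    y_meas = np.concatenate([np.zeros_like(xs), np.full_like(xs, 1.0e-3)])
    T_meas = 330.0 + 20.0 * x_meas - 2.0e5 * y_meas
    c = fit_trefftz(x_meas, y_meas, T_meas)
    # k_fluid ~ 0.057 W/(m K) is an approximate conductivity for FC-72.
    print(alpha_local(c, xs[:3], 0.0, T_ref=310.0, k_fluid=0.057))  # a few hundred W/(m^2 K)
```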
The assumption about the insulation of the test section allows assuming that the entire volumetric heat flux generated inside the heated plate is transferred to the flowing FC-72 fluid according to Fourier's law (Equation (16)). Since the heated plate is very thin (δH = 4.5·10⁻⁴ m), the temperature derivative of the heated plate in Equation (16) can be replaced with a difference quotient. Therefore, by adjusting formula (13) to the one-dimensional model, the heat transfer coefficient in the contact area between the heated plate and the FC-72 fluid takes the form given in [27] (Equation (17)), where the temperature of the hot fluid (FC-72), T_FC,lin(x), is calculated from a linear temperature profile along the channel. Then, the heat transfer coefficient α2,1D(x) at the interface between the FC-72 and the copper plate is determined using selected theoretical correlations known from the literature (Table 2: the Cooper, Mikielewicz, and Dutkowski correlations, with remarks covering flow boiling, the saturated boiling region, and circular mini-channels).

Results and Discussion
Calculations were made for heat flux densities q in the range 12.26-33.93 kW/m². The values of the remaining experimental thermal and flow parameters are given in Table 3. Figure 5a shows the thermograms recorded with the thermal camera, corresponding to the temperature distributions on the outer heated plate surface. Calculations were made for the central cross-section of the mini-channel along its length, where the heated plate temperature changes as shown in Figure 5b. When designing heat exchangers, it is important to determine the overall heat transfer coefficient and the Fanning friction factor. For the co-current heat exchanger, the overall heat transfer coefficient related to the heat transfer area A was calculated from the log mean temperature difference, calculated as in [31], and Q, the average of the heat fluxes from the hot and cold mini-channels. The Fanning friction factor was obtained using Equation (24) from [32,33], where K is the channel aspect ratio, equal to the ratio of the channel width to the channel depth. The value of the overall heat transfer coefficient, the Fanning friction factor and average values of selected dimensionless numbers, mainly the Reynolds, Prandtl and Graetz numbers, determined for both mini-channels and each supplied heat flux, are given in Table 4. The Reynolds number values indicate laminar fluid flow in both mini-channels. In the 2D approach with the Trefftz method, the temperature distribution was first determined for the heated plate, then for the FC-72 fluid, and finally for the copper plate. The basic properties of the functions obtained in this way are given in [23,25]. Figure 6 shows the two-dimensional temperature distribution of the three areas (the heated plate, the flowing FC-72 fluid and the copper plate) for the parameters listed in Table 3 (pictorial view, not to scale). It can be noticed that the FC-72 fluid entering the hot mini-channel is heated by the Haynes-230 alloy plate mainly at the contact surface. The liquid next to the heated wall has a strongly increased temperature, which decreases significantly with distance from the wall. In the hot mini-channel axis, the fluid temperature is moderate and decreases as the cold channel is approached (see Figure 6a for the lower value of heat flux).
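Returning to the 1D estimate introduced at the start of this subsection, a minimal Python sketch follows. The helper names and numbers are illustrative only; the exact form of the paper's Equation (17) is not reproduced above, so the thin-plate conduction correction and the Haynes-230 conductivity value used here are assumptions (the correction is small for a 0.45 mm plate and some treatments neglect it).

```python
import numpy as np

def alpha_1d(x, T_wall_outer, q_w, delta_h, k_h, T_fc_in, T_fc_out, length):
    """1D estimate of the heated-plate/FC-72 heat transfer coefficient.

    x            : positions along the channel [m]
    T_wall_outer : outer-surface temperatures at x from the thermal camera [K]
    q_w          : heat flux transferred to the fluid [W/m^2]
    delta_h, k_h : thickness [m] and conductivity [W/(m K)] of the heated plate
    T_fc_in/out  : fluid temperatures at the inlet/outlet [K]
    """
    # Linear bulk-fluid temperature between inlet and outlet (T_FC,lin assumption).
    T_fc_lin = T_fc_in + (T_fc_out - T_fc_in) * x / length
    # Drop across the thin plate for uniform volumetric heating with an insulated
    # outer face; about 0.5 K here, so nearly negligible.
    T_wall_inner = T_wall_outer - q_w * delta_h / (2.0 * k_h)
    return q_w / (T_wall_inner - T_fc_lin)

if __name__ == "__main__":
    x = np.linspace(0.0, 0.24, 5)                     # 240 mm long mini-channel
    T_out = np.array([326.0, 330.0, 333.0, 335.0, 336.0])
    # k_h ~ 8.9 W/(m K) is an approximate room-temperature value for Haynes-230.
    a1 = alpha_1d(x, T_out, q_w=2.0e4, delta_h=4.5e-4, k_h=8.9,
                  T_fc_in=300.0, T_fc_out=318.0, length=0.24)
    print(np.round(a1, 1))                            # W/(m^2 K), illustrative only
```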
Based on data, it is obvious that in the hot channel, heat transfer proceeds by a single-phase forced convection starting from the channel inlet up to the middle along the FC-72 flow; then, the single-phase region is transferred to the subcooled boiling region near the channel outlet. Heat transfer is probably not greatly disturbed by the increase in the amount of bubbles in the flowing vapour-liquid mixture. The resulting heat transfer coefficient at the heated plate-fluid FC-72 interface reached values on the order of several hundred to a maximum above two thousand W/(m 2 K), increasing along the channel length, Figure 7. It confirms the authors' previously obtained results concerning asymmetrically heated minichannels with the flow of one fluid [15,23]. At the same time, cold water flowing in the second channel cools the copper plate and is, however, in the case of a higher heat flux, less intense (see Figure 6b). In the cold channel, the differences between the plates temperature and water temperature are small and single phase convection occurs in the entire mini-channel. For both approaches, the calculation results are similar, with higher heat transfer coefficients obtained from the 2D approach (Figure 7b) compared to the corresponding based on the 1D approach (Figure 6a). The values of the maximum relative differences for the heat transfer coefficients obtained with the two approaches (Equations (13) and (17)) and calculated as in [26] does not exceed 67% and decreases with increasing heat flux, Figure 8. (13) and (17). The values of the heat transfer coefficient α2 at the interface between FC-72 and the copper plate are shown in Figure 9. Figure 9a shows the variability of the heat transfer coefficient calculated by the Trefftz method, while Figure 9b-d show the coefficient values calculated based on the correlations given in Table 2. For the same experimental data, the α2 values are lower than those of α1, see Figures 7 and 9. As in the case of the heat transfer coefficient α1, the values of α2 increase with increasing distance from the channel inlet and the heat flux supplied to the heated plate. Heat transfer coefficients α2 at the FC-72-copper plate contact surface were calculated according to the 2D approach-Equation (14), Cooper correlation-Equation (19), Mikielewicz correlation-Equation (20) and Dutkowski correlation-Equation (22). The results are shown versus the distance from the mini-channel inlet in Figure 9a-d, respectively. The coefficients determined according to Dutkowski correlation show good agreement with the experimental results. The maximum relative differences, between the heat transfer coefficient calculated from Equation (14) and the heat transfer coefficients obtained from the correlations listed in Table 2, range from 12.69% (for q = 25.94 kW/m 2 ) to 70.5% (for q = 12.26 kW/m 2 ), Figure 10. Table 2. Analyzing the results shown in Figure 10 that illustrate the comparative results according to the 2D approach and obtained using selected correlations from the literature, the values of the maximum relative differences were lower for higher heat fluxes. The highest was reached, up to 70.5%, for q = 12.26 kW/m 2 when the Mikielewicz correlation was tested. The trend of decreasing the maximum relative differences with increasing heat flux supplied to the heated plate was detected, although the smallest values were obtained for q = 25.94 kW/m 2 (not the smallest heat flux value). 
Furthermore, it was observed that the smallest values of relative differences for each heat flux were achieved when the Dutkowski correlation was used in comparative analyses. The smallest relative differences equal to 12.69% were observed for q = 25.94 kW/m 2 when the Dutkowski correlation was applied in the calculation. For the 2D approach, the mean relative error of the heat transfer coefficient was calculated as in [34] while the uncertainties of the measurements were taken from Table 2. Analogically, the mean relative errors of the heat transfer coefficient were calculated for the 1D approach. Table 5 compares the mean relative errors of the heat transfer coefficients in both mathematical approaches. The values of the mean relative errors occurred as smaller while the one-dimensional approach was used in comparison to the 2D approach. For both calculation methods, the mean relative errors decrease with increasing heat flux supplied to the heated plate and reach the highest value of 14.7% for q = 12.26 kW/m 2 in the 2D approach. Figure 11 presents the values of the heat transfer coefficient together with error bars in the case where the highest value of the mean relative error was obtained, i.e., when q = 12.26 kW/m 2 . For the 1D approach, the errors are evenly distributed along the entire length of the mini-channel. In contrast, for the 2D approach, the errors increase with the distance from the inlet to the mini-channel, achieving the highest values at the outlet of the mini-channel. Conclusions The paper discusses the results of tests related to heat transfer during two fluid flows in two rectangular mini-channels separated by a copper plate while the test section was oriented vertically. Heat flux was supplied to the outer surface of the hot mini-channel wall in which there was a Fluorinert FC-72 flow. The co-current flow of distilled water occurred in the cold mini-channel. The objective of the calculations was to determine the heat transfer coefficients characterizing the transfer of heat from the heated plate to the FC-72 fluid and from the FC-72 fluid to the copper plate. Two approaches were proposed that describe the heat flow in the test section: one-dimensional (1D) and two-dimensional (2D) for which Trefftz functions were used in calculations. 
Based on the results of the experiments and their analysis, the following conclusions can be drawn:
• In the hot mini-channel, heat was transferred by single-phase convection, with subcooled boiling occurring near the channel outlet; the heat transfer coefficients determined for both contact surfaces, that is, the Haynes-230 plate-FC-72 fluid (α1) and the copper plate-FC-72 (α2), increased with increasing heat flux regardless of the calculation method chosen;
• In the cold mini-channel, the temperature differences between the plates and the distilled water were low, as single-phase convection occurs in the entire mini-channel;
• The resulting heat transfer coefficient at the heated plate-FC-72 fluid interface, α1, reached values on the order of several hundred to a maximum of more than two thousand W/(m²K);
• For the same experimental data, the α2 values are lower than those of α1;
• For both mathematical approaches, the calculation results are similar, with higher heat transfer coefficients from the 2D approach compared with the corresponding coefficients from the 1D approach;
• For the heat transfer coefficients at the heated plate-FC-72 contact surface (α1), the maximum relative differences between the results obtained from the two approaches (1D and 2D) decrease with increasing heat flux and do not exceed 67%;
• For the heat transfer coefficient at the FC-72-copper plate contact surface (α2), the maximum relative differences between the results (obtained from the 2D approach and the selected correlations) decrease with increasing heat flux supplied to the heated plate; the coefficients determined from the Dutkowski correlation showed good agreement with the experimental results, with the smallest relative differences, equal to 12.69%, obtained for q = 25.94 kW/m²;
• The values of the mean relative errors are smaller for the 1D approach compared to the 2D method and, for both calculation methods, decrease with increasing heat flux supplied to the heated plate, reaching the highest value of 14.7% for q = 12.26 kW/m² in the 2D approach. For the 1D approach, the mean relative errors are evenly distributed along the entire length of the mini-channel, while for the 2D approach, they increase with the distance from the inlet of the mini-channel.
Further research will address modification of the test section in order to provide temperature measurements from the plate separating the channels and to calculate heat transfer coefficients for the cold mini-channel, as well as testing heat transfer during counter-current flows in the mini-channels. The main interest will be focused not only on the subcooled boiling region but also on the saturated boiling region, taking into consideration the fluid flow in the hot mini-channel. In future investigations, enhanced plate surfaces will be used to verify whether they can intensify the heat transfer processes. Different plate materials will be tested. In the experiments, several working fluids with various physical properties will be applied. Further studies will also include modification of the mathematical model and application of the hybrid Picard-Trefftz method.
Conflicts of Interest: The authors declare no conflict of interest.
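As a closing illustration of the design quantities mentioned in the Results, the sketch below computes an overall heat transfer coefficient from the log mean temperature difference and a laminar Fanning friction factor for a rectangular duct. The Shah-London polynomial is a commonly used form and may differ from the paper's exact Equation (24) (note also that it takes the aspect ratio as short side over long side, whereas the paper defines K as width over depth); all numerical values are illustrative, not the data of Tables 3-4.

```python
import math

def lmtd(dT_inlet, dT_outlet):
    """Log mean temperature difference between the hot and cold streams."""
    if math.isclose(dT_inlet, dT_outlet):
        return dT_inlet
    return (dT_inlet - dT_outlet) / math.log(dT_inlet / dT_outlet)

def overall_U(Q_avg, area, dT_inlet, dT_outlet):
    """Standard definition U = Q / (A * LMTD); assumed, not the paper's exact equation."""
    return Q_avg / (area * lmtd(dT_inlet, dT_outlet))

def fanning_laminar_rect(Re, aspect_ratio):
    """Fanning friction factor for fully developed laminar flow in a rectangular
    duct (Shah-London fit; aspect_ratio = short side / long side, 0 < ar <= 1)."""
    ar = aspect_ratio
    f_Re = 24.0 * (1 - 1.3553*ar + 1.9467*ar**2 - 1.7012*ar**3
                   + 0.9564*ar**4 - 0.2537*ar**5)
    return f_Re / Re

if __name__ == "__main__":
    # Illustrative numbers only: 24 mm x 240 mm heat transfer area, laminar flow.
    U = overall_U(Q_avg=85.0, area=24e-3 * 240e-3, dT_inlet=22.0, dT_outlet=14.0)
    f = fanning_laminar_rect(Re=900.0, aspect_ratio=1.5 / 24.0)
    print(f"U = {U:.0f} W/(m^2 K), f = {f:.4f}")
```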
Squeezed and Entangled Gluon States in QCD Jet Theoretical justification for the occurrence of multimode squeezed and entangled colour states in QCD is given. We show that gluon entangled states which are closely related with corresponding squeezed states can appear by the four-gluon self-interaction. Correlations for the collinear gluons are revealed two groups of the colour correlations which is significant at consider of the quark-antiquark pair productions. Introduction Many experiments at e + e − , p p, ep colliders are devoted to hadronic jet physics, since detailed studies of jets are important for better understanding and testing both perturbative and non-perturbative QCD and also for finding manifestations of new physics.Although the nature of jets is of a universal character, e + e − -annihilation stands out among hard processes, since jet events admit a straightforward and clear-cut separation in this process.In the reaction e + e − → hadron four evolution phases are recognized by various time and space scales (Fig. 1).These are (I) the production of a quark-antiquark pair: e + e − → q q; (II) the emissions of gluons and quarks from primary partons -perturbative evolution of the quark-gluon cascade; (III) the non-perturbative evolution and the hadronization of quarks and gluons; (IV) the decays of unstable particles.The second phase of e + e − -annihilation has been well understood and sufficiently accurate predictions for it have been obtained within the perturbative QCD (PQCD) [1].But predictions of the PQCD are limited by small effective coupling α(Q 2 ) < 1 and third phase is usually taken into account either through a constant factor which relates partonic features with hadronic ones (within local partonhadron duality) or through the application of various phenomenological models of hadronization.As a consequence, theoretical predictions both for intrajet and for interjet characteristics remain unsatisfactory.For example, the width of the multiplicity distribution (MD) according to the predictions of PQCD is larger than the experimental one.The discrepancies between theoretical calculations and experimental data, for example the width of MD, suggest that the non-perturbative evolution of the quark-gluon cascade plays important role.New gluon states, generated at the non-perturbative stage, contribute to various features of jets.In particular, such a contribution to MD can be in the form of the sub-Poissonian distribution [2,3]. It is known that such property is inherent for the squeezed states (SS), which are well studied in quantum optics (QO) [4]- [6].Squeezed states posses uncommon properties: they display a specific behaviour of the factorial and cumulant moments [7] and can have both sub-Poissonian and super-Poissonian statistics corresponding to antibunching and bunching of photons. 
Therefore we believe that the non-perturbative stage of gluon evolution can be one of sources of the gluon SS in QCD by analogy with nonlinear medium for photon SS in QO.Gluon MD in the range of the small transverse momenta (thin ring of jet) is Poissonian [8].Quark-gluon MD in the whole jet at the end of the perturbative cascade can be represented as a combination of Poissonian distributions each of which corresponds to a coherent state.Studying a further evolution of gluon states at the non-perturbative stage of jet evolution we obtain new gluon states.These states are formed as a result of non-perturbative self-interaction of the gluons expressed by nonlinearities of Hamiltonian.Using the Local parton hadron duality it is easy to show that in this case behaviour of hadron multiplicity distribution in jet events is differentiated from the negative binomial one that is confirmed by experiments for pp, pp-collisions [9]- [11]. At finite squeezed parameter r a continuous variables entangled state is known from quantum optics as a two-mode squeezed state [4,14] where Ŝ 12 (r) = exp{r(â + 1 â+ 2 − â1 a 2 )} is operator of two-mode squeezing.It is not difficult to demonstrate that the state vector |f describes the entangled state.Each of these entangled states has a uncommon property: a measurement over one particle have an instantaneous effect on the other, possibly located at a large distance. The dimensionless coefficient is the measure of entanglement for two-mode states [15], 0 ≤ y < 1 (entanglement is not observe when y = 0).Here âi â+ j = âi â+ j − âi â+ j , âi , â+ j are the annihilation and creation operators correspondingly.Averaging the annihilation and creation operators in the expression (2) over the vector |f (1) at small squeeze factor we have y √ 2r. ( Two-mode gluon states with two different colours can lead to q q-entangled states.Interaction of the quark entangled states with stochastic vacuum (quantum measurement) has a remarkable property, namely, as soon as some measurement projects one quark onto a state with definite colour, the other quark also immediately obtains opposite colour that leads to coupling of quark-antiquark pair, string tension inside q q-pare and free propagation of colourless hadrons.Therefore the investigation of the gluon entangled states connected with the corresponding squeezed ones is issue of the day. Multimode squeezed states of the gluons By analogy with QO [6] a multimode squeezing condition for gluons with different colors i 1 , . . ., i p is written as where , N is a normal ordering operator, the phase-sensitive Hermitian bi j λ − ( bi j λ ) + are linear combination of the annihilating (creating) operators bi j λ ( bi j + λ ), i 1 , . . ., i p = 1, 8 are gluon color charges, λ is a polarization index.Averaging in ( 4) is performed over final state vector which describes gluon system later small time t.Operators Ĥ(3) I (t) and Ĥ(4) I (t) describing the three-and four-gluon selfinteractions include combinations of three and four annihilating and creating operators [16]: Ĥ( 4) QUARKS-2016 Here g is a self-interaction constant, d k = , k 0 is a gluon energy, ε μ λ is a polarization vector, f ahb are structure constants of SU c (3) group. 
Initial state vector |in describe gluon system at end of perturbative stage [8] and is product of the coherent states of the gluons with different colours and polarization indexes λ is fixed.Averaging the annihilation and creation operators bi j λ , bi j + λ in (4) over the evolved vector |f which is defined according to (5) and taking into account of chosen initial state vector we write the multimode squeezing condition in the form where ĤI (0) = Ĥ(3) I (0) + Ĥ(4) I (0).It can be shown that only the four-gluon self-interaction can yield the multimode squeezing effect since Ĥ(3) Indeed, the multimode squeezing condition can be written in the explicit form as In particular for the collinear gluons we have corresponding squeezing condition Here |α b λ 1 | and γ b λ 1 are an amplitude and a phase of the initial gluon coherent field, f ahb is a structure constant of the color group S U c (3).The multimode squeezing condition (12) QUARKS-2016 apart from if all initial gluon coherent fields are real or imaginary.Obviously, the larger are both the amplitudes of the initial gluon coherent fields with different colour and polarization indexes and coupling constant, the larger is multimode squeezing effect. By analogy with two-mode photon state (1) we can make corresponding gluon states with squeezed operator Ŝ 12 (r) = exp r ( bh In case of the evolved gluon system during small time t we have Two-mode squeezing condition is In particular for the collinear gluons we have corresponding two-mode squeezing condition Thus non-perturbative four-gluon selfinteraction is source of the multimode squeezing effect. Gluon entangled states One of entangled condition is 0 where entangled measure is defined by dimentionless value y (2) which in our case is Entangled condition is Obviosly the entangled condition (19) imposes greater restrictions than the squeezing condition (15).Indeed, corresponding entangled condition for collinear gluons is Obviously squeezed gluon states are simultaneously entangled if the amplitudes of the initial gluon coherent fields are small enough.Thus by analogy with quantum optics as a result of four-gluon self-interaction we obtain two-mode squeezed gluon states which are also entangled. QUARKS-2016 4 Colour correlations of the collinear gluons Second-order colour correlation function is defined as Corresponding function for the collinear gluons is It is obviously this function is sin-like function of the difference between the phases of the investigated and other colour phases of the coherent gluon states.Indeed, the colour correlation is positive at 0 < (γ b λ 1 +γ c λ 1 )−(γ h λ +γ g λ ) < π (the gluon bunching) and one is negative at 0 < (γ h λ +γ g λ )−(γ b λ 1 +γ c λ 1 ) < π (the gluon antibunching).Moreover correlation of colours h and g is defined by the structure constants f hab , f gac of S U c (3) colour group. 
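As a quick numerical illustration of the two-mode squeezed states used above (a minimal sketch in a truncated Fock space, not part of the original analysis), one can build S12(r)|0,0> explicitly and check the standard correlators from which a two-mode entanglement measure such as y is constructed; for small r the cross-correlator grows linearly in r, consistent with the small-r behaviour y ≈ √2 r quoted above.

```python
import numpy as np
from scipy.linalg import expm

# Truncated Fock space per mode; large enough for the small r used here.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # single-mode annihilation operator
I = np.eye(N)
a1 = np.kron(a, I)                               # mode 1
a2 = np.kron(I, a)                               # mode 2

def two_mode_squeezed_vacuum(r):
    # S12(r) = exp{ r (a1^+ a2^+ - a1 a2) } acting on |0,0>
    S = expm(r * (a1.conj().T @ a2.conj().T - a1 @ a2))
    vac = np.zeros(N * N)
    vac[0] = 1.0
    return S @ vac

for r in (0.05, 0.2, 0.5):
    psi = two_mode_squeezed_vacuum(r)
    n1 = np.real(psi.conj() @ (a1.conj().T @ a1) @ psi)
    corr = np.real(psi.conj() @ (a1 @ a2) @ psi)
    # Analytic values for the two-mode squeezed vacuum:
    #   <a1^+ a1> = sinh^2 r,   <a1 a2> = sinh r cosh r  (~ r for small r)
    print(f"r={r}: <n1>={n1:.4f} (sinh^2 r={np.sinh(r)**2:.4f}), "
          f"<a1 a2>={corr:.4f} (sinh r cosh r={np.sinh(r)*np.cosh(r):.4f})")
```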
Conclusion
Investigating the gluon fluctuations, we have theoretically demonstrated the possibility of the existence of multimode gluon squeezed states. The emergence of such remarkable states becomes possible owing to the four-gluon self-interaction; the three-gluon self-interaction does not lead to a squeezing effect. We have shown that the non-perturbative evolution of a QCD jet leads both to squeezing and to entanglement of gluons. It should be noted that the larger both the amplitudes of the initial gluon coherent fields with different colour and polarization indexes and the coupling constant are, the larger is the multimode squeezing effect of the colour gluons. We have demonstrated that the entanglement condition for gluon states with two colours imposes greater restrictions than the squeezing condition and is also defined by the amplitudes and phases of the initial coherent gluon fields.

Because two-mode gluon states with two different colours can lead to q q-entangled states, the role of colour correlations could be very significant for the explanation of the confinement phenomenon. The study of colour correlations for collinear gluons revealed two groups of colour correlations: the first group (h = 1, 2, 3 and g = 1, 2, 3; h = 4, g = 5; h = 6, g = 7) depends only on the distinction between the gluon polarizations, while in the second group (h = 1, 2, 3 and g = 4, 5, 6, 7; h, g = 4, 5, 6, 7, 8) the correlation behaviour is defined in addition by gluons with other colours.

Note on notation: the given gluon coherent field is written as α^b_λ1 = |α^b_λ1| e^{iγ^b_λ1}, with amplitude |α^b_λ1| and phase γ^b_λ1. By analogy with QO, the gluon coherent state vector |α^b_λ⟩ is the eigenvector of the corresponding annihilation operator b̂^b_λ with eigenvalue α^b_λ1, and can be written in terms of the gluon coherent field amplitude |α^b_λ|. In each gluon coherent state |α^b_λ⟩ the gluon number with fixed colour b and polarization λ is arbitrary; the average multiplicity of the given gluon equals the square of the gluon coherent field amplitude, n^b_λ = |α^b_λ|².
TILINGS IN TOPOLOGICAL SPACES A tiling of a topological spaceX is a covering ofX by sets (called tiles) which are the closures of their pairwise-disjoint interiors. Tilings of R2 have received considerable attention (see [2] for a wealth of interesting examples and results as well as an extensive bibliography). On the other hand, the study of tilings of general topological spaces is just beginning (see [1, 3, 4, 6]). We give some generalizations for topological spaces of some results known for certain classes of tilings of topological vector spaces. Introduction. A tiling of a topological space X is a covering of X by sets (called tiles) which are the closures of their pairwise-disjoint interiors. Tilings of R 2 have received considerable attention (see [2] for a wealth of interesting examples and results as well as an extensive bibliography).On the other hand, the study of tilings of general topological spaces is just beginning (see [1,3,4,6]). By way of notation, for a subspace A of a topological space X, the symbols A, FrA, and A • denote, respectively, the closure, boundary, and interior of A. For a collection Ꮽ of subsets of a set X, we denote by, Ꮽ and Ꮽ, respectively, the union and intersection of the members of Ꮽ.For a point x ∈ X, we define Ꮽ(x) = {A ∈ Ꮽ : x ∈ A}. The following definitions are basic to our discussion.Letbe a tiling of a topological space X.We define the set of frontier points ofto be the union of the boundaries of the tiles inand denote this set by F(-).The protected points ofconstitute the set P (-) = {x ∈ X : x ∈ ( -(x)) • } and the complement of this set in X is U(-), the set of unprotected points of -.The set of improper points ofis defined as The set of 2-protected points is P (-) 2 , the subset of protected points common to exactly two tiles. A singular point ofis a (frontier) point at whichis not locally finite (that is, a point every neighborhood of which intersects infinitely many tiles in -).The collection of all singular points ofis denoted by S(-).Finally, we define S 0 (- The relations between the sets defined above have been established in [4,6] (for topological vector spaces, but they remain valid for topological spaces) and are summarized in the following proposition.Proposition 1.1.For any tilingof any topological space X, the following inclusions hold: S(-) ∩ P (-) ⊂ S 0 (-) ⊂ S(-) ⊂ F(-) and I(-) ⊂ U(-) ⊂ S(-).Moreover, F(-) and S(-) are closed sets. None of these inclusions can be improved, as [4,Ex. 1.1] shows. Star-finite tilings. A tiling is called star-finite if each tile meets only finitely many other tiles. Two tilingsand -1 of the same space X are said to be topologically equivalent if and only if there exists a homeomorphism h : X → X such that if T ∈ -, then h(T ) ∈ -1 (thus, also if Nielsen defined in [4] four properties that a tiling in a vector topological space could satisfy.Only two of these properties have sense in topological spaces and are quoted below. A tilingof a topological space X satisfies property P 3 if and only if for each T ∈ -, x ∈ Fr T , and neighborhood U of x, there is an open neighborhood V of x, V ⊂ U, such that V − Fr T has exactly two connected components. A tilingof a topological space X satisfies property P 4 if and only if for each proper subcollection ⊂and each pair of distinct tiles T 1 and T 2 in , the set It is proved in [4] that P 3 implies P 4 and one can easily see that the proof given there for topological vector spaces remains valid for arbitrary topological spaces.The reverse implication is false. 
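For reference, the basic point classes used in what follows can be written compactly in LaTeX, with \mathcal{T} denoting a tiling of X (this symbol is assumed here) and \mathcal{T}(x) the set of tiles containing x; this restates the definitions given above and adds nothing new.

```latex
% Definitions restated with \mathcal{T} a tiling of X and \mathcal{T}(x)=\{T\in\mathcal{T}: x\in T\}.
\begin{aligned}
F(\mathcal{T})   &= \bigcup_{T \in \mathcal{T}} \operatorname{Fr} T
                   &&\text{(frontier points)}\\
P(\mathcal{T})   &= \bigl\{\, x \in X : x \in \bigl(\textstyle\bigcup \mathcal{T}(x)\bigr)^{\circ} \,\bigr\}
                   &&\text{(protected points)}\\
U(\mathcal{T})   &= X \setminus P(\mathcal{T})
                   &&\text{(unprotected points)}\\
P(\mathcal{T})_2 &= \bigl\{\, x \in P(\mathcal{T}) : |\mathcal{T}(x)| = 2 \,\bigr\}
                   &&\text{(2-protected points)}\\
S(\mathcal{T})   &= \bigl\{\, x \in F(\mathcal{T}) : \mathcal{T}\ \text{is not locally finite at}\ x \,\bigr\}
                   &&\text{(singular points)}
\end{aligned}
```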
We denote by Cond A the set of condensation points of A. The following result is the first part of [4, Thm. 3.1].

Lemma 2.2. Suppose M and N are closed subsets of a topological space such that M = Cond(M), N ⊂ M, and N is nowhere dense in M. Then M = Cond(M − N).

Theorem 2.3. If 𝒯 is a star-finite tiling, possessing P₃, of a topological space such that no connected open set can be disconnected by deleting a countable subset, then U(𝒯) = S(𝒯) = I(𝒯) = Cond(I(𝒯)).

Proof. The proof is essentially the same as that of [4, Thm. 3.4]. We only have to note that the role of the point 0 in that proof can equally be played by any other point of X, and that we can avoid the reference to [4, Lem. 3.2] (which uses the vector structure essentially) by using the additional hypotheses on X and on 𝒯 (in [4, Thm. 3.4] it is only supposed that 𝒯 is P₄) in the following way. We need to show that M ⊂ Cond M, where M is the frontier of the union of a finite set of tiles. If this were not true, then there would be a point x ∈ M such that every open neighborhood of it intersects M in a countable set of points. Let T be a tile such that x ∈ Fr T. By P₃, we have that if V is a connected open neighborhood of x, then V − Fr T = V − (V ∩ Fr T) has two components, contradicting both the fact that the set V ∩ Fr T is countable and the hypothesis on X.

Note that the topological hypothesis imposed on X is verified by every topological vector space of dimension greater than one. And the more restrictive hypothesis of 𝒯 being P₃ instead of P₄ is easier to check than the other.

Theorem 3.1. Let X be a topological space such that every closed subset of it is a Baire space in the subspace topology (for example, completely metrizable spaces). Let 𝒯 be a countable tiling of X with property P₃. Then the set

Note that Corollaries 3.2 and 3.4 have the same thesis for two noncomparable hypotheses (P₃ implies P₄, but it does not imply that F(𝒯) − I(𝒯) is of second category in F(𝒯) at each point of F(𝒯)). On the other hand, Nielsen [6, Thm. 1.8] has a topological thesis, but the vector structure is essential in the hypothesis.

Theorem 3.5. Let X be a separable normed space such that every closed subspace of it is a Baire space in the subspace topology. Let 𝒯 be a countable tiling of X such that any line that sections 𝒯 contains at most countably many points of F(𝒯). Then P₂(𝒯) is dense and (relatively) open in F(𝒯).

A careful study of the proof shows that the vector hypothesis is used once, in order to establish a lemma with a topological thesis; granted that thesis, the rest of the proof is valid for topological spaces. The lemma needs the following definition.

Definition 3.6. Let X be a topological space. A tiling 𝒯 on X is called connected if, given an open set V intersecting F(𝒯), the set of points of V common to exactly two tiles in 𝒯 is of second category in V ∩ F(𝒯).

And, hence, Theorem 2.3.12 of Nielsen's Ph.D. Thesis states the following:

Lemma 3.7. Let X be a separable normed space such that every closed subspace of it is a Baire space in the subspace topology. Let 𝒯 be a countable tiling of X such that any line that sections 𝒯 contains at most countably many points of F(𝒯). Then the tiling 𝒯 is connected.

For topological spaces, we have the following results:

Lemma 3.8. Let X be a topological Baire space, 𝒯 a connected countable tiling, and let U be an open set in X intersecting F(𝒯). Then there are two tiles T₁ and T₂ in 𝒯 and

Proof. The proof is the same as in Nielsen's Ph.D.
Thesis for the case of normed separable spaces, but as it does not appear in [3, 4], or [6], we quote it here for the sake of completeness. From the hypothesis, the set of points of V common to exactly two tiles in 𝒯 is of second category in U ∩ F(𝒯). Since 𝒯 is countable and F(𝒯) is closed (and, thus, is a Baire space by assumption), it follows from the Baire category theorem that, for some two tiles T₁ and T₂ in 𝒯, the set of points common to exactly T₁ and T₂ is not nowhere dense in U ∩ F(𝒯). Thus, there is an open set V ⊂ U intersecting F(𝒯) such that the points in V common to exactly the tiles T₁ and T₂ are dense in V ∩ F(𝒯). But T₁ ∩ T₂ is closed, so we must also have

Theorem 3.9. Let X be a topological Baire space and 𝒯 a connected countable tiling. Then P₂(𝒯) is dense and (relatively) open in F(𝒯).

Proof. Again, the proof is the same as in Nielsen's Ph.D. Thesis. That P₂(𝒯) is relatively open in F(𝒯) follows immediately from the definitions. Let U be a connected open set in X intersecting F(𝒯). We show that U contains points of P₂(𝒯), which proves the theorem. By the preceding lemma, there are two tiles T₁ and T₂ in 𝒯 and an open set

Assume, to reach a contradiction, that

Applying again the preceding lemma, we obtain a tile

This, however, contradicts the fact that E_{T₁,T₂} is dense in U ∩ F(𝒯). Thus, we must have V ∩ (F(𝒯) − F(𝒯ᵢ)) ≠ ∅. Since F(𝒯) − F(𝒯ᵢ) ⊂ P₂(𝒯), the proof is now complete.

Corollary 3.10. Let X be a topological Baire space and 𝒯 a connected countable tiling. Then S(𝒯) is nowhere dense in F(𝒯). Compare with Corollaries 3.2 and 3.4.

Facets and vertices of a tiling. To complete our study of tilings of topological spaces, we are going to consider a generalization of a result by Breen that Nielsen did not consider in the realm of topological vector spaces. Given a tiling 𝒯 of a topological space X, a facet of the tiling is a connected component of the intersection of some finite set of tiles. We say that a facet is degenerate if it is a singleton (in that case, we call that facet a vertex). In any other case, we say that the facet is nondegenerate. (In the case of countable tilings by topological discs, these definitions correspond to those of [1].) Call D(𝒯) the set of points in F(𝒯) that are not in a nondegenerate facet of 𝒯. In [1], one can find a relation between the cardinals of D(𝒯) and S(𝒯) for countable tilings of the plane by closed topological discs. We show that one of the two parts of that relation can be extended to topological spaces with weak restrictions on the tiling, and that the other is false even for countable tilings of Euclidean space by topological balls.

Theorem 4.1. Let X be a topological space and 𝒯 a tiling such that Fr T is locally connected for each tile T ∈ 𝒯, and suppose that for every T ∈ 𝒯, every x ∈ Fr T, and every neighborhood U of x, the set (U − {x}) ∩ Fr T is nonvoid. Then D(𝒯) ⊂ S(𝒯), and so |D(𝒯)| ≤ |S(𝒯)|.

Proof. Let x be a point of D(𝒯). We show that x is a singular point. Since I(𝒯) ⊂ S(𝒯) by Proposition 1.1, it is enough to show that x is an improper point. In case every neighborhood of x contains a boundary point of 𝒯 which belongs only to a tile T, one can use the above case to construct a net of singular points converging to x, which completes the proof. Now, given a point x ∈ D(𝒯), we can assume that x ∈ Fr T for a certain T ∈ 𝒯 and that, given any neighborhood U of x, every point of Fr T ∩ U belongs to at least two tiles.
Choose x₁ ∈ Fr T ∩ (U − {x}) (since the hypothesis ensures that this set is nonvoid) and a tile T₁ ≠ T with x₁ ∈ T₁. Suppose that there is some neighborhood V of x such that Fr T ∩ V ⊂ T ∩ T₁. By the hypothesis of local connectedness, we can suppose that Fr T ∩ V is connected, and so it is contained in a connected component of T ∩ T₁. Thus, Fr T ∩ U is not a singleton, which means that x belongs to a nondegenerate facet and so x ∉ D(𝒯). So for every neighborhood U of x, there is some point x₂ of Fr T ∩ V − {x} with x₂ ∉ T₁. Take again the neighborhood U and select a tile T₂ ≠ T₁, T with x₂ ∈ T₂. An induction then shows that U has points in infinitely many different tiles. Hence, x is a singular point.

This result is clearly an extension of [1, Lem. 1, Cor. 1]. One can ask if the other part of the theorem ([1, Lems. 2 and 3, Cor. 2]) can be extended to topological spaces under suitable hypotheses. The answer is completely negative, since we can construct in the following example a countable tiling of R³ by topological balls that does not satisfy [1, Cor. 2].

Example 4.2. Take the product with [0, 1] of the tiling by triangles given in Figure 1 and extend it to a countable tiling of R³ by adding cubes. All the points of the product with [0, 1] of the set of singular points of that tiling are singular in the tiling of R³, but all of them are in a nondegenerate facet.

Corollary 3.2. If X and 𝒯 are as in Theorem 3.1, the set S(𝒯) of singular points of 𝒯 is nowhere dense in F(𝒯).

Theorem 3.3. If X and 𝒯 are as in Theorem 3.1 and 𝒯 is a countable tiling of X with property P₄ and with F(𝒯) − I(𝒯) of second category in F(𝒯) at each point of F(𝒯), then P₂(𝒯) is dense and open in F(𝒯).

Corollary 3.4. If X and 𝒯 are as in Theorem 3.3, the set S(𝒯) of singular points of 𝒯 (and thus I(𝒯)) is nowhere dense in F(𝒯).
3,257.8
1999-12-01T00:00:00.000
[ "Mathematics" ]
Analyzing engineered point spread functions using phasor-based single-molecule localization microscopy

The point spread function (PSF) of single-molecule emitters can be engineered in the Fourier plane to encode three-dimensional localization information, creating double-helix, saddle-point or tetra-pod PSFs. Here, we describe and assess adaptations of the phasor-based single-molecule localization microscopy (pSMLM) algorithm to localize single molecules using these PSFs with sub-pixel accuracy. For double-helix, pSMLM identifies the two individual lobes and uses their relative rotation for obtaining z-resolved localizations, while for saddle-point or tetra-pod, a novel phasor-based deconvolution approach is used. The pSMLM software package delivers similar precision and recall rates to the best-in-class software package (SMAP) at signal-to-noise ratios typical for organic fluorophores. pSMLM substantially improves the localization rate by a factor of 2–4× on a standard CPU, with 1–1.5·10⁴ (double-helix) or 2.5·10⁵ (saddle-point/tetra-pod) localizations/second.

Introduction

Fluorescence microscopy is frequently employed in biological sciences due to its high selectivity and non-invasiveness. Conventionally, the obtainable optical resolution in fluorescence microscopy is given by Abbe's diffraction limit, which is equal to the wavelength of the light divided by double the numerical aperture of the objective (~200 nm for visible light). A multitude of techniques summarized by the term super-resolution (SR) microscopy or nanoscopy [1][2][3] have been developed, however, to obtain spatial information well below this limit. These techniques include (d)STORM (direct stochastic optical reconstruction microscopy) [4,5], PALM (photoactivatable localization microscopy) [6], SIM (structured illumination microscopy) [7], STED (stimulated emission depletion microscopy) [8], RESOLFT (reversible saturable optical fluorescence transitions) [9], SOFI (super-resolution optical fluctuation imaging) [10], SRRF (super-resolution radial fluctuations) [11] and MINFLUX (minimal photon fluxes localization microscopy) [12]. Single-molecule localization microscopy (SMLM) is the subcollection of super-resolution techniques in which the fluorescent emission profile, ordinarily referred to as a point spread function (PSF), of a single fluorophore is localized with a precision (~5–40 nm) that can exceed the classical resolution limit by more than one order of magnitude [13][14][15][16]. SMLM is therefore an integral part of STORM and PALM, and has been extensively used in biological research [17][18][19], for example to study DNA transcription [20,21], CRISPR-Cas DNA screening [22][23][24], nuclear pore complexes [25,26], and microtubules [27]. In a conventional fluorescence microscope, a PSF from a single emitter in focus resembles an Airy pattern, which can be approximated by a 2-dimensional Gaussian function. This approach has been the basis of the earliest localization algorithms [13,16,28], which allow for determination of the emitter locations [16] as long as overlapping of PSFs is negligible. Besides Gaussian-based methods, these symmetrical PSFs have been analyzed via other mathematical frameworks, such as radial symmetry [29], cubic splines [30], or phasor (Fourier) analysis [31]. The shape of the PSF quickly deteriorates, however, if the emitter is out of focus (~100s of nm), leading to both a limited available axial range and inaccessibility of the absolute axial position [32].
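To make the phasor approach mentioned above concrete before turning to engineered PSFs, the following minimal sketch localizes a single in-focus emitter from the phase of the first Fourier harmonics of a square ROI. It is an illustrative Python/NumPy approximation written for this text (not the MATLAB implementation of the pSMLM/SMALL-LABS packages cited here), and it assumes a single emitter on a low background centred within the ROI.

    import numpy as np

    def phasor_localize_2d(roi):
        # Sub-pixel (x, y) estimate from the phase of the first Fourier
        # harmonic along each axis of a square ROI (columns = x, rows = y).
        n = roi.shape[0]
        f = np.fft.fft2(roi)
        ang_x = np.angle(f[0, 1])   # phase encodes the position along x
        ang_y = np.angle(f[1, 0])   # phase encodes the position along y
        x = (-ang_x % (2 * np.pi)) * n / (2 * np.pi)
        y = (-ang_y % (2 * np.pi)) * n / (2 * np.pi)
        return x, y

    # Quick self-test with a synthetic Gaussian spot centred at (4.3, 6.7):
    yy, xx = np.mgrid[0:11, 0:11]
    spot = np.exp(-((xx - 4.3) ** 2 + (yy - 6.7) ** 2) / (2 * 1.3 ** 2))
    print(phasor_localize_2d(spot))   # approximately (4.3, 6.7)

The magnitudes of the same first harmonics shrink as the PSF broadens, which is the property that the astigmatism and circular-tangent variants described below build on. Such symmetric-PSF localization, however, inherits the limited axial range noted above.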
Therefore, a variety of methods have been developed to modulate the shape of the PSF depending on the emitter's axial position [33]. Historically, the first method (astigmatism; AS) introduced a cylindrical lens in the emission pathway to create ellipsoid PSFs if the emitters are out of focus [34,35]. The extent of the deformation along with its orientation allows for determination of the axial position after a calibration procedure, and fitting of these PSFs could usually be performed by adaptations of the localization algorithms used for 2D PSFs [28,31,36]. However, the available axial range of astigmatism is limited to less than ~1 µm, which led to the development of more advanced PSF shaping procedures that involve modulating the light in the pupil (Fourier) plane. Using a spatial light modulator (SLM), the principle was first employed to create a double-helix (DH) pattern, in which the PSF is split into two separate lobes that non-degeneratively rotate around each other based on the emitter's axial position, resulting in a usable axial range of up to 2.5 µm [37]. Later, the same group theoretically maximized the information content of PSFs, resulting in the saddle-point (SP) or tetra-pod (TP) designs, which are suitable for 3 µm (SP) or ≥6 µm (TP) axial ranges [38,39]. PSFs for both SP and TP are altered in the Fourier plane via a phase mask [38,39] or deformable mirror [40]. Determining the sub-pixel positions corresponding to the emitters via DH, SP, or TP PSFs, however, is more challenging than for symmetric or AS PSFs, as fitting with a single 2D Gaussian is insufficient. The current state-of-the-art fitting algorithms [41] rely on phase-retrieval methods [40,42] or spline interpolation [26] to determine a PSF model based on calibration samples. A high-resolution PSF model can then be determined from these models, which is fitted on experimental data. These methodologies can work with arbitrarily shaped PSFs, including DH, SP and TP. However, these methods are computationally expensive and thus time-consuming. Recently, real-time fitting of experimental PSFs has been achieved using graphical processing units (GPUs) [26], but this has not yet been achieved on central processing units (CPUs), which would increase the accessibility and might allow implementations directly on the camera hardware. Here, we show fast retrieval of DH (1.5·10⁴ loc/s) and SP/TP (2.5·10⁵ loc/s) PSF localizations on a standard CPU via novel adaptations of the phasor-based single-molecule localization microscopy (pSMLM) algorithm [31]. We first explain the underlying methodology for DH and for SP/TP, termed circular-tangent (ct-)pSMLM, and then explore the performance of the methods by analyzing simulated and experimental data. We have implemented all pSMLM versions (2D, AS, DH, SP/TP) in a recently published software package (SMALL-LABS [43]), resulting in user-friendly and open-source software to quickly perform sub-pixel localization, including advanced background filtering options.

Software and hardware

All software was written and run in MATLAB (MathWorks, UK) version 2018b on a 64-bit Windows 10 computer equipped with an Intel i5-8600 CPU @ 3.10 GHz and 16 GB RAM.

SMALL-LABS software

Our software package expands the original SMALL-LABS software [43] in several ways. Firstly, we added the original pSMLM-3D algorithm for 2D or astigmatism PSF sub-pixel localization, as well as the novel variations discussed in this manuscript. Next, a custom GUI was written to increase user accessibility.
Lastly, the pre- and post-processing options are expanded with wavelet filtering [44], cross-correlation drift correction in three dimensions [45], and average shifted histogram result image generation [46,47].

Saddle-point PSF simulations

PSF simulations have been performed as described earlier [16,31] with NA = 1.25, emission light at 500 nm, 100 nm/pixel camera acquisition and 1000 PSFs for every intensity/noise combination. We used a full vectorial model of the PSF needed to describe the high-NA case typically used in fluorescent super-resolution imaging. The center of the PSF is located within ±1 pixel of the center of the image. Zernike polynomials for primary astigmatism and secondary astigmatism are introduced in a 0.5:−0.65 ratio [40], and z-positions were chosen randomly between −1.5 and +1.5 µm away from the focal plane.

Sub-pixel localization of single-molecule data

For double-helix (DH) sub-pixel localization, the four datasets from the 2016 SMLM challenge [41] were analyzed, which use experimental PSF models. These datasets differ in signal-to-noise (SNR) values ('high SNR', which mimics Alexa647 fluorophores, and 'low SNR', which mimics fluorescent proteins) and in emitter density ('low density' (LD) at 0.2 loc/µm² and 'high density' (HD) at 2 loc/µm²). For double-helix localization, the following settings were used. For SMALL-LABS-pSMLM-DH, the temporal window length and the minimum duration of fluorophore on-time before it is discarded were both set to 150 frames. Filtering for region-of-interest (ROI) finding was performed with a β-spline wavelet filter with the threshold set to 1.9 times the standard deviation of a filtered frame. Single-lobe DH location was performed with a phasor radius of 4 pixels (low density) or 2 pixels (high density). The z-position was calculated via a calibration with identical phasor radius. For the SMAP software with fit3dSpline sub-pixel localization [26], a calibration was performed with a 33 x 33 pixel ROI. Then, localizations were identified via a mean calibrated PSF, with a 2.9 pixel Gaussian blur, using all calibrated z-positions. A threshold set to an absolute cutoff value of 86 (high SNR), 76 (low SNR, LD), or 29 (low SNR, HD) photons was used. The calibrated spline PSF was fitted with a 15 x 15 pixel ROI. Then, localizations with relative log-likelihood lower than −2 (low density) or −5 (high density) were discarded. For the localization of simulated saddle-point PSFs, we note that, in order to prevent localization artefacts at specific z positions for ct-pSMLM (SI fig 1), a large ROI (> ~1.2× the maximum distance between the lobes) had to be used to determine the lateral localization accurately, while a smaller ROI (~2.3 x 2.3 µm) was required for accurate axial localization, as ct-pSMLM with a large ROI cannot accurately describe the axial position around the focus. Then, all PSFs were localized directly with ct-pSMLM as described in the results section or via the SMAP software with fit3dSpline sub-pixel localization [26]. For ct-pSMLM, we used a 23 x 23 pixel ROI to calculate the z-position and a 43 x 43 pixel ROI to calculate the x and y position, and a 4th-order polynomial was used to fit the calibration curve. For SMAP, calibration was performed with a 15-px Gaussian blur to find PSFs, a 51 x 51 pixel ROI, and 10 nm axial distance between every localization. For localization, a 10-px Gaussian blur was used to find PSFs, with a threshold of 20.
A 41 x 41 pixel ROI spline fitting with 100 iterations based on the calibrated data was used to localize the PSFs. Localization of experimental saddle-point data was performed with the SMALL-LABS-pSMLM software package. A median background subtraction with a temporal window length of 150 frames and a minimum duration of 100 frames before fluorophores are discarded was used. Localizations were identified via a bandpass filter with a 95th-percentile threshold. Potential lobes of saddle-point point spread functions were identified with a 3 pixel radius ROI 2D phasor fitting routine. Ct-pSMLM fitting was then performed with an 11 pixel radius ROI around the center of localizations. Calibration was performed using simulated point spread functions at varying z positions, consisting of 5000 photons on a noiseless background, with deformations similar to experimental data. Three-dimensional cross-correlative drift correction was performed via the SMALL-LABS-JH software, with 10 lateral subpixels and 10 temporal bins. The average shifted histogram image was created using ThunderSTORM [46], using 50 nm axial bins and 10 lateral subpixels.

Assessment of localization performance

For double-helix (DH), localizations between ground-truth (GT) and software 1 (S1; SMALL-LABS-pSMLM-DH) and between GT and software 2 (S2; SMAP with fit3dSpline) are linked on a frame-by-frame basis, with a maximum allowed lateral distance of 250 nm and a maximum allowed axial distance of 500 nm. The median offset between GT and S1 and between GT and S2 is calculated and subtracted from the S1 and S2 datasets, to avoid introducing consistent offset errors in the RMSE calculations. The linking of localizations between GT and S1/S2 is repeated, as localizations can be shifted in/out of the maximum linking distance due to the median offset. Of this linked dataset, the Jaccard index is calculated as JACC = TPo / (TPo + FP + FN), where TPo, FP, and FN are the numbers of true positive, false positive, and false negative localizations, respectively. Then, only localizations that are present in all three datasets (GT, S1 and S2) are selected, and of these localizations, the root mean square error (RMSE) in a single dimension is calculated as RMSE = sqrt( (1/n) · Σᵢ (pᵢ,S − pᵢ,GT)² ), where pᵢ indicates the position of localization i in that dimension, n is the number of selected localizations, and S indicates S1 or S2. For saddle-point (SP), 1000 PSFs for every signal-to-noise combination were simulated (section 2.3), after which a calibration curve was created in SMALL-LABS-ct-pSMLM or SMAP with PSFs containing 3·10⁴ photons on a 1 photon/pixel background. For SMAP localizations, obtained localizations well outside the expected regime (10 pixels or further removed from the center) were discarded, and frames containing multiple or no localizations were fully discarded. Note that localizations obtained with SMAP that were clearly misfitted (an offset in z by at least 3 times the average z offset calculated by ct-pSMLM) were discarded; no such discarding was performed for ct-pSMLM. For both ct-pSMLM and SMAP, the x, y and z positions were compared with the ground truth, and the standard deviation of this offset was calculated for every intensity and background combination and is shown in the results. We note that the mean of the offset was centered around 0 for every tested intensity/noise/software combination.

Single-molecule microscopy

For SMLM experiments, we used a home-built super-resolution microscope similar to one reported previously [24].
Briefly, light from a fiber-coupled 642 nm laser (Omicron, Germany) was collimated using an achromatic lens (f = 30mm, Thorlabs) and conducted to a parabolic mirror (RC12APC-P01, Thorlabs). The laser light was then focused using an achromat lens (f = 150mm, Thorlabs) in front of a polychroic mirror (ZT532/640rpc, Chroma) into the backfocal plane of an 100x oil-immersion objective (CFI Plan Apo, NA = 1.45, Nikon Japan) such that a highly inclined illumination (HiLo) profile with a total laser power of ~70 mW was achieved. Emitted fluorescence passing the objective, the polychroic mirror and a bandpass filter (ZET532/640m-TRF, Chroma) was then guided into a 4f geometry using the following lenses ( STORM experiment A SAFe sample containing immobilized Cos7 fibroblasts from Green African Monkeys (ATCC) with Alexa Fluor 647 labeled tubulins was purchased from Abbelight (Paris, France). A nitrogen-flushed buffer containing 50 mM TRIS pH8, 10 mM NaCl, 10% glucose, 50 mM 2-mercaptoethanol, 68 µg/mL catalase, and 200 µg/mL glucose oxidase [27] was added to the sample chamber which was sealed off before the measurements. 60.000 frames of 20 ms length were recorded using the setup described in section 2.6. Analysis of the singlemolecule data was performed as specified in section 2.4. Principles of engineered PSF localization with pSMLM: Double-helix: DH-pSMLM To localize double-helix (DH) PSFs, we rely on the fact that pSMLM-2D provides accurate lateral localization even when using a relatively small ROI around the center of an emitter [31]. Therefore, the two lobes rotating around each other (Fig. 1a) can be localized separately. During calibration, the distance and rotation between the two lobes is plotted against the axial position (Fig. 1b). The rotation is fitted with a third-order polynomial. This polynomial is weighted on the inverse of the standard deviation of each axial position if more than one calibration bead has been used. The lateral position is calculated as being the average lateral position of the two lobes, corrected for a 'wobble' factor. This wobble factor is determined in x and y as function of the emitters axial position (Fig. 1c) by comparing the lateral localization at all axial positions with the lateral localization at the axial center of the calibration dataset. The average of this wobble effect over an axial sliding window (user-defined, default value is set to 5 axial positions) is determined during calibration and stored for future correction of lateral localization calculation (Fig. 1d). To extract positional information, first a standard pSMLM-2D fitting is performed [31]. The localizations in each frame are compared with each other to find pairs within the expected distance regime (determined during calibration; minimum and maximum of distance between lobe centers, with a ~10% error margin), and are discarded if no pair can be found. During the linking of the lobes, priority is given to lobes that only have a single possible counter-lobe over those that have multiple options to reduce mis-fitting of closely positioned DH PSFs. The axial position is then determined from the rotation of the two lobes via the calibration curve. The obtained distance between lobes is checked against the distance determined during calibration at the found axial position, and the localization is discarded if these values differ more than ~100 nm (user defined). 
Lastly, the lateral position is determined from the mean of the 2D-determined position of the two lobes, and corrected for the wobble determined during calibration (Fig 1d). 3.2 Principles of engineered PSF localization with pSMLM: Saddle-point and tetra-pod: ct-pSMLM We analyze saddle-point (SP) and tetra-pod (TP) PSFs with an adapted phasor-based localization methodology. SP and TP have similar characteristics and show separation of a single point when in focus into two lobes above and below the focus in perpendicular directions [38,39]. Moreover, they are based on similar PSF deformations introduced by primary and secondary astigmatism Zernike coefficients [40]. We modified a spectral phasor-based approach [48] in which the convolution of arbitrary profiles in real space is a linear combination of their respective phasor representations in phasor space. In this approach, the normalized intensity ratio between the original profiles in the convoluted profile (real space) is represented as the distance of the original phasor profiles to the convoluted phasor profile (phasor space). This entails that if two profiles are combined with a 1:1 ratio, the convoluted phasor representation is on the mid-point of the line between the phasor representations of the original profiles. In SP and TP PSFs, the final spatial representation of the PSF is a convolution of two separated lobes of identical intensity. Thus, SP and TP PSFs can be treated as a 1:1 ratio of arbitrary profiles that are separated at a varying distance, depending on the axial position of the emitters. Note that by orientating the respective optical components correctly, this separation can be achieved perfectly on the xor y-axis. Therefore, the value for the separation dlobes, along with the orientation of this separation provides suitable information for calibration of SP and TP PSFs. To determine the separation dlobes with phasor-based singlemolecule localization microscopy (pSMLM), we assume that the width of the individual lobes in the direction of the convolution is identical to the width of the convoluted PSF in the other, unconvoluted spatial direction. For illustration, we show a combination of two 2-dimensional Gaussian distributions (Fig. 2a,b,c). The phasor representation of the individual Gaussian distributions is represented by a single phasor for both dimensions, each having a certain, but different, angle representing the emitter's position in real space [31]. Then, if reasoned from the convoluted PSF (Fig. 2c) to obtain the individual lobes, the tangent at the magnitude circle in the convoluted spatial dimension (broad spatial dimension; small phasor magnitude; black cross located on the red circle in Fig. 2c) will intersect the magnitude of the smaller spatial dimension (large phasor magnitude; represented as a blue circle) at two points ( Fig. 1c; magenta and orange dots). These points are a measure for original arbitrary profiles with identical spatial sizes in both dimensions that combine in a 1:1 intensity ratio to result in the convoluted profile. The angle between these two obtained intersectional points (θlobes) in phasor space is a direct normalized value for the distance dlobes in real space (Fig. 2c). We call this method circular-tangent pSMLM (ct-pSMLM). The obtained dlobes is used to create well-defined calibration curves for both SP and TP (SP shown in Fig. 2d,) that can be fitted with arbitrary functions (e.g. 
a fourth-order polynomial) to deduce axial positional information from experimental PSFs (Fig. 2e). The lateral localization information of the SP or TP PSFs is still inherently present in the original phasor-representation of the complete PSF. Determining localization of the SP or TP PSFs in the SMALL-LABS-pSMLM software consists of two parts: finding the central positions and further analysis with ct-pSMLM. The mid-points of single PSFs are determined by first checking whether two detected emitters that could represent two lobes of a single SP/TP PSF belong to the same PSF. If these emitters have little deviation in one dimension (<0.5 px) and are slightly separated in the other dimension (less than the calibrated maximum distance), the mid-point of these emitters is calculated and stored. If no other lobe can be found, it is assumed the located emitter is the mid-point of the SP/TP PSF. Then, ct-pSMLM is performed around the central point with a reasonably large region of interest (> 2 µm) to obtain dlobes and to calculate the axial position. Double-helix To evaluate the performance of DH-pSMLM, we performed fitting of simulated datasets [44] via the full pSMLM-updated SMALL-LABS software package and compared with the currently best performing non-machine learned localization algorithm (experimental PSF spline fitting methodology incorporated in SMAP [26]). As the ground truth of these datasets is publicly available, we were able to extract (Table 1) quantitative performance parameters such as the expected deviation of localization accuracy in all three dimensions (root mean squared error, RMSE) and the Jaccard index JACC (a measure for correctly and incorrectly localized particles [41]). These performance parameters are calculated from localizations that were found in both software packages and in the Ground-Truth datasets. We observe that both SMALL-LABS-pSMLM and SMAP have comparable RMSE errors (Table 1) in the order of 10-25 nm for the low density (LD), high signal to noise (SNR) dataset which are similar to the ones reported previously for SMAP [26,41]. However, at low signal to noise levels, SMAP outperforms SMALL-LABS-pSMLM on all performance indicators. This is presumably due to SMAP using the full PSF at once, while SMALL-LABS-pSMLM splits localization in two steps. This results in SMALL-LABS-pSMLM working with a lower apparent signal to noise level, causing a lower localization accuracy. We note that the reported RMSE values for SMAP analysis of the LD, low SNR dataset are counter-intuitively better than those of SMAP analysis of the LD, high SNR dataset. This is a result of the RMSE calculation methodology used (Material and methods), as only localizations that are found in both software analyses as well as in the ground-truth are used for RMSE calculations. We observe that SMALL-LABS-pSMLM outperforms SMAP in terms of localization recall rates (Jaccard index, Table 1) at high SNR (23% increased), but not at low SNR (4% decrease). The Jaccard values for SMAP are slightly lower than reported earlier [41] (Material and methods), but can be compared directly with the Jaccard values for SMALL-LABS-pSMLM reported here. We note that no background subtraction is performed in SMAP, while SMALL-LABS-pSMLM subtracts the background based on foreground temporal variations (SMALL-LABS [43]). 
Both software packages are not capable of recognizing HD PSFs with a good recall rate, although SMALL-LABS-pSMLM outperforms SMAP in all conditions, as single DH lobes are localized with only 5 x 5 pixel ROIs, decreasing the influence of other nearby emitters. SMALL-LABS-pSMLM is ~3–4× faster compared to SMAP for low-density datasets, and ~2× faster for high-density datasets. However, most analysis time for SMALL-LABS-pSMLM (~65%) is spent on the background correction and format conversion rather than approximate localization or sub-pixel DH-pSMLM localization (Supplementary Table 1). The localization procedure itself can achieve 1–1.5·10⁴ localizations per second on a standard CPU.

Saddle-point and tetra-pod

The performance of the localization of saddle-point (SP) PSFs was assessed and compared to experimental PSF spline fitting [26]. As ct-pSMLM is a non-iterative method, high localization rates of up to 2.5·10⁵ localizations per second were achieved on standard CPUs (Fig. 3a). This is an order of magnitude lower than traditional pSMLM-3D [31], mostly due to the large required region of interest around the PSF (>2 µm; here 23 x 23 px), and partly due to the additional computations required for ct-pSMLM. Taken alone, the additional computations of ct-pSMLM compared to pSMLM-3D only result in a 10–40% decrease in localization rates (~10% for a large region of interest of 23 x 23 px; ~40% for 7 x 7 px). The lateral localization accuracy of ct-pSMLM is in line with that of experimental PSF spline fitting (Fig. 3b), and decreases from ~100 nm (~1 pixel) at typical photon counts for fluorescent proteins (~200–1300) to ~10 nm (~0.1 pixels) at typical photon counts for organic fluorophores (~(2–11)·10³). The localization accuracy is roughly one order of magnitude worse than the lateral localization accuracy of non-engineered PSFs at high photon values (~0.08 pixels and ~0.01 pixels, respectively [31]), and roughly 1.5× worse than AS PSFs (~0.05 pixels [31]), caused by the lower effective signal-to-noise ratio due to the expanded PSF. We observed a lower limit in lateral localization accuracy for SMAP fitting of ~10 nm (~0.1 pixel), which has an unknown origin. The axial localization accuracy of ct-pSMLM likewise improves with increasing photon counts (Fig. 3c). The average axial accuracy at typical photon values for organic fluorophores is ~40 nm, which is ~3× worse than AS PSFs with similar total photon counts and photons/pixel background [31]. The best obtainable axial accuracy is limited by the sub-optimal fitting of the calibration curve to around 11 nm, which is similar to AS PSFs (Fig. 3c, SI fig 2, [31]). We attribute this lower axial accuracy of SP PSFs compared to AS PSFs again to lower effective signal-to-noise ratios due to expanded PSFs. Up to 20% of SMAP-localized emitters had to be discarded from calculating the z offset, as these were substantially misfitted (>3 times the corresponding ct-pSMLM z offset, see Methods). We furthermore demonstrate the implementation of ct-pSMLM in SMALL-LABS-pSMLM by analyzing an experimental STORM dataset showing labelled microtubules of a monkey cell line (Fig. 3d, Material and methods). The total analysis time for the SMALL-LABS-pSMLM analysis of the ~8 GB, 60,000-frame dataset containing 1.5 million localizations was ~15 minutes on a standard CPU, including file conversion (Supplementary Table 1), of which the ct-pSMLM sub-pixel fitting routine comprised just 77 seconds.
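As an illustration of the circular-tangent construction described in section 3.2, the sketch below converts the normalized first-harmonic phasor magnitudes of a saddle-point/tetra-pod ROI into the lobe-separation angle θlobes. It is again an illustrative Python/NumPy approximation (not the released MATLAB code) and assumes a single PSF centred in a square ROI with its two lobes split exactly along the x axis.

    import numpy as np

    def ct_phasor_angle(roi):
        # theta_lobes from the phasor magnitudes of a square SP/TP ROI.
        # x (columns) is taken as the convolved, broad dimension and
        # y (rows) as the unconvolved, single-lobe dimension.
        f = np.fft.fft2(roi)
        total = np.abs(f[0, 0])
        r_broad = np.abs(f[0, 1]) / total    # small magnitude: two lobes along x
        r_narrow = np.abs(f[1, 0]) / total   # large magnitude: single-lobe width
        # The tangent to the inner (broad) circle intersects the outer (narrow)
        # circle at two points subtending an angle 2*arccos(r_broad / r_narrow);
        # this angle is a normalized measure of the lobe separation d_lobes.
        ratio = np.clip(r_broad / r_narrow, 0.0, 1.0)
        return 2.0 * np.arccos(ratio)

During calibration, θlobes values computed in this way at known z positions would be fitted with, for example, numpy.polyfit (a fourth-order polynomial, as in section 2.4), and that calibration curve would then be used to read out the axial position of experimental PSFs.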
Discussion

Here we present additions to the phasor-based single-molecule localization microscopy (pSMLM) framework to localize double-helix (DH), saddle-point (SP), and tetra-pod (TP) PSFs with very good accuracy and speed on standard CPUs. In the current implementation, DH-pSMLM can achieve up to 1.5·10⁴ localizations/second on a 3.10 GHz processing unit, while ct-pSMLM, the basis for SP and TP localization, can achieve up to 2.5·10⁵ localizations/second. Specifically, ct-pSMLM is well suited for real-time localization, combined with computationally inexpensive filtering and background subtraction methods, to better enable (automated) feedback-oriented SMLM instrumentation. Possibly, a pSMLM-based methodology could be implemented on the integrated circuits of cameras to further increase end-user accessibility of advanced single-molecule techniques. The DH-pSMLM implementation in the SMALL-LABS software performs similarly to the current state-of-the-art methods when using organic fluorophores, while decreasing the overall analysis time when run on a CPU. We note that our implementation is not particularly sensitive to overfitting, as stringent constraints on the pair-finding are used. This allows for some false positives during the initial localization steps, which are later discarded. Ct-pSMLM improves on our previous phasor implementation [31] by offering a direct way of determining the distance between two emission peaks of a single PSF, well suited for the quantification of the axial position in SP and TP PSFs. Here, perfect horizontal and vertical elongation of the PSFs is a requirement for ct-pSMLM to perform. Our algorithm is capable of retrieving the emitter's location with a precision similar to the current best non-machine-learning localization algorithm [41], and is mostly limited by the fitting of the calibration curve. Naturally, for ct-pSMLM to work correctly, no other emitters or highly inhomogeneous background should be present in the fitting region. As SP and TP PSFs require large ROIs (~23 x 23 px), this results in a substantially lower accessible emitter density compared to approaches using standard and astigmatic PSFs. For high-density engineered PSF localization, we point to alternative approaches such as deep learning [49,50] or matching pursuit [51]. We incorporated the novel pSMLM-derivative localization methodologies in the SMALL-LABS software [43]. The updated SMALL-LABS-pSMLM software package expands the original work with a user-friendly GUI, wavelet filtering, drift correction in 3D, and result image generation. We believe that the software package strikes an excellent balance between fast analysis, accurate results, experimental freedom, good expandability, and hassle-free installation and operation. The software is freely available at https://github.com/HohlbeinLab/SMALL-LABS-pSMLM.
6,493.2
2020-04-16T00:00:00.000
[ "Physics" ]
Fictionalism of Anticipation A promising recent approach for understanding complex phenomena is recognition of anticipatory behavior of living organisms and social organizations. The anticipatory, predictive action permits learning, novelty seeking, rich experiential existence. I argue that the established frameworks of anticipation, adaptation or learning imply overly passive roles of anticipatory agents, and that a fictionalist standpoint reflects the core of anticipatory behavior better than representational or future references. Cognizing beings enact not just their models of the world, but own make-believe existential agendas as well. Anticipators embody plausible scripts of living, and effectively assume neo-Kantian or pragmatist perspectives of cognition and action. It is instructive to see that anticipatory behavior is not without mundane or loathsome deficiencies. Appreciation of ferally fictionalist anticipation suggests an equivalence of semiosis and anticipation. Introduction opportunistic forwards seeking to beat offside traps. Coaches do much anticipation work as well. And then there are expectations of football fans around the globe. At the same moment as Hirving Lozano scored a goal against Germany on June 17, 2018, seismic stations in the Mexico City registered a small earthquake (Semple and Villegas 2018). Plausibly, it was caused by jubilating fans in the city. How else can a ball kick in a Moscow stadium cause a geological event on other side of the globe, but by powers of captive anticipation? The FIFA World Cup illustrates that anticipation is a key feature of masterly performance, better life experiences and grand scale coordination. It is indispensable for vigorous economy and functional society. An ambitious academic view is emerging that anticipation, broadly understood, is a fundamental attribute of biological life, cognition, artificial intelligence, and even of emerging, self-organizing natural phenomena beyond mechanical matter interactions. Certain universality of anticipation is noticed by Poli (2010): "... the major surprise embedded in the theory of anticipation is that anticipation is a widespread phenomenon present in and characterizing all types of realities. Life in all its varieties is anticipatory, the brain works in an anticipatory way, the mind is obviously anticipatory, society and its structures are anticipatory, even nonliving or non-biological systems can be anticipatory." The growing interest in broad studies of anticipation is evident (Nadin 2016;Poli 2017). Nasuto and Hayashi (2016) write: "... anticipation is an emerging concept that can provide a bridge between both the deepest philosophical theories about the nature of life and cognition and the empirical biological and cognitive sciences steeped in reductionist and Newtonian conceptions of causality." According to Nadin (2016: 283), anticipation is "a definitory characteristic of the living". This echoes Rosen's (1985) distinction between simple, mechanical systems and complex, living systems. Similarly, predictive coding (Clark 2013;Pezzulo et al. 2018) and active inference (Friston et al. 2016) are key features of cognitive and biological processes in their free energy formalization (Friston and Stephan 2007;Ramstead et al. 2018). Working definitions of anticipation in academic literature (Poli 2017: Ch. 1) refer either to future prediction (Poli 2010), or to representation of self and the environment (Rosen 1985). 
These definitions do not mention fictionalist aspects as a conspicuous feature of anticipation. According to the linguistic definition (Matti 2019), fictionalism accepts statements of a discourse not as literal truth but as useful fiction of some sort. Similarly, I see anticipatory cognition as having a pragmatic heuristic rather than rigidly representational character, and as generally resilient to possible and inevitable errors. As an alternative condition to belief or disbelief, anticipation is compellingly understandable in fictionalist terms. This article constitutes a primer introduction to the overlooked fictionalist facets of anticipation and their deep going implications. It is worth mentioning that fictional expectations in economics are accentuated by Beckert (2013). The fictional character of anticipation is demonstrated amply by the current COVID-19 pandemics that causes huge disruptions in the global economy, travel, sports events, and thereby reveals the regular expectations as fictitious plans at heart. Ingrained routines became unsettled or counterproductive. Bryant's (2020) early philosophical essay on the pandemics is a good accompaniment to this article. I highlight two fictionalist aspects of anticipation that appear to counter the leading contemporary paradigm of cognition based on predictive coding (Friston et al. 2016). Firstly, anticipatory action includes not only exciting possibilities of learning, novelty seeking, rich experiential existence, but also mundane or even repellent facets such as prejudiced behavior and stressful reactiveness. If human judgment can be patently biased, fallible and irrational (Kahneman 2011), more primitive forms of anticipation can be expected to be even more superficial, fallacious, crude. With a contrasting reference to behavioral economics (Minton and Kahle 2014), the ambitious thesis of predictive coding that cognitive and living systems are effective probabilistic prediction machines is comparable to rational choice theory (Gilboa 2010). Secondly, I argue that the established frameworks of anticipation, prediction, autonomy still under-appreciate active, generative drives whereby anticipating beings seek to fulfill or impose their existential agendas. The frameworks of representation, predictive coding, and autopoiesis (Maturana and Varela 1980) portray a reactive, stasis-oriented manner of observation, learning and adaptation. Even the time-centered approach (Poli 2010) has a flavor of reactiveness to future. But anticipation can be spatial as well, as in venturing to new locations or encountering new objects. New experiences and exploits are often attained by own new behaviors, improvised persistence. Complementarily to the approach of enactive embodiment (Varela et al. 1991) of the environment, cognizing anticipators effectively seek to enact their destined actions in the world. The next section reappraises the scope of observed anticipatory behavior, including mundane or loathsome manifestations. Section "Philosophical Parallels" defines the emergent fictionalist stance of anticipators, and finds similitude in several philosophical currents, particularly in the Kantian synthetic a priori categorization and American pragmatism. Vaihinger's "The Philosophy of As If" (Vaihinger 1935) and Santayana's "Scepticism and Animal Faith" (Santayana 1955) match well with the anticipatory fictionalism in complementary ways. 
Section "Existential Agenda, Feral Anticipation" gives key definitions of anticipatory plots, existential agendas, discusses formalization of anticipation itself, and touches on causal powers of feral anticipation. Section "Logistics and Mythology" contrasts entrenched, dependable plots of functional anticipation with indefinite, uncertain scripts. This localizes applicability of the stronger mythological language. Section "Embodiment and Semiosis" explicates embodiment and semiotic unfolding of anticipations and existential agendas. The last section underscores broad significance of fictionalism. The Scope of Anticipation Fragility and forcefulness of being alive constitute a subtle polarity. On the one hand, the environment is ever changing and rudimentarily unpredictable. There is no certainty that an acorn will turn into an oak tree. At best, an acorn effectively anticipates favorable conditions for appropriate employment of its nutty nutrients and DNA guidance. Even animals have objectively limited control over own fates. Some of their maturation phasessuch as winning a duel for status, finding a sexual partnerare only roughly determined by the fixed biochemical mechanisms or scenarios. The whole trajectory of the Aristotelian telos of a living being depends on many things going right, sometimes sporadically and extraordinarily right. In a sense, an organism lives in anticipation of favorable luck and certain outside help. On the other hand, organisms act powerfully on the environment. Fulfillment of anticipation is followed by resolute activity that intervenes in the ambient dynamics of the environment and own organic development. In aggregate, the biosphere changes the geology and the atmosphere of the Earth. Representational models of anticipation capture this polar dynamics poorly. Rosen (1985: §6.1) defined an anticipatory system as a natural system that contains an internal predictive model of itself and of its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant. This presupposes significant cognitive capacities that normally require a brain. The advance from prediction to action at an instant is not clear; say, how does a predicted scenario lead to a decision when the scenario is unfavorable? Rosen's formal structure of anticipatory modeling is particularly inapplicable to the animal behavior in predator-prey races, where the action is very fast, hardly predictable, contingent on accidental features of the environment, and the outcome is uncertain. Organisms cannot have a comprehensive model of the environment and its possible changes. Instead, an organism works from its Umwelt (von Uexküll 1957; Kull 2010), i.e., its functionalist-semiotic view of the environment (and itself). A living being filters the perceived environment for existential necessities, threats and affordances (Gibson 1966). Action is triggered by rather few cues out of a mass of environmental information. For an example, consider seasonal phenological cycles (Schwartz 2003;Forrest and Miller-Rushing 2010), particularly the spring revival. They constitute webs of anticipatory attentions, responses and influences without any organism apprehending wholly its environs. To appreciate the scope of anticipation, we should recognize it in mundane, commonly failing, or even loathsome forms as well. Examples in human social contexts are: stereotypes, prejudice, superstition, strong first impressions, adoration of leaders. 
These anticipations determine human behavior to a larger extent than rational thinking. Comparable anticipations in the biological world are checked perhaps only by natural selection. A different example is the physiological stress response (Sapolsky 1994). For most animals, it is an episodic anticipatory reaction to adverse environmental conditions. But it is chronically triggered in the modern human life with harmful effects on health. On the other hand, higher levels of existence beyond being mere matter require determined anticipation, in a sense. Just being alive is inherently an anticipation of further favorable conditions. Anticipation or being anticipated can define agency (Poli and Valerio 2019;Simondon 1964). Anticipators act elementally from anticipatory fictions rather than from representations of future or the world. Workable fictions are often reflexive (Bourdieu and Wacquant 1992): they "represent" worlds that would not exist without following those fictions, including reflexively anticipated worlds that do not exist yet and may never exist. The fictional character of anticipation is particularly notable in carrying through long-term purposes of existence. Own action is the critical reflexive element. Its productive effect is characterized much more elegantly as anticipatory fiction rather than representation. An example of elaborate action with far-reaching reflexive expectancies is niche construction (Laland et al. 2019), exemplified by dam building by beavers, or soil modification by earthworms (Nuutinen 2011). Niche construction is supposed to improve the quality of the environment for the offspring, also under the ensuing long-term ecological dynamics. I argue that a worldly cognitive being does more than playing "the game of predicting the sensorium" (Allen andFriston 2018: 2464). It has an existential agenda delineated by its anticipatory plots, as I define later. The fictions have variable significance and probability of actualizing. For a while here, I start testing mythological language in its both delusional and generative or stimulating meanings to underscore these variabilities. The penetrative contrast between observing and active anticipation is well captured by the famous quip of Marx (1845: Thesis 11): "The philosophers have only interpreted the world in various ways; the point, however, is to change it." Ironically, the prototypical examples of consequential impetuous change happen to be capitalists like John D. Rockefeller. The modus operandi of entrepreneurs is brazenly mythological rather than analytical. Their innovative action is formed by incomplete visions, ambitious anticipations, and quickly devised plans. For example, Rockefeller's success was furthered by his determined, optimistic appraisal of the risks in the early oil industry (Chernow 1998: Ch. 6, 16). He daringly expanded his oil business in an unstable market, despite uncertainty of how much oil would ever be yielded from the Pennsylvania fields or anywhere else. He entreated partners to hold onto Standard Oil shares, or willingly bought them from disgruntled stockholders (Chernow 1998: 168, 181, 380). Entrepreneurs rely on their experience largely in a mythological mode as well; high rates of venture failure attest to that. Crises are commonly resolved by essentially betting on a fortunate strategy. 
For example, the diverging fortunes of Kodak and Fujifilm, the two largest manufacturers of photographic film until the 2000s, are attributed to different decisions in coping with the swift competition of digital photography (Kmia 2018). Fujifilm wagered on massive production of LCD screens, even though the competition from plasma technology was intimidating. I argue that the anticipatory aspect of aspirational mythology deeply unifies the human sciences with biology, ecology, and eventually with self-organizing phenomena in general. Living or complex existing forms require specific dispositions, habits (Fernández 2012), systemic-communal "practices" and established interaction patterns for effectual adherence to their own survival interests. These associations tempt toward anthropomorphic generalizations and pansemiotics (Salthe 2012). While making a similar argument, Ulanowicz (2010) quotes Bertrand Russell (1960: Ch. II): "Every living thing is a sort of imperialist, seeking to transform as much as possible of its environment into itself and its seed. [...] We may regard the whole of evolution as flowing from this 'chemical imperialism' of living matter." More benign but similarly active aspects of human experience and learning are underscored by Dewey (1916: Ch. II, XI). The direction-of-fit distinction (Searle 2001: 37-38) between beliefs (as having to fit the world) and desires (as seeking to alter the world) is a kindred philosophical discussion. Let us take a look at other philosophical confirmations.

Philosophical Parallels

Western philosophy has been in opposition to mythological interpretation of the world since the Greeks (Robinson 2004: Lect. 2). Modernist philosophy, especially positivism (Ayer 1936), has yet greater distaste for speculative, metaphysical narratives. But a reversal of Comte's (Comte and Lenzer 1975) theological-metaphysical-positive historical progression of knowledge is worthwhile to consider when formulating a primitive epistemology for simpler living or cognizing beings. A good reference point is MacIntyre's (1981: Ch. 10) view of the ancient societies, where everyone had to know their own place in the community as well as the correspondent privileges, duties, and performance norms; where courage and loyalty determined reliance for friendship, et cetera. My proposal boils down to assigning a pragmatic fictionalist (Matti 2019) and fallibilist stance to cognizing, anticipating beings towards the future, their own capacities and fate, and the indirectly apprehended environment. They are corporeally ready to employ their developmental stories as useful, even vital fictions rather than comprehensive, unambiguous verities. As I discuss here, indirect support for the viability of the fictionalist stance can be found in philosophy of science and post-modernist ideas. The stance embraces the Kantian a priori categorization and American pragmatism liberally. The fictionalist stance is anti-realist epistemically, but the onticity of reality is acknowledged implicitly: there would be no set-out fiction without the opposition to reality. Popper (1962: 66) writes: "Science must begin with myths, and with the criticism of myths." Living out anticipatory myths is as inescapable as falsification of scientific theories. Biological cognition and anticipation are probably closer to superstition and faith than to the best scientific practices such as Bayesian inference (Knill and Pouget 2004).
Rather than focusing on a few well-defined, immediate problems of life, the organisms may inherently follow reflexive behavioral myths that encompass the necessary wisdom for their whole term of existence. Downsides of a priori beliefs and anticipatory organization can be mild, while the probable rewards could be existentially enormous, as in Pascal's wager (Hájek 2018). From the skeptical perspective, life is an art of being right for wrong reasons. Or, in other words, organisms rely substantially on epistemic luck (Pritchard 2005), particularly when making fight-or-flight, migration or mating decisions. The relation between aspirational fiction and life is reminiscent of psycho-physical parallelism (Walker 1911), particularly of the Spinozian notion that mental and physical events do not interact causally, but are coordinated as two attributes of God. In our context, the fictions and the physical reality are coordinated by a generalized natural selection. Thereby emergent mythological meaning defines the teleology of the being and the intentionality of its behaviors. The extent of the parallelism can be extraordinary: the DNA guides the development and the living of organisms within viable contexts; values of individuals or societies direct their fate and history. Operative myths constitute the semiotic DNA of the being, a critical causal factor of its ways. Extending Kant's (1998) transcendental turn, the myths can be seen as the synthetic a priori knowledge of the cognizing being. They dynamically organize and mold its perception (and action!), impose "intuitive" frames of apprehension, and stabilize experience and performance. Anticipation itself is a kind of categorization of future scenarios. Fictional expectations as assorted Kantian-like categories determine the Umwelt (von Uexküll 1957) and routine perceptions of the cognizing being. Vaihinger's (1935: III.A) interpretation of Kant's ideas of pure reason as self-conscious fictions with practical benefits grounds his philosophy of As If. Vaihinger (1935: III.D) credits Nietzsche with a similar association of neo-Kantian ideas of instrumental cognition with Darwin's natural selection. In the same vein, evolutionary epistemology (Lorenz 1977) affirms that the synthetic a priori knowledge is shaped by natural selection. This implies that workable semantics and competences appear first in partly ad hoc ways. The world is thereby a natural selection of myths. The fictionalist perspective matches well with subtleties of post-modernism. One point of agreement is that all cognition is inferential and mediated by signs (Cahoone 2010: Lect. 31). Variable slicing by different perceptions and categorizations naturally leads to perspectivism. Derrida's (1974) critique of Western logocentrism is conforming here, but his radical deconstruction is antithetical to an appreciation of myths. Eventually though, a workable myth is to be understood roughly uniquely. Brashly rephrasing Foucault (1980), mythology is power, no less potent as an organizing or generative power than as a possibly oppressive one. Contrarian and pluralistic confirmations can be found in Lyotard's (1983) critique of metanarratives, and his account of the postmodern abundance of little narratives, language games. Not least, the outlined fictionalist stance matches well with American pragmatism (Legg and Hookway 2019), particularly with: (i) Peirce's (Peirce et al. 1935: 1.141) fallibilism, i.e., the epistemological view that no belief or theory can ever be certain; (ii) anti-skepticism (Putnam and Conant 1994: Ch.
8); & Peirce's inquiring logic of abduction and speculative grammar (Fann 1970;Ejsing 2007;Bellucci 2018); & James' (1896) will to believe as the necessary practical will for required, purposeful action and fulfilling experience; & James' functionalist, purpose-driven psychology (Robinson 2004: Lect. 47). Peirce (1935: 1.545) replaced Kant's preformed categories of understanding and forms of intuition by a dynamical stock of signs (Cahoone 2010: Lect. 17). Just as Peirce's (Peirce et al. 1935: 5.283) implicit theory of mind postulates that all thoughts are signs, biosemiotics (Emmeche and Kull 2011) proposes that animal perception, communication, behavior and metabolism are ubiquitously mediated by signs. Anticipation within systems is recognized as a semiotic process by Kull (1998) and Nadin (2012). Individual anticipation can be bluntly seen as a Peircian triadic sign (Savan 1988): a cause to anticipate can be viewed as a signifier (i.e., representamen), fulfillment of the anticipation as the correspondent signified (i.e., object), and the consequential process or its supposed scenario as the interpretant. Own action of the anticipator is typically a crucial part of the interpretant process of converting a signifying affordance to a welcome consequence. Accordingly, anticipators or their habits (West and Anderson 2016) could be considered as general manifestations of Peirce's thirdness. The difference between pragmatism and Vaihinger's (1935: viii) fictionalism is that the latter admits theoretical falsity of usable ideas, while pragmatism ties fruitful ideas to the definitions of truth and knowledge. I lean to the pragmatist side in seeing reflections of reality in workable notions and dispositions. Santayana's (1955) naturalism is even more to the point. It postulates animal faith of vital, ingrained beliefs that are essential for action and cognition. Continuing the pragmatist gist, Rorty (1979) denied foundational justification of knowledge and definability of truth. He affirmed Davidson's (2001) veridicality of existing beliefs. The truth of (mythological) knowledge could be established by the depth and the temporal extent of the parallelism with the surrounding reality and, pragmatically, with own existential purposes. The parallelism can be limited by environmental change, ecological or parasitic invasion, or own unsustainable influence. Existential Agenda, Feral Anticipation Broad universality of anticipation invites recognition of anticipatory capacities, teleological agendas in simplest cognizing, self-organizing beings. Contrary to (Rosen 1985;Nadin 2012), I consider perception-reaction cycles as prototypical anticipating entities already. Primed dynamical systems of (Vidunas 2019) can be recognized as radically open (Chu 2011), critically sensitive, provoking and causation delegating anticipators. Here I give resonating definitions of anticipatory plots, existential agendas, and discuss briefly formalization of anticipation itself. Instead of affirming the coextensiveness of semiosis and life in biosemiotics (Sebeok 2001), I suggest an equivalence of semiosis and anticipation. In addition, I formulate the causal power of feral anticipation. An anticipatory plot is a sequence of anticipations, responding actions, set outcomes, and further anticipations, actions of a cognizing being. It is an implicit script of what could happen given the right context. 
The script does not have to be rigid or definite, but may be approximate or flexible, and may have relative gaps to be filled in opportunistically. Anticipatory plots should match cognitive capabilities of the anticipator; excitatory (though not necessarily productive) reaction to anticipation fulfillments has to be possible or probable. The prescribed reaction may be objectively possible only under extraordinary circumstances, or with some "magic" assistance not specified by the anticipation. For example, an elephant might fly steadily under exceptional stormy conditions, possibly filling in a plot gap thereby. In the next section, I differentiate anticipatory plots by their plausibility or routine reliability, and suggest mythological terminology for the less dependable yet vital anticipated scenarios. Anticipatory plots address autonomy, subsistence and relational organization of the anticipator. Interesting plots are those enhancing quality or probability of prolonged existence of the anticipator. An existential agenda is a set of anticipatory plots of a cognizing being, together with their semantic meaning to its existence. It is a set of implicit anticipations, adumbration of what should happen. For example, a stray cat seeking an owner has an existential agenda, with several behavioral scripts to attract her or him. Biological life can be defined as an existential agenda that includes metabolism, self-repair, and reproduction. Emergence and evolution of life could be described within a spectrum of existential agendas. This spectrum can be imagined starting with Maslow's (1943) hierarchy of human needs by extrapolating it to existential agendas of mammals, vertebrates, multicellular and unicellular organisms, and eventually to virtually biotic hypercycles of chemical reactions. Existential needs will vary across the food chain, within territorial or hierarchical species, down to parasitic organisms, and so on. The variable complexity of agendas allows variable complexity of requisite biochemistry and information processing. Graves' (1970) levels of existence follow Maslow's hierarchy to a great extent, and fit into the delineated spectrum of existential agendas even better. A technical definition of anticipation itself is perhaps premature, because usage of this notion shifts with newly appreciated limitations of representational models and future prediction. Radical openness of anticipation is well characterized by Deacon's (2011: 27) ententionality; he uses the term ententional as "a generic adjective to describe all phenomena that are intrinsically incomplete in the sense of being in relationship to, constituted by, or organized to achieve something non-intrinsic". Cryptically, ententionality encompasses self-preservation, adaptation, functionality, satisfaction conditions, purposes, subjective experiences (Logan 2012)in a word, anticipation. The primary aspect in my focus is structural readiness for favorable conditions and predisposed self-enhancing reactions, behaviors or dynamics. That readiness constitutes a whole anticipatory story. Delegated causality in (Vidunas 2019) stipulates structural readiness for external perturbation, but the positive value of the ensuing interaction may be missing. We would not say that humanity anticipated the COVID-19 pandemics with its unpreparedness and institutional vulnerability. 
Contemporary biosemiotics postulates that life and semiosis are coextensive (Sebeok 2001), as both are teleological processes of functional organization (Kull et al. 2009). I rather suggest that semiotic processes are coextensive with broadly understood anticipation. As mentioned in Section "Philosophical Parallels", anticipation is a semiotic process on systemic (Kull 1998;Nadin 2012) and participatory individual levels, and even a Peircian triadic sign. An anticipator is a signifier of being alive or of the signified performance, opportunity, while own action, environmental processes, energetic particles are the interpretants. Anticipatory readiness points to a future scenario; thereby it performs a semiotic indication and is teleological. Anticipation pertaining to own action is tantamount to intention. The transpiring functional scripts and existential agendas have a holistic character, like good literary fiction. The association of anticipation with semiosis, recognition of anticipatory behavior in simple dynamical structures, and fictionalist construal of potential meaning are likely to lead to definition of very low semiotic thresholds (Rodríguez Higuera and Kull 2017). The protosemiotic (Sharov and Vehkavaara 2014) threshold of associating signs with action ("know-how") rather than with objects ("know-what") can be met by the mentioned primed dynamical systems already. Fernández (2015) highlights "appropriate receptive structures" in the interplay of semiotic and physical regulation. Effective anticipation of the receptive structures constitutes a dual causal force to the developed top-down regulation. Speculative realism (Harman 2002;Bryant 2020) articulates the reality that capacities of existing objects are inexhaustible by cognitive schemes of observers and consumers, or wilderness of feral things (James 2019). In contrast, I propose that complexity and life arise prototypically from interactions of feral anticipators, that is, from feral categorization, association or semiotics. Logistics and Mythology Working representation of anticipatory plots or enaction of existential agendas require material embodiment and a whole logistical system of furnishing essentials. Anticipatory plots are fulfilled by following them by means of dispositions, habits, learned behaviors, recognition of the expected context, referral to information carriers. They identify systemic (or ecological, social) constraints and familiar patterns as signs. For example, consider the DNA molecule that constitutes basically an embodied mythological story of the development and the living of an organism. It gives the "words" to the anticipating biochemistry. The reflexive machinery with ribosomes, RNA polymerase, transfer RNA (Berg et al. 2006) exemplifies existential, material modalities of the biochemical mythology. Incidentally, is the language of mythology justified right here? On the one hand, biochemical functionality and organic development are amazing and still mysterious in their arrangements. They are mythical in the nihilistic sense as well, since so many physical interventions may wreck the fine biological organization. On the other hand, routine biological meanings have to be taken at face value in organic employment or investigation. Numerous instances of optimized biochemical or physiological functionality (Bialek 2012) constitute a firm basis for organic behaviors and their biosemiotic interpretation. 
Mythological vocabulary should rather not be used beyond initial rhetorics to characterize entrenched, dependable functionality. Still, the current point is that biochemical fictions of normative organic functionality require a lot of logistical support. Besides genetic guidance, resourceful systems rely on nutrient supply, waste removal, homeostasis, neural and hormonal coordination on various scales. The right contexts and logistical support are parts of anticipatory plots. Many vital physiological mechanisms are structurally deeply protected from surprises. Organic health and well-being depend on orderly actualization of developmental plots and regular anticipations. Importance of the functional logistics is acknowledged by constructor theory (Deutsch 2013; Marletto 2015). For any physically possible circumstance or transformation, constructor theory postulates existence of a constructor, that is, an object or a process that can repeatedly and reliably bring that circumstance about. Like relational biology (Rosen 1985) or the notion of autopoiesis (Maturana and Varela 1980), constructor theory focuses on abstract organizational requirements and processes. The organizational relations have an anticipatory character, really: each involved substance fills in an expected requisite role, and more importantly, the material substances are radically open to particular demanded interventions or informational guidance. Reliability of designated functional mechanisms is variable. Allostatic (Sterling 2012) regulation through anticipatory change of somatic parameters is less firmly reliable than homeostasis. The neural-cognitive control of behavior is no less prone to errors. Here probabilistic models of predictive coding (Friston et al. 2016) apply most fittingly. Further, the genetic script for the whole lifespan may contain gaps, that is, relatively much less specified scripts for developmental or living events. In particular, sexual mating may "purposely" have indefinite, open-ended facets that would, for example, channel environmental conditions and stabilize natural selection. Lifetime learning may evolve not only through cognitive capacities, but also through anticipated patterns of growth, trials and lifetime semiosis. The comprehensive lifespan script may include a habit change, entailing a messy cognitive overhaul. Campbell's (1968) monomyth of Hero's Journey could be a good guidance to archetypical metamorphoses that are subtly anticipated in biological-cognitive lives. It is for these underspecified, barely probable scenarios that mythological language would be appropriate. Anticipatory plots do not have to be restricted to learning from past experiences or resemblances. They may encompass merely feasible but bold existential agendas, and some implicit wisdom regarding unknown unknowns (Logan 2009). Less definite but gradually effective semiotics should be particularly characteristic of ecological interactions (Ulanowicz 2010). Synergetic mutualisms arise from congruous anticipations whose actualization is somehow protected. Interactive categorizations can have a flavor of socio-cultural framing (Cassirer 1953). Both evolution and a single life prompt action in learning environments of low validity (Kahneman 2011: Part III). The list of human cognitive biases, fallacies, and heuristics (Kahneman 2011) should be a good guide of how spontaneous or anticipated semiosis happens routinelyeven if common failures to employ more objective means of cognition would remain to be explained. 
Particularly interesting are the cognitive biases based on story formation: the narrative fallacy, the halo effect, valuing associative or causal coherence. The propensity to story development reflects the key importance of anticipatory plots in any evolving cognitive-semiotic system, I reckon. The operative stories could be analyzed using the multivalued semiotics of Greimas' (1987: Ch. 6-8) narrative grammar. From the perspective of Peircian semiotics, representational cognition and predictive learning rely mainly on indexical signs. Code biology (Barbieri 2015) describes the well-established, reliable semiotics of homeostasis, development and reproduction. It is profuse with indexical signs as well. The less specified semiotics of fairly opportunistic living requires association and interpretation. Biohermeneutic approaches (Markoš 2002; Chebanov 1999) are applicable then. The emergent interpretation ought to aim at fitting the existential agenda of the organism. "Creative" association of triadic symbols generates varied anticipatory plots that could meet viable contingencies productively. Embodiment and Semiosis How does semiosis develop, either spontaneously or by inherited anticipation? The focus should be on employment of already available material and cognitive resources, or semiotic scaffolding (Hoffmeyer 2015). Anticipatory relations can build up innately, bottom-up, starting from primed structured materials and their "idealistic" demand for particular interventions. That demand is normally satisfied eventually by distinct substances. The whole vehicle of living relations is reconstructed in a born organism as a "free market" of primed genes and proteins. Available and emergent signs are linked on various scales into hypothetical patterns whose experiential affirmation is anticipated. Less reliable signs and awaited coincidences fit sporadically but productively into anticipatory plots and existential agendas. Emerging demands of the functional organization can be satisfied only by present substances, which are likely to have unrelated other roles or original conditions of existence. The substances become new affordances (Gibson 1966) for the most open-endedly anticipating components. This dynamic constitutes a form of embodiment (Glenberg 2010) and semiotic scaffolding (Hoffmeyer 2015). For example, biological information carriers probably evolved as successful targets of guidance "requests" from the anticipators, starting from arbitrary, "superstitious" sensitivities of the anticipators. This fits the paradigm of extended cognition (Clark and Chalmers 1999), epitomized by the behavior of consulting a map or a notebook. A general mechanism of embodied fulfillment of anticipatory inquiries could be quick organic development of a rich motor repertoire and mannerisms by referring to loosely related experiential memory. With that, possibly fitting knowledge is transferred across physical modalities or scales by expedient analogy. The transplanted information can be most completely encoded in one perceptual-motor modality in a manner insinuated by the theory of visual, auditory or kinesthetic learning styles (Pashler et al. 2008). These virtual embodiments are based on cognitive rather than physical resources. In that vein, behavioral economics (Kahneman and Tversky 1984) describes how human choices are determined primarily by emotional or contingent framing rather than by the objective merits of the choices.
Likewise, momentary animal interpretations and decisions are spontaneously generated based on contingent clues, impulses or impressions, generally without anything like objective deliberation. Embodiments arise as spandrels (Gould and Lewontin 1978) rather than adaptations: they are incidental scaffolds for emerging new capacities and substantive purposes. Semiosis translates recognized resources and dynamic processes into expected utility under my view that semiosis and anticipation are coextensive. Affordances and recurrent sequences of events become Peircian signs, whereby initial perceptions or triggers signify eventual benefits or outcomes under "interpretant" action or dynamics. The meaning of the signs is pragmatically fictionalist rather than precise, logocentric. Bounds of the recursive semiosis (Peirce et al. 1935: 1.339), presumably toward fundamental physical interactions in one direction and some cosmic selection in the other, are disregarded by the fictionalist stance of anticipators, as their operative level of interpretation ignores dynamical details, thermodynamic limitations, higher meanings. The most reliable signs establish persistent patterns of behavior and experience. They provide the embodiment frame for semiotic scaffolding towards rich functionality and interaction. Less reliable signs are the focus of emergent creative manipulation by a kind of free association; they become leverage points for flexible adjustment, learning and communication. Systemic or communal tendencies may evolve for stabilizing precedents and "customs". Semiotic scaffolding may recursively continue beyond material embodiment. This virtual embodiment across cognitive levels can be recognized in the techniques of competitive memorization through rich association or navigation scenarios (Foer 2011; O'Connor 2019), and in abstract cognition through metaphorical bodily sensations (Carpenter 2011; Sapolsky 2017: Ch. 15). An example of the latter is moral disgust registered as physical disgust. The James-Lange theory (James 1884) that emotions are initiated physiologically rather than mentally is another exemplar of embodiment dynamics. With genuine emotions, the somatic markers (Damasio 1994: Ch. 8) imitate Hebb's (1949) dictum "Neurons that fire together wire together" and fire together with the processing brain circuits. These scaffolded signals evidently have great weight in decision making. Focusing on the "free market" aspect of the semiotic interaction between anticipators, I recapitulate as follows. The demands of existential agendas are satisfied by haphazard, opportunistic embodiments of affording services in various forms of material modalities and cognitive constructs. This interaction of bio-economic demand and supply should extrapolate to anticipatory capacities and teleological agendas of the simplest cognizing, self-organizing beings. The simplest Umwelt, existential agenda or Peircian habit of a primed dynamical system can be recognized in the mere organization of the particular reaction. Anticipators constitute (generally non-neural) dispositional representations (Damasio 1994: 102) of demands and opportunities in the environment. The existential agendas of many entities may include becoming effectively well-designed, strangely familiar (Botsman 2017: Ch. 3) affordances to others, or fitting competitively into centripetal (Ulanowicz 2009: Fig. 4.3) autocatalytic flows. These emergent drives are analogous to the objectives of the design industry (Hinton 2014: Ch. 4).
More Fictionalism Recognition of anticipatory behavior in complex self-organizing phenomena has massive interpretive power. In turn, the fictionalist facets of anticipation clarify normativity, holism, teleology, the striving of living or cognizing beings, and untangle conceptual complications of malfunction, excess and disequilibrium. Kindred anticipatory notions of Umwelt (von Uexküll 1957), affordances (Gibson 1966), functionality (Ariew et al. 2002), abilities (Maier 2018), dispositions (Choi and Fara 2018) can be similarly smoothly analyzed from the fictionalist perspective. Norms, meanings, intentions, goals, beliefs, signals are fictions whose proper unfolding can be usefully anticipated. As the poet Muriel Rukeyser (1968: IX) writes: "The Universe is made of stories, not of atoms." Semiotics and even philosophy of language could embrace the fictionalist approach rather than the customary logocentric setting. Adopting the spirit of Vaihinger's (1935) expedient illusion, the meaning of a sign or an utterance becomes a fiction that has to be construed well by the listeners or the interpretants. Processes of communication and learning encompass homologous fictions of proper comprehension. Davidson (2005, Ch. 6) describes these fictions in communication as passable theories. Even conventions are likewise anticipatory, thus fictional, tools for minimizing misunderstanding. As well, confidence in the meaning of words and signs can be compared to Santayana's (1955) compulsive animal faith. In all, my proposal constitutes a strong kind of hermeneutic fictionalism (Woodbridge and Armour-Garb 2010) towards the context of communication and the meaning of used language. Fictionalism can be applied to theory of mind (Demeter 2013) to the extent that another mind is as unknown as the future or a novel environment. Knowing the unknown in the messy, competitive world can be accomplished opportunistically by daring, tricky epistemology while anticipating the best development. Crucially, action of living entities necessitates fictional anticipatory scripts encoded in dispositional or improvisable preparedness. Feral anticipation is a causal force in a delegating (Vidunas 2019) way. Likewise, human action is based prototypically on beliefs, thus on principally presumed and fallible knowledge. That beliefs shape our conduct is an old insight of pragmatists (Peirce et al. 1935: 5.370) and others. In particular, utopias and reformative visions drive most of determined political action, for better or worse. Nietzsche (1995: Ch. 7) writes, "action requires the veil of illusion." I highlight two fictionalist aspects of anticipation that counter the leading contemporary paradigm of cognition based on predictive coding (Friston et al. 2016): primitive forms of anticipation look more like prejudice or superficial bias than objective inference; and the basic existential epistemology may have a boldly vigorous rather than a soundly careful character. In Nietzsche's (1995) terms, living beings are life-affirming and Dionysian rather than rational and Apollonian. The wilder epistemological impulses are moderated by generalized natural selection. Acknowledgments The author would like to thank Ari Belenkiy, Marianna Benetatou, Steven Gimbel, Rimvydas Krasauskas, Jean-Marie Lehn, Markus Pawelzik, Joseph Riggio, Susumu Tanabe, Robert Ulanowicz for useful discussions and remarks. The anonymous comments of the peer review are appreciated as well. Funding Not applicable. Conflict of Interest Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
8,902.6
2021-04-01T00:00:00.000
[ "Philosophy" ]
Investigation of mechanical properties of AlSi3Cr alloy In the present paper, microstructural and mechanical properties of an innovative AlSi3Mg alloy were studied. In particular, the effect of the addition of Cr and Mn on tensile strength and impact toughness was evaluated. In fact, the presence of these elements leads to the formation of an intermetallic phase with a globular or polyhedral morphology. The role played by Cr-Mn-containing particles in the failure mechanism, and the influence of the heat treatment parameters, were therefore investigated. Moreover, tensile and impact tests were performed on A356 samples in the T6 condition, whose results were compared with the performance of the innovative alloy. Considering the static properties, the innovative alloy showed remarkable values of tensile strength, while ductility was improved only after heat treatment optimization. Poor impact toughness values were measured and the microstructural analysis confirmed the presence of coarse intermetallics, acting as crack initiation and propagation particles, on the fracture surfaces. INTRODUCTION In the last decades, the light weighting of cars and trucks has become a very widely discussed theme [1] in both the academic and the industrial world. The design of more efficient processes and the development of stronger and lighter materials enable the reduction of the weight of car and truck components, allowing a decrease of fuel consumption and of toxic emissions into the atmosphere. For such reasons, the use of aluminum alloys in transport has been increasing in recent years. Among Al alloys, Al-Si-Mg alloys are the most used for automotive castings production [2,3] because of their excellent castability, good corrosion resistance, high elongation and significant strength. In addition, the light weighting of automotive components can be achieved by enhancing the mechanical properties of Al alloys due to the presence of strengthening elements. For instance, Cr and Mn additions in Al-Si alloys lead to the formation of globular or polyhedral intermetallics [4][5][6], reducing the detrimental effect of the brittle needle-like β-Al5FeSi intermetallics, thus increasing the mechanical properties of the material. In fact, in Al-Si alloys, Fe is a common impurity that forms brittle needle-like intermetallics, known as the β-Al5FeSi phase, which are harmful to mechanical properties, particularly to tensile and fatigue behavior [4,7,8]. In addition, heat treatment is also a key factor to consider in order to optimize the performance of any Al-Si-Mg alloy. In this regard, several authors examined the effect of heat treatment parameters, in particular solution and ageing (T6), and chemical composition on microstructure, mechanical properties and precipitation sequence for Al-Si-Mg alloys [9]. For instance, Wang et al. [10] studied the effect of Mg content on both the solidification and the precipitation behavior of an AlSi7Mg casting alloy. The ageing behavior of Al-Si alloys with Mg and Cu additions was also investigated by Li et al. [11], who focused their attention on the precipitation sequence. Cr and Mn in the alloy composition do not seem to interact significantly with Mg during the ageing treatment. In fact, it was recently demonstrated that Cr-containing dispersoids already form in the AlSi3Cr alloy during the solution treatment, without changes during ageing, and that they contribute to the dispersion hardening of the material [12].
Notwithstanding the abundant information in the scientific literature about heat treatment of Al-Si-Mg alloys, for industrial production it is mandatory to define the proper heat treatment for each alloy, in order to reach a good compromise between strength and ductility. In the present paper, tensile properties and impact toughness of an innovative AlSi3Cr alloy were investigated before and after T6 heat treatment. The studied alloy is characterized by the presence of Cr and Mn in order to modify the morphology of intermetallic particles and improve material properties. This alloy was developed for the production of truck wheels by means of a non-conventional hybrid technique, which combines features of both low pressure die casting and forging processes [13]. Nevertheless, the presence of a significant amount of intermetallic phase can still represent a limit to the mechanical performance of the Cr-containing alloy, and deeper investigations are needed to evaluate its effect. The influence of time and temperature of the ageing treatment was analyzed, paying particular attention to the role of intermetallics. Furthermore, in order to better evaluate the suitability of the alloy for this application, the obtained results were compared to the properties of the commercial A356-T6 casting alloy, currently used for the production of wheels. MATERIALS AND METHODS The content of the main alloying elements in the studied alloy is shown in Tab. 1. The values are given in wt. % and were measured by an optical emission spectrometer. As mentioned in the introduction, the chemical composition is between those of the conventional alloys for LPDC and forging. In fact, the alloy under investigation is an Al-Si-Mg alloy developed for the production of truck wheels by a non-conventional hybrid technique [13], combining features of both low pressure die casting (LPDC) and forging processes. Furthermore, Cr and Mn are present as main alloying elements. Ti was used as a refining element, while no modifiers were added to the melt. Proper degassing was performed before casting. Samples to be tested in the as cast condition were directly machined to the proper shape for tensile and Charpy impact tests, while the other samples were first machined as cylinders, heat treated and then machined to the final shape according to the standards. All the samples were taken from the rim of the wheel in order to guarantee a reliable comparison of mechanical properties. Solution and aging treatments were performed in air in laboratory furnaces. Solution temperatures were chosen according to the solidus temperature measured by differential scanning calorimetry (DSC) [13], while aging temperatures were selected as suggested by good practice for this group of aluminum alloys [14]. Samples were solution treated for 3 h at 545 °C, then water quenched at 65 °C [14] and subsequently aged at 165 °C and 190 °C for 1, 2, 4, 6 and 8 h. Between quenching and ageing treatments, the samples were kept at -20 °C in order to avoid natural ageing. During the heat treatment, the temperature was additionally monitored by a thermocouple placed inside an aluminum sample in the furnace chamber. Microstructural characterization was carried out by both a Leica DMI 5000 M optical microscope (OM) and a LEO EVO 40 scanning electron microscope (SEM), equipped with an energy dispersive X-ray spectroscopy microprobe (EDS).
In addition, the sludge factor related to the Cr-Mn-rich intermetallic phase was calculated, while its average area fraction and morphology (roundness, average particle area, equivalent diameter, and maximum size) were investigated by image analysis techniques. In particular, roundness was evaluated as P²/(4πA), where P is the perimeter and A is the area of each intermetallic compound. According to the formula, a value of roundness equal to 1 corresponds to a circle and represents the minimum value: the more elongated the shape, the higher the roundness value. Vickers microhardness tests were performed on as cast, quenched and aged samples using a Shimadzu indenter with an applied load of 200 g and a loading time of 15 s. In order to guarantee reliable statistics, at least 20 measurements were carried out on each sample. Tensile tests were performed at room temperature on as cast, quenched and aged samples using an Instron 3369 testing machine with a load cell of 50 kN. The crosshead speed was 1 mm/min in the elastic field and 2 mm/min in the plastic field. Accurate elongation values were obtained using a knife-edge extensometer fixed to the gauge length of the specimens. After the tensile tests, in order to define the optimum heat treatment condition, the quality index (QI) was calculated from the values of ultimate tensile strength (UTS) and elongation (El%), according to the formula reported in [15]. Charpy impact tests were performed at room temperature on U-notched samples with standard dimensions of 10 mm x 10 mm x 55 mm. In order to consider the effect of the heat treatment on the impact strength performance of the alloy, samples were tested in as cast, quenched and aged conditions. A CEAST instrumented pendulum with an available energy of 50 J was used and data were acquired by means of a DAS 64k analyzer. In this work, only the total energies absorbed by the specimens in the different thermal conditions are correlated with the microstructural features. Tensile and impact strength tests were also performed on samples machined from commercial A356 LPDC wheels in the T6 condition in order to compare the results with those of the innovative AlSi3Mg alloy. The fracture cross-sections and surfaces of tensile and impact specimens were observed and analyzed by optical microscopy (OM) and scanning electron microscopy (SEM), respectively. Microstructural analysis and morphological analysis of intermetallics The microstructure of the AlSi3Cr alloy in the as cast condition is reported in Fig. 1 at two different magnifications. It consists of a primary dendritic phase with a small amount of a eutectic mixture. Moreover, intermetallic particles with a globular or polyhedral morphology can be frequently detected (see arrows in Fig. 1b). The chemical composition of this phase was evaluated by SEM-EDS analysis (Fig. 2 and Tab. 2). These particles are usually referred to as the α-Al(Fe,Mn,Cr)Si intermetallic phase, which forms when Cr and/or Mn are added to the alloy composition [6,16]. After heat treatment, the typical spheroidisation and coarsening of the Si eutectic particles take place (Fig. 3b). In addition, as explained in a previous work by the authors [12], during solution treatment Cr-containing dispersoids also form in the aluminum matrix. It was demonstrated that they are responsible for an increase in material hardness and influence both tensile properties and toughness [7,8,17]. Intermetallic particles are not significantly affected by the heat treatment [18].
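To make the two quantities defined in the experimental description above concrete, the following minimal Python sketch computes the roundness of a particle from its measured perimeter and area, and a quality index from UTS and elongation. The input values are hypothetical, and the quality-index expression QI = UTS + 150·log10(El%) is the Drouzy-type form commonly used for Al-Si-Mg castings; it is shown here only as an assumption, since the exact equation of ref. [15] is not reproduced in the text.

    import math

    def roundness(perimeter, area):
        # Shape factor used in the image analysis: 1 for a perfect circle,
        # increasing as the particle becomes more elongated.
        return perimeter ** 2 / (4.0 * math.pi * area)

    def quality_index(uts_mpa, elongation_pct, d=150.0):
        # Assumed Drouzy-type quality index: QI = UTS + d * log10(El%),
        # with d = 150 MPa; the exact formula of ref. [15] may differ.
        return uts_mpa + d * math.log10(elongation_pct)

    # Hypothetical example values, for illustration only
    print(roundness(perimeter=40.0, area=120.0))             # about 1.06 (near-globular)
    print(quality_index(uts_mpa=340.0, elongation_pct=5.0))  # about 445 MPa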
High-density intermetallic phases can precipitate as sludge and settle at the bottom of the furnace [19,20]. Hence, a sludge factor was defined in order to predict the formation of sludge according to the content of Fe, Mn and Cr for Al-Si-Cu alloys and to estimate if the sedimentation of this phase is likely to happen in the molten metal [19,20]. Considering the chemical composition, a sludge factor of 1.19 was calculated for the AlSi3Cr alloy, according to Jorstad [19,20] and Gobrecht [20], and it was lower than the critical level causing sludge sedimentation [5]. In addition, according to [21], the sludge factor can be correlated to the area fraction of the intermetallic particles and not to their morphology, so the former parameter was calculated by means of image analysis techniques. An average area fraction of the α-Al(Fe,Mn,Cr)Si intermetallic phase of about 0.6 % and a particle density of 56 particles/mm² were measured. Additionally, the image analysis results pointed out that this phase is characterized by an average roundness of 2.75. A more accurate evaluation of the results showed that 70 % of the particles analyzed are characterized by a roundness value lower than 3, while about 30 % of the intermetallics can reach a roundness above 3, up to 6. Particularly, very elongated intermetallics, with roundness between 5 and 6, represented only 2 % of the total investigated particles. These particles can play an important role during tensile tests since it is known that their sharp edges can behave as stress concentration points and therefore can lead to fracture [6]. All the discussed results are summarized in Tab. 3. Hardness and tensile properties The average values of Vickers microhardness and of tensile properties of the AlSi3Cr alloy in the as cast condition are summarized in Tab. 4. The standard deviation of the measured properties is also reported. The influence of the aging time on the Vickers microhardness of the studied alloy for the two considered aging temperatures, 165 °C and 190 °C, is shown in Fig. 4. As expected, it appears that the peak condition is reached earlier when ageing is performed at 190 °C rather than at 165 °C [9]. In fact, in the former case peak hardness is reached after 4 h and in the latter case after 6 h of treatment. Accordingly, over ageing occurs earlier when the heat treatment is performed at the higher temperature, while the peak hardness is about 130 HV0.2 for both the ageing temperatures. In order to investigate the evolution of the mechanical properties of the innovative alloy according to the aging time, tensile tests were performed on specimens in the same heat-treated conditions. It was found that the AlSi3Cr alloy shows a remarkable increase in strength after the ageing treatment, reaching values of UTS between 320 and 360 MPa and values of YS between 275 and 330 MPa (Fig. 5a-b). On the other hand, as expected as a drawback of any increase in material strength, an inverse correlation between ductility and strength was found. In most heat-treated conditions, the AlSi3Cr alloy shows poor elongation values. However, as shown in Fig. 5c, it is possible to reach elongation values between 4 % and 6 % with ageing treatments between 1 h and 4 h at 165 °C.
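The sludge factor quoted above can be reproduced along the following lines. The formulation SF = %Fe + 2·%Mn + 3·%Cr is the one commonly attributed to Gobrecht and Jorstad for Al-Si alloys and is assumed here, and the composition values are hypothetical placeholders, since Tab. 1 is not reproduced in this excerpt.

    def sludge_factor(fe_wt, mn_wt, cr_wt):
        # Assumed Gobrecht/Jorstad formulation: SF = %Fe + 2*%Mn + 3*%Cr (wt.%).
        # The paper reports SF = 1.19 for the AlSi3Cr alloy.
        return fe_wt + 2.0 * mn_wt + 3.0 * cr_wt

    # Hypothetical composition giving a value close to the reported one
    print(sludge_factor(fe_wt=0.15, mn_wt=0.20, cr_wt=0.21))  # 0.15 + 0.40 + 0.63 = 1.18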
Microstructural analysis of tensile specimens The loss in elongation at fracture can be explained considering that material ductility is affected by different parameters, such as the presence of brittle Si eutectic particles, α-Al(Fe,Mn,Cr)Si intermetallics, Cr-containing dispersoids and, after heat treatment, Mg2Si precipitates. In particular, in the as cast condition brittle Si particles and Fe-containing intermetallics are known to be responsible for crack propagation during the evolution of the fracture processes [7]. During heat treatment, as mentioned above, spheroidisation of Si particles and formation of Mg2Si precipitates take place. The former is reported to be positive for tensile properties [23], while the latter is responsible for a loss in ductility of the α-Al matrix [9]. A mainly ductile fracture mechanism of the matrix is observed from SEM analysis of the fracture surfaces after tensile tests in all the selected heat-treated conditions (Fig. 6). A transcrystalline fracture, typical for Al-Si alloys [24], with visible traces of micro-deformation (dimples), can be observed. As shown in Fig. 7a-b, intermetallic particles containing Fe, Mn and Cr were sometimes detected on the fracture surfaces; the EDS results of the identified particles are reported in Tab. 5. They appear to be small and not cracked, and this supports the hypothesis that they play a marginal role in the fracture initiation. As reported by some authors [8,25,26], during tensile tests α-Al(Mn,Cr,Fe)Si intermetallics are not cut by dislocations, which instead form loops around the particles and move past them, bypassing the obstacle. It is believed that the same mechanism is taking place for the studied alloy, in particular when globular intermetallic particles are present. Table 5: EDS analysis (wt. %) of the intermetallic particles shown in Fig. 7. Therefore, the main failure mechanism involves the fracture of eutectic Si particles rather than of intermetallic particles, as also visible from the two micrographs of the fracture profile of a specimen aged for 1 h at 165 °C (Fig. 8a-b); the fracture mainly follows the eutectic path. Nevertheless, some cracked intermetallic particles could also be present along the fracture surface (Fig. 8b), but they do not appear to strongly contribute to the fracture initiation and propagation. This supports what is already reported by different authors about the positive contribution to tensile properties of the modification of intermetallic morphology due to Cr and Mn addition to Al-Si-Mg alloys [7,8,17]. Intermetallic particles do not seem to be so critical for the tensile strength of the heat-treated AlSi3Cr alloy, while most of the ductility loss is probably correlated to the precipitation of hardening Mg2Si particles. Unfortunately, it is not possible to identify Cr-containing dispersoids on the fracture surface due to their irregular morphology, even though their presence in the Al matrix was demonstrated in a previous study [22]. Impact strength results During impact tests, the maximum load (Fm) was measured and the total impact energy (Wt) was calculated as the integral of the load-displacement curve from the start to the end of the test, the end being taken as the point at which the load drops to 2 % of its peak value. The two complementary contributions to the total energy, i.e. the energy at the maximum load (Wm) and the propagation energy (Wp), were also calculated. As an example, Fig.
9 reports the load-displacement curves of two selected specimens, one tested just after the solution treatment performed at 545 °C for 3 h and one after the same solution treatment and subsequent ageing carried out at 165 °C for 1 h. Tab. 6 collects the measured and calculated impact properties of the same specimens. Considering the results reported in the table, a significant variation of the impact behavior of the material was observed when the ageing treatment was performed after solution. The aging treatment for 1 h at 165 °C increases the maximum load by about 30 %, but decreases the impact strength by about 58 %. Comparing the trend of the curves, the aged specimen shows a higher maximum load, but lower displacement to fracture and displacement at the maximum load. The ratio between the propagation energy and the nucleation energy is lower after the ageing treatment, pointing out that the crack growth stability decreases. The mean impact energies of the samples in all the investigated conditions are summarized in Fig. 10. The mean value of the total impact energy obtained on the U-notched samples in the as-cast condition was equal to 2.43 ± 0.14 J and it is reported for comparison as a dotted line in the same figure. Table 6: Measured and calculated impact properties from the curves plotted in Fig. 9. Solution treatment performed at 545 °C for 3 h increases the impact energy values with respect to the as-cast condition (dotted line in Fig. 10); this is mainly due to the spheroidisation of the eutectic Si particles and the dissolution of the coarse Mg2Si particles during the solution treatment. It is well known, in fact, that solution treatment reduces the number of critical crack initiation points and therefore leads to the enhancement of the energy absorption during the impact [27]. Similarly, the partial decomposition of some Fe-containing intermetallics can contribute positively to increasing the material toughness by diminishing sharp edges at the interface with the matrix. Nevertheless, after the ageing treatment, a severe drop in impact strength takes place due to the precipitation of β'-Mg2Si particles. These particles, with their brittle behavior, increase the micro-stresses, reducing the α-Al strain: micro-cracks are more likely to originate [23,28,29,30]. The highest values of the absorbed impact energy were measured in the samples aged at 165 °C for short aging times, while after 6 and 8 h of ageing very similar values were recorded. In addition, almost constant values of the impact energy were found after the ageing treatment at 190 °C, regardless of the aging time, probably due to the fast precipitation of the hardening Mg2Si particles. Microstructural analysis of impact strength specimens Some significant SEM micrographs of the fracture surfaces of impact strength specimens are shown in Fig. 11. A typical micro-ductile morphology can be identified for the as cast and for the selected heat-treated conditions. Dimples formed around the Si eutectic particles due to the different ductility between the more deformable α-Al matrix and the less deformable Si particles. Several cracked and coarse intermetallics were observed on the fracture surfaces of the analyzed samples and some of them are pointed out in the micrographs of Fig. 12; the chemical composition of these particles, as evaluated by means of the EDS microprobe, is reported in Tab. 7. They surely play an important role in both crack initiation and propagation during the impact tests [27,30]. Table 7: EDS analysis (wt. %) of the intermetallic particles pointed out in the micrographs of Fig. 11.
Comparing the analyses performed on tensile and impact strength specimens, a significant increase in the number of intermetallics was observed on the fracture surfaces of the impact strength ones. The steep load variation during the impact tests probably induces the brittle behavior of the intermetallic particles. The intercrystalline fracture of eutectic Si crystals and of intermetallics is the cause of the fracture initiation. Then, after the principal crack is formed, the fracture seems to propagate following a quasi-cleavage path along the intermetallics, as can be seen in Fig. 13. Comparison with a conventional A356-T6 alloy Tensile and impact strength tests were also performed on samples drawn from wheels produced by the LPDC process with the traditional A356 alloy in the T6 condition (solution treatment at 540 °C for 4 h and ageing treatment at 155 °C for 2.5 h). In particular, the obtained results were compared with the performance of the innovative alloy in the best-considered heat-treated condition (ageing for 1 h at 165 °C). Data are collected in Fig. 14. From the analysis of the tensile properties, it can be observed that UTS and YS are significantly higher for the AlSi3Cr alloy than for the A356-T6 alloy (Fig. 14a). On the other hand, a slightly lower elongation is also recorded. SEM micrographs of the fracture surface of an A356-T6 tensile specimen are shown in Fig. 15 at different magnifications. The morphology of the fracture surface is typically ductile (Fig. 15a) and some coarse Fe-containing platelet-like intermetallics can be observed on the fracture surface, as shown in Fig. 15b: the EDS analysis of the identified Fe-containing platelet is reported in Tab. 8. These intermetallics are characterized by sharp edges, surely inducing detrimental effects on the tensile properties [6]. Table 8: EDS analysis (wt. %) of the intermetallic particle shown in Fig. 14. Conversely, considering the comparison reported in the graph of Fig. 14b, the average impact energy absorbed by the A356-T6 alloy is significantly higher than that of the innovative alloy in the best heat-treated condition. As previously shown in Fig. 12 and Fig. 13, the coarse secondary phases clearly identified on the fracture surfaces and on the fracture profiles play an important role in decreasing the impact strength of the heat-treated AlSi3Cr alloy. To support this assumption, Fig. 16 depicts two micrographs at different magnifications of the fracture profile of an A356-T6 impact strength specimen. Intermetallic compounds are scarcely found on the fracture profiles, even if some β-Al5FeSi particles are present, as pointed out in Fig. 16b (red arrow). The presence of these fractured platelet-like intermetallics is also confirmed by SEM/EDS analysis, as shown in Fig. 17. Figure 17: SEM micrographs of an impact strength A356-T6 specimen: fractured platelet-like intermetallic particle. Nevertheless, a Si-driven intercrystalline fracture can be considered the main cause of fracture initiation; once a critical number of fractured particles is reached, the principal crack is formed by local linkage of close microcracks. Then, cracks propagate following a preferential interdendritic path, with a predominantly transgranular fracture mode and sometimes assisted by microporosities (Fig. 16 and Fig. 18).
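For reference, the impact-energy quantities used throughout the impact strength discussion (total energy Wt integrated up to the point where the load falls to 2 % of its peak, the energy at maximum load Wm, and the propagation energy Wp = Wt - Wm) could be extracted from an instrumented-pendulum load-displacement record with a sketch like the following; function and variable names are illustrative, not the actual output of the DAS 64k analyzer.

    import numpy as np

    def charpy_energies(displacement_mm, load_n, end_fraction=0.02):
        # Total absorbed energy Wt: integral of load over displacement from the
        # start of the test to the point where the load drops to 2 % of its peak.
        d = np.asarray(displacement_mm, dtype=float)
        f = np.asarray(load_n, dtype=float)
        i_max = int(np.argmax(f))            # position of the maximum load Fm
        fm = f[i_max]
        after_peak = np.nonzero(f[i_max:] <= end_fraction * fm)[0]
        i_end = i_max + int(after_peak[0]) if after_peak.size else len(f) - 1
        wt = np.trapz(f[:i_end + 1], d[:i_end + 1]) * 1e-3   # N*mm -> J
        wm = np.trapz(f[:i_max + 1], d[:i_max + 1]) * 1e-3   # energy at maximum load
        wp = wt - wm                                         # propagation energy
        return fm, wt, wm, wp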
CONCLUSIONS The mechanical performance of the AlSi3Cr alloy was evaluated in different heat-treated conditions in terms of tensile strength and impact toughness. Particular attention was paid to the influence of intermetallic phases on the mechanical performance of the material. In fact, the studied alloy is characterized by the presence of quite coarse intermetallic compounds that form due to the presence of Fe, Cr and Mn. The fracture mechanism was mainly ductile and intermetallic particles appear to play a marginal role in fracture initiation. Furthermore, the alloy shows remarkable tensile strength in most heat-treated conditions, while elongation can reach values very similar to those of the conventional A356 alloy for selected aged conditions. On the other hand, poor impact toughness values were measured because, in this case, intermetallic secondary phases act as crack initiation and propagation particles. This was demonstrated by the presence of coarse cracked intermetallic particles on the fracture surfaces in both as cast and heat-treated conditions. The commercial A356 alloy exhibited a higher impact toughness than the AlSi3Cr alloy, and the observations of the fracture surfaces revealed a Si-driven main crack path, while intermetallic compounds were scarcely found. Data collected in the present work provide interesting evidence of the important role played by intermetallic particles, together with the heat treatment parameters, in the mechanical behavior of the AlSi3Cr alloy. The comparison with the commercial A356 casting alloy can be very helpful for the identification of proper applications for the studied innovative alloy.
5,618.2
2017-09-29T00:00:00.000
[ "Materials Science" ]
Investigating the Nucleation Effect of DMDBS on Syndiotactic Polypropylene from the Perspective of Chain Conformation The mechanism of nucleating agents (NAs) accelerating the crystallization of semi-crystalline polymers has received continuous attention due to the extreme importance in academic research and industry application. In this work, the nucleation effect and probable mechanism of 1,3:2,4-bis(3,4-dimethylbenzylidene)sorbitol (DMDBS) on promoting the crystallization of syndiotactic polypropylene (sPP) was systematically investigated. Our results showed that DMDBS could significantly accelerate the crystallization process and did not change the crystalline form of sPP. The in situ infrared spectra recorded in the crystallization process showed that in pristine sPP the tttt conformers decreased and the ttgg conformers increased subsequently. In sPP/DMDBS system, DMDBS could promote the increase of ttgg conformers rather than the decrease of tttt conformers. The further analysis by 2D-IR spectra revealed that ttgg conformers increased prior to the decrease of tttt conformers in the sPP/DMDBS system comparing with pristine sPP. Considering that ttgg conformers were basic elements of helical conformation of Form I crystal for sPP, we proposed a probable nucleation mechanism of DMDBS for sPP:DMDBS could stabilize the ttgg conformers which induced these ttgg conformers to pre-orientate and aggregate into helical conformation sequences as initial nuclei quickly and early to promote the sPP crystallization. Our work provides some new insights into the nucleation mechanism of NAs for sPP. INTRODUCTION The nucleating agent (NA) is an important additive which could accelerate the overall crystallization of semi-crystalline polymers and improve the performances of products. [1−3] For some widely used polymer materials, such as isotactic polypropylene (iPP), poly(L-lactic acid) (PLLA), etc., with respect to their relatively slow crystallization rate in the conventional process, the addition of NAs in matrix can advantageously shorten the processing time and lead to dramatically improved performances. [4−8] Due to the extreme importance, exploration of the exact nucleation mechanism and development of highly effective NAs have been the most concerns in both the scientific research and industry application in the past decades. Up to now, two mechanisms including epitaxial crystallization and heterogeneous nucleation are generally accepted and used to explain the nucleation effect of NAs, especially on promoting iPP crystallization. The former mechanism, epitaxial crystallization, supposes that the nucleus could form on the substrates via epitaxial interaction for the dimensional matching between lattice parameters of substrates and polymer. [9,10] The latter mechanism, heterogeneous nucleation, suggests that the addition of NAs could provide many more sites with the surface which reduces the free energy barrier to primary nucleation. [11−13] Furthermore, it was found that the polymer molecular conformation played an important role in the iPP crystallization process. [14] Yan et al. revealed that crystallization occurred when the length of helix sequences exceeded a critical value in the crystallization process of iPP. [15,16] Li et al. reported that the conformational ordering of iPP chains would take place before they packed into crystal lattice in the growth boundary layer. [17,18] Moreover, during the flow-induced crystallization of iPP, Li et al. 
found that the short sequences of long iPP chains could adopt proper conformation in the flow field and the isotropic-nematic transition happened as the initiate of crystals when the concentration or length of these conformational ordering sequences surpassed a certain threshold. [19,20] As for the iPP system containing NAs, Smith et al. found that NAs with cleft shape could affect the iPP chain conformation in the crystallization process via experimental measurements and molecular modeling. They proposed that these nucleators could bind and stabilize the chains in helical form, which would suppress the conformational transformation of iPP chain from helical to random and be beneficial to the crystallization process. [21] Similarly, Myerson et al. used molecular dynamics to compare the iPP chain conformational change in the nucleation and crystallization process of iPP. They found that the orientation of chain backbone with helical conformation in the iPP-sorbitol system was larger than the orientation in pristine iPP, which could promote the crystallization of iPP. [22] These pioneer works suggested that the nucleation action of NAs was related to their stable effect on the proper conformation, which was in favor of the crystallization process. Therefore, studying the effect of NAs on the polymer chain conformation provided us with an important sight of understanding the nucleation mechanism of NAs. Syndiotactic polypropylene (sPP) is a class of polypropylene whose methyl-group shows prevailingly syndiotactic arrangement in the main chain, which is different from the commonly used iPP. It has lower crystallinity with small spherulites and stronger chain entanglement in the amorphous region, [23−25] which makes it particularly suitable for the preparation of highly transparent films with puncture resistance or transparent pipes with better creep resistance. However, the commercial utilization of sPP was greatly limited for its slow crystallization rate during melt processing. [26,27] Different from iPP which could be nucleated by many kinds of NAs, most commonly used NAs are not very efficient for sPP. Among various NAs investigated, it was reported that the sorbitol derivatives were relatively efficient in promoting the crystallization of sPP. [28,29] Considering that there were few efficient NAs for sPP and few studies on the mechanism of NAs on promoting sPP crystallization, investigating the effect of sorbitol derivatives on the chain conformation of sPP in the crystallization process to gain new insights into the nucleation mechanism of such NAs is of great significance and helpful for developing high-efficiency NAs to promote the industrial development of sPP. In this work, the effect of 1,3:2,4-bis(3,4-dimethylbenzylidene)sorbitol (DMDBS) on promoting the crystallization of sPP was systematically investigated by DSC, rheological measurement, and FTIR spectroscopy. From the perspective of chain conformation, the conformational changes of sPP chain in the crystallization of sPP with or without DMDBS were revealed by FTIR in combination with the two-dimensional correlation analysis proposed by Noda. [30,31] For the sPP system containing DMDBS, the ttgg conformers would increase and aggregate into helical conformation sequences much quickly and early, which could perform as initial nuclei. Comparing with the homogeneous nucleation in pristine sPP, these abundant initial nuclei could promote the crystallization of sPP. 
Our work demonstrated that the nucleation action of DMDBS on accelerating the crystallization of sPP was due to its pre-orientation stable effect on ttgg conformers. EXPERIMENTAL Materials The sPP sample was pilot product produced by Petrochina Petrochemical Research Institute with weight-average molecular weight (M w ) of 130 kg·mol −1 and a polydispersity index of 1.7. The microstructure of sPP chains was characterized by 13 C-NMR and a fraction of fully syndiotactic pentad [rrrr] of 75% was determined according to the literatures. [32,33] The nucleating agent Millad 3988 (Milliken Chemical, Belgium) was used as received, whose effective ingredient was 1,3:2,4-bis(3,4dimethylbenzylidene) sorbitol (DMDBS). Sample Preparation and Characterization The samples with various contents of DMDBS (x wt%) were marked as sPP-x, which were prepared by melt-blending sPP and DMDBS using a Haake Polylab OS mixer (ThermoFisher, Massachusetts) at 180 °C for 5 min at a rotation speed of 80 r·min −1 . Both the 0.5 mm-thick sheets for rheological experiments and 50 μm-thick films for FTIR measurements were melt-pressed at 180 °C under a pressure of 10 MPa for 5 min, and then cooled naturally to room temperature. Differential scanning calorimetry (DSC). The thermal properties of the sPP samples were examined by a Mettler DSC-821e apparatus (Mettler Toledo Instruments Inc., Switzerland) with a temperature accuracy of ± 0.05 °C. The temperature scale of the DSC instrument was calibrated with indium (T m = 156.60 °C and ΔH m 0 = 28.45 J·g −1 ) as a standard. All experiments were carried out in a nitrogen atmosphere with the sample weight of 5-8 mg. In the nonisothermal melt-crystallization process, the samples were firstly heated to 200 °C and held there for 5 min to erase previous thermal history; subsequently, they were cooled to 0 °C to record the crystallization exotherms and heated to 200 °C again to record the melting endotherms for further analysis. In the isothermal melt-crystallization process, they were cooled to the selected crystallization temperature (T c ) after erasing the previous thermal history by held at 200 °C for 5 min and then kept at this temperature to record the crystallization exotherms until the crystallization finished. Wide-angle X-ray diffraction (WAXD). The WAXD measurement was conducted on Xeuss 2.0 System (Xenocs, Sassenage, France) with X-ray source consisting of a Cu Kα (λ = 1.54 Å) microfocus tube and the distance from sample to detector being 149 mm. The samples for WAXD characterization were cooled down from 200 °C to 25 °C at 10 °C·min −1 after complete crystallization. Rheological measurement. All rheological measurements were performed using a Haake Mars III parallel-plate rheometer (ThermoFisher, Massachusetts) with a heated nitrogen stream for temperature control, and disk-shaped specimens (diameter 16 mm, thickness 0.5 mm) cut from melt-compression molded sheet were used. In the cooling process, smallamplitude oscillatory shear with strain amplitude of 1% and frequency of 1 rad·s −1 was applied, and the temperature was subsequently decreased at a rate of 5 °C·min −1 from 200 °C to 70 °C. Fourier transform infrared spectroscopy (FTIR). All the FTIR spectra were recorded on a Nicolet 6700 spectrometer (Ther-moFisher, Massachusetts) equipped with a hot stage in dry air atmosphere. The samples were prepared by being sealed between two ZnS tablets. 
The spectra were collected at a 2 cm −1 resolution and 16 co-adding scans with a 2 min interval during the isothermal crystallization. The baseline correct processing of stretching vibration bands was performed by OMNIC 8.0 software. Two-dimensional correlation analysis. Spectra recorded at an interval of 2 min were selected in certain wavenumber ranges, and the generalized 2D correlation analysis was applied by 2DShige software (Shigeaki Morita, KwanseiGakuin University, Japan). The final contour maps were plotted using Origin 8.5 software. In 2D correlation maps, red systemcolored regions are defined as positive correlation intensities, while blue system-colored regions are regarded as negative correlation intensities. The Nucleation Effect of DMDBS for sPP Crystallization The crystallization and melting behaviors of sPP incorporated with various contents of DMDBS were evaluated by DSC. The cooling curves are shown in Fig. 1(a) and the crystallization temperatures (T c ) of various sPP samples are summarized in Fig. 1(c). It could be found that the T c value of the neat sample sPP-0 was at about 72.0 °C and it increased significantly to 73.4, 76.8, 79.0, and 79.4 °C for samples sPP-0.05, sPP-0.1, sPP-0.2, and sPP-0.5, respectively. At a fixed cooling rate (10 °C·min −1 ), the higher T c value of sample represents the much higher crystallization ability of polymer. The lowest T c value of sPP-0 means the neat sPP performed the lowest crystallization ability and the increase of T c values with the addition of DMDBS indicates that the crystallization ability of sPP was enhanced by DMDBS. Fig. 1(b) shows heating curves of various sPP samples, and it is found that there existed double melting peaks in the melting process of all the sPP samples. In the previous literatures, [26] it has been reported that the low temperature melting endotherm (T ml ) corresponded to the melting of primary crystallites formed during cooling and the high temperature melting endotherm (T mh ) represented the melting of the recrystallized crystallites formed during a subsequent heating scan. The melting temperatures (T ml and T mh ) of sPP composites versus DMDBS loadings are also summarized in Fig. 1(c). It is clear that the T ml value of sPP-0 was at about 115.2 °C and it increased to 116.5, 116.7, and 117.9 °C for sPP-0.05, sPP-0.1, and sPP-0.2, respectively, and kept constant at about 117.9 °C even the content of DMDBS was 0.5 wt% for sPP-0.5. It is well known that the higher T m value denotes the crystallite with greater stability (i.e. thicker lamellae). [28] The increase of T ml values showed that DMDBS could improve the stability of primary crystals. Meanwhile, T mh remained at about 129.0 °C in the concentration range of DMDBS studied from 0 wt% to 0.5 wt%. Furthermore, the crystallinities of sPP with various DMDBS contents were also determined by WAXD measurement and the data are listed in Fig. 1(d). It could be found that all the five samples had similar crystallinities of around 22% in the NA concentration range studied from 0 wt% to 0.5 wt%. This result indicates that the addition of DMDBS could promote the crystallization but have no obvious effect on the crystallinity of sPP. The nucleation effect of DMDBS was further investigated by isothermal crystallization. Fig. 2(a) shows the DSC traces of various samples isothermally crystallized at 110 °C and it could be found that with the content of DMDBS increased from 0 wt% to 0.5 wt%, the DSC curves of samples became much more distinct. 
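As a concrete illustration of how isothermal DSC exotherms of the kind just described are usually reduced to kinetic quantities (the relative crystallinity X(t), and then the parameters of the Avrami analysis that follows), the Python sketch below integrates a heat-flow trace and performs the linear Avrami fit of log[−ln(1 − X(t))] versus log t. It is only a minimal sketch: the baseline correction is assumed to have been done already and the exotherm taken as positive, and the file name and array names are hypothetical, not taken from the paper.

```python
import numpy as np

def relative_crystallinity(t, heat_flow):
    """X(t): cumulative exotherm area divided by total area (baseline-corrected, positive exotherm assumed)."""
    dH = np.cumsum(0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(t))  # trapezoidal integration
    return np.concatenate(([0.0], dH / dH[-1]))

def avrami_fit(t, X, lo=0.05, hi=0.95):
    """Fit log[-ln(1 - X)] = log k + n log t over the conversion window [lo, hi]."""
    mask = (X > lo) & (X < hi) & (t > 0)
    y = np.log10(-np.log(1.0 - X[mask]))
    x = np.log10(t[mask])
    n, logk = np.polyfit(x, y, 1)             # slope = Avrami exponent n, intercept = log10 k
    k = 10.0 ** logk
    t_half = (np.log(2.0) / k) ** (1.0 / n)   # crystallization half-time
    return n, k, t_half

# Hypothetical usage with a digitized exotherm (time in min, heat flow in mW):
# t, hf = np.loadtxt("sPP-0.2_110C.txt", unpack=True)
# X = relative_crystallinity(t, hf)
# n, k, t05 = avrami_fit(t, X)
```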
The Avrami equation is a widely used method for analyzing the crystallization kinetics including the half-time (t 0.5 ) of isothermal crystallization, the Avrami exponent n, and the crystallization rate constant k. [34,35] Fig. 2(b) shows the plots of log[−ln(1 − X(t))] versus logt of raw experimental data and fitted lines by Avrami equation. The kinetics parameters are listed in Table 1. The t 0.5 value is an important parameter to evaluate the crystallization rate of polymer. Generally, the shorter t 0.5 represents the faster crystallization of polymer. [4] From Table 1, it is clear that t 0.5 values of sPP samples decreased with the increase of DMDBS, which indicates that the crystallization of sPP was promoted by DM-DBS. The Avrami exponents (n values) of various sPP samples varied between ~3.17 and 3.70, which is in good agreement with the values of ~3.06 to 3.79 found in an earlier study. [29] These n values are very close, which means that DMDBS did not change the type of nucleation and growth geometries of sPP. As expected, the crystallization rate constants k of sPP samples varied markedly depending on the addition of DM-DBS at the given crystallization temperature. When the contents of DMDBS were 0 wt%, 0.05 wt%, 0.1 wt%, 0.2 wt%, and 0.5 wt%, the values of k could be obtained as 1.40 × 10 −7 , 2.44 × 10 −7 , 4.64 × 10 −7 , 22.38 × 10 −7 , and 102.8 × 10 −7 min −n , respectively. The increase of the values of k indicates that the crystallization of sPP was accelerated by DMDBS. The sPP crystallization with or without DMDBS was also characterized by rheological measurement. Fig. 3 shows the elastic modulus (G' in Pa) of sPP-0 and sPP-0.2 as a function of temperature upon cooling from 200 °C to 70 °C at 5 °C·min −1 . It is clear that G' of sPP-0 increased linearly and then a sudden sharp increase by a factor of 2 was observed at about 88.9 °C, while the sudden sharp increase of G' curve for sPP-0.2 was observed at about 93.5 °C (shown in the enlarged view of black rectangle from 80 °C to 100 °C in Fig. 3). The onset temperature (T onset ) of the sudden sharp increase represents the start of sPP crystallization. The higher T onset for sPP-0.2 (at about 93.5 °C) showed that the crystallization of sPP-0.2 was promoted by DMDBS, which is in accordance with the result of DSC. In the enlarged view of black circle from 100 °C to 160 °C in Fig. 3, relatively small but distinct increase was detected on the curve of sPP-0.2 at about 134 °C before the crystallization occurred, while the small increase was absent in the curve of sPP-0. Therefore, this small increase on the G' curve of sPP-0.2 should be caused by the addition of DMDBS. In the previous studies of iPP/DMDBS system, it has been reported that DMDBS crystallized as thin fibrils from the iPP melt firstly and then these fibrils acted as NA to promote the iPP crystallization. [36−39] Therefore, for the sPP/DMDBS system we investigated in this work, the nucleating ability of DMDBS for sPP may be also relevant to DMDBS thin fibrils. In summary, DMDBS was an effective NA for sPP which could not only promote the crystallization rate but also improve the stability of primary crystallites. The Conformational Changes of sPP Chain in the Crystallization Process In the previous works of sPP crystallization, Huang et al. reported that the development of sPP chain conformation was an important process in the crystallization process. [40] Wang et al. 
revealed that in the isotropic state of the sPP melt, the most common conformers were ttgg and tttt, corresponding to the basic elements of the helical and trans-planar zigzag conformations, respectively. They found that the existence of the ttgg conformer might be the driving force for the formation of short helical sequences, [41] in accordance with the helical structure of the Form I crystal. Huang et al. proposed that some chains in the amorphous component change conformation first, and that crystallization then occurs as the sPP chains form the characteristic helical conformation. [42] These works revealed that the change of sPP chain conformation can govern the crystallization process of sPP; studying the effect of DMDBS addition on the sPP chain conformation should therefore give useful information for understanding, from the perspective of chain conformation, how DMDBS accelerates the crystallization of sPP. In the in situ FTIR spectra recorded during isothermal crystallization (Figs. 4(a) and 4(b)), five conformation-sensitive bands of the sPP chain can be identified, which correspond to different conformational structures of sPP; the characteristic infrared vibrational assignments for sPP have been reported and are summarized in Table S1 (in the electronic supplementary information, ESI) to describe the various conformations of the sPP chains. The bands at 811, 867, and 977 cm −1 correspond to the 4 1 helix conformation of sPP chains in the Form I crystal, while the bands at 826 and 963 cm −1 are associated with the trans-planar conformation of sPP chains in the mesophase or amorphous phase. [43−46] In Fig. 4(a), it is found that the intensities of the bands at 811, 867 and 977 cm −1 increased gradually with time, which indicates that the helical conformation increased in sPP-0. Meanwhile, the intensities of the bands at 826 and 963 cm −1 decreased with time, which indicates that the trans-planar conformation decreased. The increase of the helical conformation and the decrease of the trans-planar conformation reflect the transformation from the melt to Form I, i.e., the occurrence of crystallization in sPP-0. In Fig. 4(b), the conformation-sensitive bands of sPP-0.2 show a behavior similar to that of sPP-0, with the helical conformation increasing and the trans-planar conformation decreasing. Thus, the formation of Form I also occurred in sPP-0.2. The WAXD characterization of the samples after crystallization confirmed that the crystalline structures of both sPP-0 and sPP-0.2 were Form I (see Fig. S1 in ESI). Figs. 4(c) and 4(d) show the relative intensities of the FTIR bands at 811, 826, 867, 963, and 977 cm −1 as a function of time during the isothermal crystallization of sPP-0 and sPP-0.2 at 110 °C. The relative intensities of the conformation-sensitive bands of sPP-0 approached a plateau at about 60 min, while those of sPP-0.2 reached equilibrium at about 40 min. Therefore, the FTIR band intensities of sPP-0.2 changed faster than those of sPP-0, which indicates that the conformational change of sPP was promoted by DMDBS. For further analysis of the band intensity changes during crystallization, the peak position of the first-order derivative of the FTIR band intensity profiles as a function of crystallization time is a suitable parameter to describe the characteristic time of the change, [40] as shown in Fig. 5. In Fig.
5, the rough change ratios of conformational sensitive bands of sPP-0 and sPP-0.2 could be obtained as 963 cm −1 > 826 cm −1 ~ 977 cm −1 > 867 cm −1 ~ 811 cm −1 and 977 cm −1 ~ 867 cm −1 > 963 cm −1 > 826 cm −1 ~ 811 cm −1 , respectively. It is found that the sensitive bands at 977 and 867 cm −1 started to change earlier than 963 and 826 cm −1 in sPP-0.2, while the sensitive bands at 963 and 826 cm −1 started to change earlier than 977 and 867 cm −1 in sPP-0. This result means that the change of helical conformation was earlier than trans-planar conformation in sPP-0.2, which is different from the change order in sPP-0. Therefore, it could be concluded that DMDBS could promote the change of helical conformation rather than trans-planar conformation. From the analysis of original 1D FTIR spectra, we found that DMDBS could promote the conformational change of sPP chain, especially for the helical conformation. However, the change ratio sequences of conformation sensitive bands of sPP samples obtained from Fig. 5 were rough and the change order of some conformation sensitive bands was difficult to determine accurately, which limits the further analysis of the conformational change of sPP chain in the crystallization process. To obtain further useful and clear information of the conformational change of sPP chain, two-dimensional correlation (2DCOS) was considered for the analysis of FTIR spectra. 2DCOS is a mathematical analytical method first proposed by Noda, [47] and it has been extensively applied to trace spectra fluctuations of diverse external perturbations such as temperature, time, and concentration. [48,49] 2DCOS can capture the subtle information which is not obvious in 1D spectrum and improve the spectral resolution. Hence, both the FTIR spectra of sPP-0 and sPP-0.2 during isothermal crystallization from 0 min to 80 min with an internal of 2 min were used to perform the 2DCOS. The 2D-IR correlation spectra were obtained including two types of spectra, 2D synchronous and asynchronous spectra. These correlation spectra were characterized by two independent wavenumber axes (ν 1 , ν 2 ) and a correlation intensity axis. The correlation intensity in the 2D synchronous and asynchronous maps reflects the relative degree of in-phase or out-of-phase response, respectively. The warm colors (red) are defined as positive intensities, while the cool colors (blue) are defined as negative ones. The 2D synchronous spectra are symmetric with autopeaks along the diagonal line and crosspeaks (Φ(ν 1 ,ν 2 )) which are off-diagonal peaks, and the 2D asynchronous spectra are asymmetric with only off-diagonal cross-peaks (Ψ(ν 1 ,ν 2 )). According to Noda's rule, [31] when Φ(ν 1 ,ν 2 ) > 0, if Ψ(ν 1 ,ν 2 )) is positive (red-colored area), band ν 1 will vary prior to band ν 2 ; if Ψ(ν 1 ,ν 2 ) is negative (blue-colored area), band ν 1 will vary after ν 2 . However, this rule is reversed when Φ(ν 1 ,ν 2 ) < 0. Briefly, if the symbols of the cross-peak in the synchronous and asynchronous maps are the same (both positive or both negative), band ν 1 will vary prior to band ν 2 ; if the symbols of the cross-peak are different in the synchronous and asynchronous spectra (one positive, and the other one is negative), band ν 1 will vary after ν 2 under the environmental perturbation. Fig. 6 shows the 2D synchronous and asynchronous spectra in the isothermal crystallization process of sPP-0. 
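To make the construction of the synchronous and asynchronous maps concrete, the following Python sketch computes generalized 2D correlation spectra from a time-ordered stack of IR spectra, using the mean spectrum as reference and the Hilbert-Noda transformation for the asynchronous part. This is an illustrative reimplementation rather than the 2DShige software actually used here; `spectra` is assumed to be an array of shape (number of spectra × number of wavenumber points) collected at equal 2-min intervals.

```python
import numpy as np

def noda_2dcos(spectra):
    """Return (synchronous, asynchronous) 2D correlation maps for an (m, p) array
    of spectra recorded at equally spaced values of the perturbation (here: time)."""
    m, p = spectra.shape
    dyn = spectra - spectra.mean(axis=0)          # dynamic spectra (reference = mean spectrum)

    # Synchronous correlation: covariance of the dynamic spectra
    sync = dyn.T @ dyn / (m - 1)

    # Hilbert-Noda transformation matrix: N[j, k] = 1 / (pi * (k - j)) for k != j, 0 on the diagonal
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        N = 1.0 / (np.pi * (k - j))
    N[j == k] = 0.0

    # Asynchronous correlation
    async_ = dyn.T @ (N @ dyn) / (m - 1)
    return sync, async_

# Hypothetical usage:
# sync, async_ = noda_2dcos(spectra)     # spectra: shape (41, n_wavenumbers), 0-80 min every 2 min
# phi, psi = sync[i1, i2], async_[i1, i2]  # i1, i2: indices of two band positions
```

The sign pattern of Φ(ν1, ν2) and Ψ(ν1, ν2) at a cross-peak is then read off exactly as in Noda's rule stated above.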
According to Noda's rule for the determination of sequence order described above, the final sequence order of the conformational changes of the sPP chains could be described as 963 cm −1 > 826 cm −1 > 977 cm −1 > 867 cm −1 > 811 cm −1 (the determination details of the sequential order of sPP-0 are listed in Table S2 (in ESI), and ">" means earlier than or prior to). According to the band assignments shown in Table S1 (in ESI), this sequence order could also be described as CH 3 rock (trans-planar in interfacial) > CH 2 rock (trans-planar in mesophase or amorphous) > CH 3 rock (helical in interfacial) > CH 2 rock (helical in crystalline) > CH 2 rock (helical in crystalline). This sequence order indicates that the trans-planar conformation changed prior to the helical conformation in the crystallization process. In the study of variations of regular conformational structures in the sPP melt, Wang et al. revealed that the changes of the helical and trans-planar conformations were related to the changes of their basic element conformers ttgg and tttt, respectively. [41] Thus, we can investigate the development of the tttt and ttgg conformers during crystallization by comparing the changes of the trans-planar and helical conformations. From the sequence order obtained from Fig. 6, it can be found that the decrease of the trans-planar conformation occurred earlier than the increase of the helical conformation, which suggests that the transformation of the tttt conformers was prior to the aggregation of the ttgg conformers. Therefore, it can be concluded that in the crystallization process of pristine sPP, the tttt conformers transform first, and then the ttgg conformers increase beyond a certain threshold and aggregate into helical structures as homogeneous nuclei in the melt to induce crystallization. Fig. 7 shows the 2D synchronous and asynchronous spectra calculated from the spectra obtained during the isothermal crystallization of sPP-0.2 at 110 °C from 0 min to 80 min, and the final sequence order of the conformational transformation of the sPP chains could be described as 977 cm −1 > 867 cm −1 > 963 cm −1 > 826 cm −1 > 811 cm −1 (the determination details of the sequential order of sPP-0.2 are listed in Table S3 (in ESI), and ">" means earlier than or prior to), or CH 3 rock (helical in interfacial) > CH 2 rock (helical in crystalline) > CH 3 rock (trans-planar in interfacial) > CH 2 rock (trans-planar in mesophase or amorphous) > CH 2 rock (helical in crystalline). This sequence order indicates that the helical conformation changed prior to the trans-planar conformation in the crystallization process. As noted above, the changes of the ttgg and tttt conformers can be investigated by analyzing the development of the trans-planar and helical conformations during crystallization. The sequence order obtained from Fig. 7 shows that the increase of the helical conformation was prior to the decrease of the trans-planar conformation. This result suggests that the aggregation of the ttgg conformers was earlier than the transformation of the tttt conformers. Thus, it is reasonable to conclude that the addition of DMDBS could make the ttgg conformers aggregate into helical structures much earlier and more quickly, which promoted the crystallization of sPP-0.2.
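The sequential orders quoted above follow mechanically from the sign rule restated in the previous section; a small helper of the kind sketched below (purely illustrative, not part of the paper's analysis) makes that bookkeeping explicit for a single cross-peak.

```python
def noda_order(phi, psi):
    """Apply Noda's rule to one cross-peak: phi = synchronous intensity, psi = asynchronous intensity.
    Returns which band changes first, or 'undetermined' if a sign is zero."""
    if phi == 0 or psi == 0:
        return "undetermined"
    return "nu1 before nu2" if (phi > 0) == (psi > 0) else "nu1 after nu2"

# Hypothetical cross-peak intensities read off the maps:
# noda_order(phi=+0.8, psi=+0.3)   -> 'nu1 before nu2'
# noda_order(phi=+0.8, psi=-0.3)   -> 'nu1 after nu2'
```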
The Possible Mechanism of DMDBS Promoting the Crystallization of sPP In a study of the structural changes in the crystallization process of iPS, it was found that the order of conformational change in the nucleation period and in the subsequent crystal growth process was the same. [50] According to the Hoffman and Lauritzen (HL) theory of polymer crystallization, nucleation and crystal growth are two successive processes in polymer crystallization. [51] It is therefore reasonable to speculate that the sequence orders of the conformational changes of the sPP chains in the nucleation and crystal growth processes are also the same. In other words, the sequence order of the conformational changes in the nucleation process, which occurs first, determines the sequence order of the conformational changes in the whole crystallization process, much like a chain of dominoes. Therefore, the details of the conformational changes of the chains in the nucleation process can be inferred from the analysis of the 2D-IR correlation spectra of sPP-0 and sPP-0.2, and a schematic diagram of the nucleation and crystallization process of sPP with or without DMDBS is proposed in Scheme 1. In the isotropic sPP melt, the most common conformers are the ttgg and tttt conformers. In the nucleation process of pristine sPP, the tttt conformers first transform into ttgg conformers, which makes the content of ttgg conformers increase beyond a certain threshold; these ttgg conformers then aggregate and form the helical conformation, which acts as homogeneous nuclei in the melt. These homogeneous nuclei induce the crystallization of pristine sPP. When DMDBS is added to sPP, it crystallizes as thin fibrils during cooling of the sPP melt and exerts a stabilizing and pre-orienting effect on the ttgg conformers. This effect makes the ttgg conformers pre-orientate and aggregate as initial nuclei immediately in the nucleation process, prior to the transformation of tttt conformers into ttgg conformers. Compared with the homogeneous nuclei formed by the sPP chains themselves in pristine sPP, many more initial nuclei can form quickly and early in the presence of DMDBS and promote the crystallization of sPP. CONCLUSIONS In this work, the effect of DMDBS on promoting the crystallization of sPP was systematically investigated. The results of DSC and rheological measurements revealed that the efficient nucleating ability of DMDBS for sPP may be related to the DMDBS thin fibrils. By comparatively analyzing the 2D-IR spectra of sPP-0 and sPP-0.2, it was found that in the nucleation process of pristine sPP, the tttt conformers transform to ttgg conformers first and thereby increase the concentration of ttgg conformers. After the concentration of ttgg conformers surpasses a certain threshold, they aggregate into helical structures as homogeneous nuclei and the crystallization of pristine sPP occurs. When DMDBS is added to sPP, it exerts a stabilizing effect on the ttgg conformers, which induces them to pre-orientate and aggregate immediately as initial nuclei. Therefore, compared with the homogeneous nucleation process in pristine sPP, more initial nuclei form more quickly and earlier, which promotes the crystallization of sPP. Electronic Supplementary Information Electronic supplementary information (ESI) is available free of charge in the online version of this article at
Chemical Constituents from Andrographis echioides and Their Anti-Inflammatory Activity Phytochemical investigation of the whole plants of Andrographis echioides afforded two new 2′-oxygenated flavonoids (1) and (2) and two new phenyl glycosides (3) and (4), along with 37 known structures. The structures of the new compounds were elucidated by spectral analysis and chemical transformation studies. Among the isolated compounds, 1–2 and 6–19 were examined for their iNOS inhibitory activity. The structure-activity relationships of the flavonoids for their inhibition of NO production are also discussed. Introduction Andrographis (Acanthaceae) is a genus of about 40 species, various members of which have a reputation in indigenous medicine. In traditional Indian medicine, several Andrographis species have been used in the treatment of dyspepsia, influenza, malaria and respiratory infections, and as an astringent and an antidote for the poisonous stings of some insects [1,2]. More than 20 species of Andrographis have been reported to occur in India. The phytochemistry of this genus has been investigated quite well in view of its importance in Indian traditional medicine, and the genus has been reported to contain several flavonoids [3,4] and labdane diterpenoids [5-10]. A. echioides, an annual herb occurring in South India, is listed in the Indian Materia Medica as a remedy for fevers. However, information on the chemical composition and bioactivity of this species is very scarce. In the previous literature, flavonoids are the only major components reported from the extracts of A. echioides [11-14]. As part of our program to study the bioactive constituents of Andrographis species [15,16], we have investigated the whole plant of A. echioides, and four new compounds (1-4) were characterized. Herein, we report the structure elucidation of compounds 1-5 and the effects of the flavonoids on NO inhibition in LPS-activated mouse peritoneal macrophages. Anti-Inflammatory Activity Inflammation is related to the morbidity and mortality of many diseases and is recognized as part of the complex biological response of vascular tissues to harmful stimuli. It is the host response to infection or injury, which involves the recruitment of leukocytes and the release of inflammatory mediators, including nitric oxide (NO). NO is the metabolic by-product of the conversion of L-arginine to L-citrulline by a class of enzymes termed NO synthases (NOS). Numerous cytokines can induce the transcription of inducible NO synthase (iNOS) in leukocytes, fibroblasts, and other cell types, accounting for enhanced levels of NO. In experimental models of acute inflammation, inhibition of iNOS can have a dose-dependent protective effect, suggesting that NO promotes edema and vascular permeability. NO also has a detrimental effect in chronic models of arthritis, whereas protection is seen with iNOS inhibitors. The iNOS-inhibiting potentials of 1-2 and 6-19 were evaluated by examining their effects on LPS-induced, iNOS-dependent NO production in RAW 264.7 cells, with cell viability determined by MTT assays. Cells cultured with 1-2 and 6-19 at the different concentrations used, in the presence of 100 ng/mL LPS for 24 h, showed no change in viability except for 18 (at 42 μM); thus the NO-inhibiting effects are unlikely to be due to cytotoxicity (Table 3). In the examined concentration range (5.25-74 μM), NO production decreased in the presence of 1-2 and 6-19 in a dose-dependent manner (Table 3).
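For orientation, dose-dependent NO-inhibition and viability values of the kind reported in Table 3 are typically computed from the raw plate readings described later in the Experimental section (Griess absorbance at 540 nm, MTT absorbance at 570 nm). The short Python sketch below shows one plausible way to do this normalization; the function names, blank handling, and example numbers are assumptions for illustration, not the authors' actual workflow.

```python
import numpy as np

def percent_inhibition(a540_sample, a540_lps, a540_blank):
    """NO-production inhibition (%) relative to the LPS-only control, after blank subtraction."""
    sample = np.asarray(a540_sample) - a540_blank
    control = a540_lps - a540_blank
    return 100.0 * (1.0 - sample / control)

def percent_viability(a570_sample, a570_untreated):
    """MTT viability (%) relative to untreated cells."""
    return 100.0 * np.asarray(a570_sample) / a570_untreated

# Hypothetical triplicate readings for one compound at one concentration:
# inh = percent_inhibition([0.42, 0.40, 0.43], a540_lps=0.80, a540_blank=0.05)
# via = percent_viability([0.95, 0.97, 0.93], a570_untreated=1.00)
# print(inh.mean(), inh.std(ddof=1), via.mean())
```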
Flavonoids are widely distributed in higher plants and are capable of modulating the activity of enzymes and affecting the behavior of many cell systems, including NO production. The structure-activity relationships of 3',4'-oxygenated flavones were discussed by Matsuda [53] and Kim et al. [54]. In 1999, Kim et al. [54] examined naturally occurring flavonoids for NO production inhibitory activity in LPS-activated RAW 264.7 cells, and the following structural requirements were obtained: (a) the strongly active flavonoids possessed the C2-C3 double bond and 5,7-dihydroxyl groups; (b) the 8-methoxyl group and 4'- or 3',4'-vicinal substitutions favorably affected inhibitory activity; (c) the 2',4'-(meta)-hydroxyl substitutions abolished the inhibitory activity; (d) the 3-hydroxyl moiety reduced the activity; (e) flavonoid glycosides were not active regardless of the types of aglycones. Andrographis species are noted for profuse production of 2'-oxygenated flavones, and in the present study the bioactivity data of the examined flavonoids obtained using RAW 264.7 cells were in agreement with the previous report by Kim et al.; the following additional structural requirements of flavonoids for NO production inhibitory activity are suggested: (1) the glycosidic moiety reduced the activity, as for 9 and 14; (2) the 2'-hydroxyl group did not cause significant effects on NO inhibitory activity; (3) methylation of the 5-hydroxyl group enhanced the activity, as for 13 and 14 (Table 4). The structure-activity relationships of flavonoids for NO production inhibitory activity obtained in our study thus address gaps in the previous report. General The UV spectra were obtained with a Hitachi UV-3210 spectrophotometer. The IR spectra were measured with a Shimadzu FTIR Prestige-21 spectrometer. Optical rotations were recorded with a Jasco DIP-370 digital polarimeter in a 0.5 dm cell. The ESIMS and HRESIMS were taken on a Bruker Daltonics APEX II 30e spectrometer. The FABMS and HRFABMS were taken on a Jeol JMS-700 spectrometer. The ESIMS (negative ESI) data were measured using a Thermo TSQ Quantum Ultra LC/MS/MS spectrometer. The 1 H and 13 C NMR spectra were measured on Bruker Avance 300, 400 and AV-500 NMR spectrometers with TMS as the internal reference, and chemical shifts are expressed in δ (ppm). The CD spectrum was recorded on a Jasco J-720 spectrometer. Sephadex LH-20, silica gel (70-230 and 230-400 mesh; Merck, Darmstadt, Germany) and reversed-phase silica gel (RP-18; particle size 20-40 μm; Silicycle) were used for column chromatography, and silica gel 60 F 254 (Merck, Darmstadt, Germany) and RP-18 F 254S (Merck, Darmstadt, Germany) were used for TLC. HPLC was performed on a Shimadzu LC-10AT VP (Tokyo, Japan) system equipped with a Shimadzu SPD-M20A diode array detector at 250 nm, a Purospher STAR RP-8e column (5 μm, 250 × 4.6 mm) and a Cosmosil 5C 18 column. Plant Materials The whole plant of A. echioides Nees was collected from Tirupati, Andhra Pradesh, India in May 1998. The plant was authenticated by Professor C. S. Kuoh, Department of Life Science, National Cheng Kung University, Taiwan. The voucher specimens (DG-199) have been deposited in the herbaria of the Department of Botany, Sri Venkateswara University, Tirupati, India, and the Department of Chemistry, National Cheng Kung University, Tainan, Taiwan, respectively. Determination of Aldose Configuration Compounds 1-5 (each 0.5 mg) were hydrolyzed with 0.5 M HCl (0.4 mL) in a screw-capped vial at 60 °C for 1 h.
The reaction mixture was neutralized with Amberlite IRA400 and filtered. The filtrates were dried in vacuo, then dissolved in 0.1 mL of pyridine containing L-cysteine methyl ester (0.5 mg), and reacted at 60 °C for 1 h. To those mixtures were added a solution of O-tolylisothiocyanate in pyridine (5 mg/1 mL) at room temperature for 1 h. Those reaction mixtures were directly analyzed by HPLC (Cosmosil 5C 18 ARII (250 × 4.6 mm i.d. Nacalai Tesque Inc., Tokyo, Japan); 20% CH 3 CN in 50 mM acetate; flow rate 0.8 mL/min; detection, 250 nm). D-glucose (t R 40.5 min) was identified as the sugar moieties of 1-5 based on comparisons with authentic samples of D-glucose (t R 40.5 min). Cell Viability Cells (2 × 10 5 ) were cultured in 96-well plate containing DMEM supplemented with 10% FBS for 1 day to become nearly confluent. Then cells were cultured with samples in the presence of 100 ng/mL LPS for 24 h. After that, the cells were washed twice with DPBS and incubated with 100 μL of 0.5 mg/mL MTT for 2 h at 37 °C testing for cell viability. The medium was then discarded and 100 μL dimethyl sulfoxide (DMSO) was added. After 30-min incubation, absorbance at 570 nm was read using a microplate reader (Molecular Devices, Orleans Drive, Sunnyvale, CA, USA). Measurement of Nitric Oxide/Nitrite NO production was indirectly assessed by measuring the nitrite levels in the cultured media and serum determined by a colorimetric method based on the Griess reaction [55]. The cells were incubated with a test sample in the presence of LPS (100 ng/mL) at 37 °C for 24 h. Then, cells were dispensed into 96-well plates, and 100 μL of each supernatant was mixed with the same volume of Griess reagent (1% sulfanilamide, 0.1% naphthyl ethylenediamine dihydrochloride, and 5% phosphoric acid) and incubated at room temperature for 10 min, the absorbance was measured at 540 nm with a Micro-Reader (Molecular Devices, Orleans Drive, Sunnyvale, CA, USA). By using sodium nitrite to generate a standard curve, the concentration of nitrite was measured form absorbance at 540 nm. Statistical Analysis Experimental results were presented as the mean ± standard deviation (SD) of three parallel measurements. IC 50 values were estimated using a non-linear regression algorithm (SigmaPlot 8.0; SPSS Inc. Chicago, IL, USA). Statistical significance is expressed as * p < 0.05, ** p < 0.01, and *** p < 0.001. Conclusions In the previous literature, there are four Andrographis species containing diterpenoids such as andrographolide, including A. paniculata, A. affinis, A. lineata, and A. wightiana. In our investigation, the major constituents of the titled plant were flavonoids rather than the crystalline bitter principle analogous to diterpenoids. In the evaluation of NO inhibition activity, compounds 10 and 14 were the most effective and the IC 50 values were 37.6 ± 1.2 μM and 39.1 ± 1.3 μM, respectively. These results suggested that the Andrographis species are valuable sources for the discovery of natural anti-inflammatory lead drugs.
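To illustrate the nitrite quantification and IC 50 estimation described in the Measurement of Nitric Oxide/Nitrite and Statistical Analysis paragraphs above, the sketch below converts Griess absorbance to nitrite concentration via a linear sodium nitrite standard curve and then estimates an IC 50 by non-linear regression. A four-parameter logistic model is used here as one common choice; the paper reports using SigmaPlot's non-linear regression, so the exact model and settings may differ, and all names and numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def nitrite_from_absorbance(a540, std_conc_uM, std_a540):
    """Convert Griess absorbance at 540 nm to nitrite (uM) via a linear NaNO2 standard curve."""
    slope, intercept = np.polyfit(std_conc_uM, std_a540, 1)
    return (np.asarray(a540) - intercept) / slope

def fit_ic50(conc_uM, inhibition_pct):
    """Four-parameter logistic fit; returns the IC50 in the same units as conc_uM."""
    def logistic4(c, bottom, top, ic50, hill):
        # increasing dose-response: bottom at low c, top at high c, midpoint at c = ic50
        return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)
    p0 = [0.0, 100.0, np.median(conc_uM), 1.0]
    popt, _ = curve_fit(logistic4, conc_uM, inhibition_pct, p0=p0, maxfev=10000)
    return popt[2]

# Hypothetical dose-response data (% inhibition of NO production):
# conc = np.array([5.25, 10.5, 21.0, 42.0, 74.0])
# inh  = np.array([12.0, 28.0, 47.0, 63.0, 78.0])
# print(fit_ic50(conc, inh))
```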
VARIATIONAL PRINCIPLES FOR THE TOPOLOGICAL PRESSURE OF MEASURABLE POTENTIALS. We introduce notions of topological pressure for measurable potentials and prove corresponding variational principles. The formalism is then used to establish a Bowen formula for the Hausdorff dimension of cookie-cutters with discontinuous geometric potentials. 1. Introduction. Let (X, d) be a compact metric space and T : X → X be a continuous transformation. Throughout this paper we consider (X, T ) to be a time-discrete dynamical system. An important notion in the field of dynamical systems and its associated thermodynamic formalism is the topological pressure. For a function ϕ : X → R, the topological pressure with respect to (X, T ) on a given subset Z ⊆ X is defined to be
$$P_Z(T,\varphi) := \lim_{\varepsilon\to 0}\ \limsup_{n\to\infty}\ \frac{1}{n}\,\log\,\sup_{E}\ \sum_{x\in E}\exp\Big(\sum_{i=0}^{n-1}\varphi(T^i x)\Big), \qquad (1)$$
where the supremum is taken over all (ε, n)-separated sets E in Z. The above definition was introduced and discussed for Z = X and ϕ ∈ C(X, R) in [18]. The variational principle was also proven there: one has
$$P_X(T,\varphi) = \sup_{\mu}\Big\{\, h_\mu(T) + \int_X \varphi\, d\mu \,\Big\},$$
where the supremum is taken over all ergodic T-invariant Borel probability measures µ on X. The aim of this paper is to extend the definition of pressure to not necessarily continuous functions ϕ, and to prove a corresponding variational principle. Up to now there seem to be at least two systematic attempts to treat this task. In [15] dynamical systems were considered where the invariant set under study can be exhausted by an increasing sequence of subsets, such that ϕ is continuous on the closure of each subset. Consequently, the topological pressure is then defined to be the supremum of the topological pressures of ϕ on those sets. A corresponding variational principle holds under some integrability assumptions. A variational principle for sub-additive, upper semi-continuous sequences of functions was established in [4] and [2], as well as in [13] for $\mathbb{Z}^d_+$-actions. A generalization was recently given in [11] for weighted topological pressure on systems with upper semi-continuous entropy mapping. We note that in [15] and [11] Carathéodory dimension type definitions of pressure (as introduced and discussed in [17] and [16]) were used, whereas [4] and [13] extended the classical topological pressure, defined via separated sets. In this paper we also stick to the original pressure definitions given in [18]. More precisely, we extend those definitions of pressure to discontinuous ϕ, and compare them to the classical ones. Furthermore we determine several classes of functions which admit variational inequalities and principles. Various examples are given. In particular we construct an example of a potential which is not upper semi-continuous and for which the formalism of [15] cannot be applied, but which admits a variational principle for the pressure considered here. As an application, we establish a Bowen formula for the Hausdorff dimension of attractors of cookie-cutters with discontinuous geometric potentials. This is done by connecting Hofbauer's Bowen formula [12] to the pressure defined in (1) (see Theorem 7.4 and Remark 21 (a)). We then use this relation to show a continuity property of the Hausdorff dimensions of a sequence of cookie-cutters (see Theorem 8.2 and Remark 23). 2. Main results. Let (X, T ) be a dynamical system and ϕ : X → R be a measurable function. For every subset Z ⊆ X, define P Z (T, ϕ) as in (1). Theorem A (Mass Distribution Principle, Theorem 5.2). Let Z ⊆ X be a Borel set.
If µ is a Borel probability measure on X satisfying µ(Z) > 0, one has Above result is well known for ϕ ∈ C(X, R), and we show in the present paper, that the method of proof works well in the more general setting of measurable functions. Using Brin-Katok's theorem (see [3]), we can derive the following variational inequality: Theorem B (Variational Inequality, Theorem 5.3). Let h top (T ) < ∞ and µ be a T -invariant ergodic Borel probability measure. If ϕ : X → R is quasi-integrable with respect to µ, then there exists a Borel set G ⊆ X such that µ(G) = 1 and In particular, if ϕ : X → R is quasi-integrable with respect to T (see Definition 5.4), then where the supremum is taken over all T -invariant Borel probability measures µ on X. Inequality (4) was already proven for upper semi-continuous functions in [4] and [13] (see Remark 15). In both proofs it is used, that ϕ is bounded from above. Definition 3.6. Let (Ω, A, µ) be a probability space and ϕ : Ω → R be a measurable function. We call ϕ to be quasi-integrable with respect to µ, if either Quasi-integrable functions share some important properties with integrable functions. Some of them are recalled in the next two lemmas: Lemma 3.8. Let f, g : Ω → R be quasi-integrable functions with respect to µ, such that Ω f dµ+ Ω g dµ is well-defined. Then f +g is well-defined µ-almost everywhere and quasi-integrable with respect to µ, and The ergodic theorem of Birkhoff and the ergodic decomposition theorem can be restated for quasi-integrable functions. Assume (X, T ) to be a dynamical system for the rest of this section. Theorem 3.9. Let µ ∈ M T (X) and ϕ : X → R be quasi-integrable with respect to µ. Then there exists some quasi-integrable function ψ : X → R with respect to µ, such that ψ • T = ψ µ-almost everywhere and and in particular, if µ is ergodic, one has Theorem 3.10. Fix some µ ∈ M T (X) and denote by m µ the ergodic decomposition of µ, that is µ = E T (X) ν dm µ (ν). If ϕ : X → R is quasi-integrable with respect to µ, then one has In particular, E T (X) ν → X ϕ dν is a quasi-integrable with respect to m µ . 4. Topological pressure for arbitrary potentials. In this section we introduce three notions of topological pressure for not necessarily continuous potentials. Let (X, T ) be a dynamical system. 1. An function ϕ : X → R is called potential. For given potential ϕ, ∅ = Z ⊆ X, > 0 and n ∈ N define where the supremum is taken over all ( , n)-separated sets E in Z. Likewise, define where the infimum is taken over all (δ, n)-covers F of Z. Both definitions make sense: The set { z } is ( , n)-separated for all z ∈ Z, > 0 and n ∈ N, and by Lemma 3.4 there exists always a (δ, n)-cover for Z. Both limits exist, as every ( , n)-separating set is also ( , n)-separating for 0 < < , and every (δ , n)-cover is a (δ, n)-cover for 0 < δ < δ. The quantity P Z (T, ϕ) is called upper topological pressure of ϕ on Z with respect to T , and Q Z (T, ϕ) is called lower topological pressure of ϕ on Z with respect to T . Remark 1. In case ϕ : X → R is continuous, the definitions of Q X (T, ϕ) and P X (T, ϕ) coincide with the classical definitions given in [19]. In particular, one has then by [19] Theorem 9.1 Q X (T, ϕ) = P X (T, ϕ). However, as we shall see in Remark 5, above equality does not hold in general for discontinuous ϕ. Remark 2. By definition, the quantity M Z (T, ϕ, , n) is always finite. In contrast, M Z (T, ϕ, , n) may take values in [0, +∞]. 
We also have the estimate So if ϕ is bounded from above, M Z (T, ϕ, , n) < ∞ follows from [19] 7.2 Remark (5). We also note that Q Z (T, ϕ, δ), P Z (T, ϕ, ) ∈ [−∞, +∞] for all Z ⊆ X and Q ∅ (T, ϕ) = P ∅ (T, ϕ) = −∞. If ϕ is not bounded from above, but from below on one trajectory, we have the following: Proof. Choose for every n ≥ 1 some k n ≥ 0 such that ϕ(T kn x 0 ) ≥ 2 n + (n + 1) · C. Set x n := T kn x 0 . Then Thus, as { x n } is a ( , n)-separated set in X for every > 0, The next lemma follows readily from the definitions of the pressure: For every x ∈ X one has in addition where the infimum is taken over all (δ, n)-covers F of Z. By Lemma 3.4 above quantity is well-defined and finite for all ∅ = Z, and we set M ∅ (T, ϕ, δ, α, n) := 0 Next define for all Z ⊆ X, δ > 0 and α ∈ R M Z (T, ϕ, δ, α) := lim sup n→∞ M Z (T, ϕ, δ, α, n). Proof. Denote ϕ n (x) := n−1 i=0 ϕ(T i x). Let C ∈ R and F n a (δ, n)-cover of Z such that x∈Fn exp − α · n + ϕ n (x) < M Z (T, ϕ, δ, α) + 1 < C for all n > N 0 , N 0 large enough. Then The last lemma justifies the next definition: Definition 4.5. Let Z ⊆ X and δ > 0. Then As every (δ , n)-cover is a (δ, n)-cover for 0 < δ < δ, the limit exists and is called topological capacity pressure of ϕ on Z with respect to T . Remark 4. Although the quantity CP Z (T, ϕ) is called capacity pressure and its definition looks like a lower Carathéodory capacity (see [16] for a detailed introduction and discussion of this subject), it is important to emphasize, that it is not a proper Carathéodory construction for general ϕ. In particular, one cannot hope monotonicity of Z → CP Z (T, ϕ), if ϕ is discontinuous. However, the next theorem shows that CP Z (T, ϕ) recovers Q Z (T, ϕ), and gives a monotonicity relation between the pressures defined so far. This is of importance in the proof of Theorem 5.2. Proof. We may assume CP Z (T, ϕ) > −∞, which implies Z = ∅. Denote ϕ n (x) := n−1 i=0 ϕ(T i x). Fix some δ 0 > 0 such that CP Z (T, ϕ, δ) > −∞ for all 0 < δ < δ 0 . Fix furthermore an −∞ < α < CP Z (T, ϕ, δ). Then there exists some sequence { n l } l∈N , which depends on δ and α, such that where the infimum is taken over all (δ, n l )-covers F of Z. Hence there has to be a l 0 ∈ N large enough, such that for all l > l 0 . As α < CP Z (T, ϕ, δ) was arbitrarily chosen, letting α → CP Z (T, ϕ, δ) yields Repeating above argument for CP Z (T, ϕ, δ) < ∞, one obtains in addition for all δ > 0. Next we pick by Z = ∅ and Lemma 3.2 some maximal (δ, n)-separated set E n ⊆ Z. That means Z ⊆ z∈En B dn (z, δ), hence for all n ∈ N. Thus, for all δ < δ 0 , and letting δ → 0 we finally obtain Remark 5. In general, it can happen that To see this, recall the example of Remark 3. It yields Other examples for differing lower and upper pressures were given in [7]. 5. Mass distribution principle and variational pressure. Let (X, T ) be a dynamical system and ϕ : X → R be a potential. We introduce various measuretheoretic notions of pressure for discontinuous potentials. The main goal of this section is then to establish analogs of the classical mass distribution principle (see [8]) for those pressures. Definition 5.1. Given some Borel probability measure µ on X, define for x ∈ X and δ > 0 exists. It is called measure-theoretic pressure of ϕ on x with respect to µ and T . In case ϕ : X → R is measurable and bounded from below, the function is also measurable and bounded from below. Thus x → P µ (T, ϕ, x, δ) is quasiintegrable with respect to µ for every δ > 0. 
Denote by M(X) the set of all Borel probability measures on X, and by B(X) the set of all measurable functions ϕ : X → R bounded from below. For µ ∈ M(X) and ϕ ∈ B(X) the quantity is called mean measure-theoretic pressure of ϕ with respect to T and µ. Immediately by monotone convergence follows. We state now three versions of the so-called mass distribution principle, beginning with the most important: Assume that µ(Z δ,N ) = 0 for all δ > 0, N ∈ N. By definition of L, for every z ∈ Z there exists a 0 < δ z and an N z ∈ N such that for all 0 < δ < δ z and n ≥ N z . This shows Z = n≥1 N ≥1 Z 1/n,N and µ(Z) = 0, which is a contradiction. Hence we can choose As the cover F was arbitrarily chosen, letting n → ∞ results in Hence for all 0 < δ < δ . Letting δ → 0 and using Theorem 4.6 yields Now letting → 0 gives us P Z (T, ϕ) ≥ L. The case L = ∞ can be proven in the same way as above by considering the sets If we assume ϕ to be measurable and bounded from below, we immediately obtain the second version of the mass distribution principle: Corollary 1. Let µ ∈ M(X) and ϕ ∈ B(X). Suppose Z to be a Borel set satisfying µ(Z) = 1. Then one has Proof. The proof works in a similar way like the proof of Theorem 1.2 (i) in [10]. Assume −∞ < P µ (T, ϕ) < ∞ and fix > 0. Clearly Taking the limit → 0 yields P Z (T, ϕ) ≥ P µ (T, ϕ). The case P µ (T, ϕ) = ∞ works in same way. Remark 6. In the proof of Theorem 5.2, ϕ(x) ∈ R for all x ∈ X was the only property we used, whereas Corollary 1 needed more assumptions. Actually, Theorem 5.2 gives us the third important version of the mass distribution principle, if we assume finite topological entropy and quasi-integrable ϕ: for every x ∈ G. By Brin-Katok's theorem [3] there exists another Borel set G 2 ⊆ X such that µ(G 2 ) = 1 and Combining both yields for all x ∈ G =: G 1 ∩ G 2 . Here we used lim inf n→∞ (a n + b n ) = lim inf n→∞ a n + lim n→∞ b n , if b n converges in R and the sum of both limits is well-defined. Hence applying Lemma 4.2 and Theorem 5. Note that the statement of Theorem 5.3 also holds, if h top (T ) = ∞ and X ϕ dµ > −∞. Corollary 2. Assume ϕ : X → R to be measurable such that for every N > 0 there exists a µ N ∈ M T (X) with the properties: Then one has P X (T, ϕ) = ∞. In particular above statement holds, if there is a µ ∞ ∈ M T (X) satisfying X ϕ dµ ∞ = ∞. Proof. By Theorem 3.10 we have where m µ N denotes the ergodic decomposition of µ N . Thus there must exist some We now assume that ϕ : X → R is quasi-integrable for all invariant measures, which allows us to introduce the variational pressure. which is called variational pressure of ϕ with respect to T . In particular S(T, ϕ) is well-defined for all ϕ ∈ Q T (X) in case of finite topological entropy. As ϕ is a real-valued function, one sees immediately the following: Remark 8. The variational pressure can be ±∞. Consider for example the unit circle S 1 with irrational rotation R on S 1 . The unique R-invariant measure on S 1 is the normalized Hausdorff measure H 1 , which satisfies h H 1 (R) = 0. One can choose now a Borel measurable partition It turns out that by ergodic decomposition, the variational pressure can be computed as the supremum over all ergodic measures (as in the classical case): Proof. Denote s := sup h µ (T ) + X ϕ dµ : µ ∈ E T (X) . We may assume S(T, ϕ) > −∞, as S(T, ϕ) ≥ s. Let µ ∈ M T (X) and m µ be its ergodic decomposition. 
As E T (X) ν → h ν (T ) ≥ 0 and E T (X) ν → X ϕ dν are quasi-integrable functions with respect to m µ , one has by Lemma 3.8 and Theorem 3.10 Next assume µ n ∈ M T (X) to be an sequence such that lim n→∞ h µn (T )+ X ϕ dµ n = S(T, ϕ). In case S(T, ϕ) < ∞ we can choose by (6) some ergodic measure ν n ∈ E T (X) such that which shows by n → ∞ the statement. In case S(T, ϕ) = ∞ we can choose in a similar way ergodic measures ν n ∈ E T (X) such that 6. Variational principles for measurable potentials. Let (X, T ) be a dynamical system. We give a first version of the variational principle for quasi-integrable functions: One might suspect that if S(T, ϕ) = −∞, similarly to Theorem 6.1 one has Q X (T, ϕ) = −∞. It is not clear whether this holds in general, but we have the following positive result: for all x, y ∈ X and i ≥ 1. Suppose there exists an ergodic measure ν such that ν(U ) > 0 for all open sets U ⊆ X. Fix a ϕ ∈ Q T (X). If S(T, ϕ) = −∞, one has Q X (T, ϕ) = −∞. Proof. First note that B dn (x, ) = B d (x, ) for all x ∈ X and > 0. Therefore h top (T ) = 0, and S(T, ϕ) is well-defined. Furthermore X ϕ dµ = −∞ for all µ ∈ M T (X). By [19] The next theorem gives us one half of the variational principle: Theorem 6.2. In case ϕ ∈ B(X) one has In case h top (T ) < ∞ and ϕ ∈ Q T (X), one has P X (T, ϕ) ≥ S(T, ϕ). In case h top (T ) < ∞ and ϕ ∈ B(X), one has Proof. The first two statements are consequences of Lemma 5.5, Theorem 5.3 and Corollary 1. By (5), for every ergodic ν ∈ E T (X) we have That means by Lemma 5.5 In view of Proposition 1, the following second version of the variational principle holds: Proof. We show that δ x0 is an equilibrium measure. By − log δ x0 B dn (x 0 , δ) = 0 we have for all δ > 0 Remark 11. Above proof combined with Lemma 4.2 shows that for all ϕ ∈ B(X) and To prove upper estimates for the variational principle, we first introduce a new class of functions: Definition 6.3. Let ϕ ∈ Q T (X). We call ϕ upper semi-continuous with respect to T , if the following holds: If { µ n } n∈N is a sequence of atomic probability measures µ n = kn i=1 λ n i δ x n i , where (λ n i ) kn i=1 are some probability vectors and {x n i } kn i=1 ⊆ X for n ∈ N, such that there exists a µ ∈ M T (X) satisfying µ n → µ in the weak * topology, then The set of all upper semi-continuous functions with respect to T is denoted by U T (X) ⊆ Q T (X). Remark 12. The example of Remark 3 is a system, where each function in U T (X) needs to be bounded from above. Assume there is a ϕ ∈ U Id ([0, 1]), which is not bounded from above. Pick a sequence x n ∈ [0, 1], such that lim n→∞ x n = x and lim n→∞ f (x n ) = ∞. Then one has lim sup which is a contradiction. It turns out that many systems exhibit the same behaviour as the above example: Proposition 4. If (X, T ) has a periodic orbit, than each ϕ ∈ U T (X) is bounded from above. . Assume there is a measurable function ϕ : X → R such that for each n ≥ 1 there is an x n ∈ X satisfying ϕ(x n ) ≥ n. Define Then it is easy to see that µ n → µ as n → ∞. On the other hand one has lim sup n→∞ X ϕ dµ n = lim sup This shows ϕ / ∈ U T (X). We do not know whether the last statement also holds for systems where each ergodic measure has no atoms. Hence, to cover the full generality, we have to deal with cases where the pressure is infinite: Proposition 5. Suppose ϕ ∈ U T (X) and P X (T, ϕ, ) = ∞ for some > 0. Then there exists a µ ∈ M T (X) such that X ϕ dµ = ∞. Proof. Let ξ = { A 1 , . . . , A k } be a measurable partition of X such that diam(A i ) < for all i = 1, . . 
. , k. Denote by If σ is some probability measure on X, define Note that for all n ≥ 1 We first consider subsequences { n j } j∈N such that lim j→∞ 1 n j log M X (T, ϕ, , n j ) = ∞ and −∞ < M X (T, ϕ, , n j ) < ∞ for all j ∈ N. Set ϕ n (x) := n−1 i=0 ϕ(T i x) and choose ( , n j )-separated sets E nj satisfying log x∈En j exp ϕ nj (x) ≥ log M X (T, ϕ, , n j ) − 1. Next define the probability measure for every j ∈ N. Then by the definition of σ nj and [19] Lemma 9.9 Next define Combining (7), (8) and (9) yields Furthermore there exists a subsequence { n j l } and a µ ∈ M T (X) such that lim l→∞ µ nj l = µ in the weak * topology. Thus using ϕ ∈ U T (X) This implies X ϕ dµ = ∞. Now let { n j } j∈N be a subsequence such that M X (T, ϕ, , n j ) = ∞ for all j ∈ N. We can then choose ( , n j )-separated sets E nj satisfying log x∈En j exp ϕ nj (x) ≥ 2 nj , and the statement follows in the same way as above. Proposition 5 gives us a third version of the variational principle, which is in some sense the reverse direction of Theorem 6.1: Corollary 4. Let h top (T ) < ∞ and ϕ ∈ U T (X). If there is an > 0 such that P X (T, ϕ, ) = ∞, then one has S(T, ϕ) = P X (T, ϕ). We are now able to state the variational principle for all ϕ ∈ U T (X): Theorem 6.4. Let h top (T ) < ∞ and ϕ ∈ U T (X). Then one has P X (T, ϕ) = S(T, ϕ). Proof. By Theorem 6.2 it remains to show that P X (T, ϕ) ≤ S(T, ϕ). Furthermore, by Corollary 4 we may assume P X (T, ϕ, ) < ∞ for all > 0. In this situation, the proof follows the conventional proof of the classical variational principle as given in [19], Theorem 9.10. We are only outlining it here. Denote ϕ n (x) := n−1 i=0 ϕ(T i x). Fix > 0 and choose for all n ∈ N a ( , n)-separated set E n such that log x∈En exp ϕ n (x) ≥ log M X (T, ϕ, , n) − 1. Define Given 1 ≤ q < n j and 0 ≤ m ≤ q − 1, define a(m) := (n − m)/q . One can now decompose where S is a set with cardinality at most 2q. Hence Summing over all m = 0, . . . , q − 1 gives and dividing by n j yields q n j log M X (T, ϕ, , Using ϕ ∈ U T (X) and µ(∂A i ) = 0 for all i = 1, . . . , k we obtain by j → ∞ VARIATIONAL PRINCIPLES FOR MEASURABLE POTENTIALS 383 Finally dividing by q and letting q → ∞ gives that is P X (T, ϕ, ) ≤ S(T, ϕ) for every > 0. This shows the statement. Next we introduce two non-trivial classes of (dis-)continuous functions, which satisfy Theorem 6.4. Definition 6.5. Let (Y, ρ) an arbitrary metric space and ϕ : Y → R measurable. The set is called set of discontinuity points of ϕ. One can show that D ϕ is Borel measurable. Denote by C T (X) the set of all bounded, Borel measurable functions ϕ : X → R, such that µ(D ϕ ) = 0 for all µ ∈ M T (X). Proposition 6. Let { µ n } n∈N be a sequence of Borel probability measures with limit measure µ in the weak * topology. Then one has for all ϕ ∈ C T (X) In particular one has C T (X) ⊆ U T (X). Proof. The first statement is part of the Portmanteau theorem, see for example [14], Theorem 13.16. The second statement follows immediately. Proof. As ϕ is bounded, the statement follows from Proposition 6, Theorem 6.4 and Remark 13. Remark 14. We give an illustration of above corollary. Suppose (X, d) to be a non-empty, compact space and T : X → X to be a contraction. By the Banach fixed-point theorem there exists a unique fixed-point x 0 ∈ X. It is then easy to see that for every continuous ψ : X → R and every x ∈ X In particular by Lemma 4.2 one has P X (T, ϕ) = P { x } (T, ϕ) for every x ∈ X. is an open set for every c ∈ R. 
We denote the set of all upper semi-continuous functions ϕ : X → R by U(X). As X is compact, every ϕ ∈ U(X) is bounded from above (see for example [1] Theorem 2.43). This immediately yields U(X) ⊆ Q T (X). In addition, the following holds: Proposition 7. Let { µ n } n∈N be a sequence of Borel probability measures with limit measure µ in the weak * topology. Then one has for ϕ ∈ U(X) In particular one has U(X) ⊆ U T (X) for every continuous mapping T : X → X. Proof. The statement follows from Proposition 7 and Theorem 6.4. Remark 15. Corollary 6 was already proven (as a special case) in [4] (1). However, the proof in [4] for the lower bound of the variational principle requires the functions f n := exp i<n ϕ(T i ) to attain a maximum on every compact subset of X. This is clearly the case if ϕ is upper semi-continuous, as this implies the upper semi-continuity of f n . Another proof for Corollary 6 can be found in [13] (see Theorem 4.4.11). Here for the lower bound of the variational principle, the functions f n need to be bounded from above on X. In the method of proof used in the present paper, above properties are not needed. Instead, the lower estimate is first proven for ergodic measures with the help of an ergodic theorem. After that it can be extended to all invariant measures via ergodic decomposition. Remark 16. Recall the example given in Remark 14 and consider the indicator function As one has D χ = { x 0 } in general, we can not apply Corollary 5. One the other hand the function χ is upper semi-continuous, as { x 0 } is closed. Thus again by Corollary 6 P X (T, χ) = χ(x 0 ) = 1. Remark 14 motivates the following example, which shows that the pressure defined in this paper might be applied to systems, which are unavailable to the pressures and its variational principle derived in [15]. Proof. Define Clearly ϕ α is measurable and continuous in α only. Also one has lim sup y→x ϕ α (x) > 0 = ϕ α (x) for every x ∈ E \ { α }, which proves ϕ α not to be upper semi-continuous. 1 and open interval (a, b) such that (a, b) ⊆ Λ k . But that means ϕ α is not continuous on Λ k . Corollary 7. Let X := [0, 1], α ∈ (0, 1) ∩ Q and T : X → X contracting such that T (α) = α. If ϕ α is the function constructed in Proposition 8, then one has Proof. This follows from Remark 14 and Proposition 8. Remark 17. Clearly both sets U(X) and C T (X) contain all continuous functions ϕ : X → R. Moreover, Corollary 5 can be seen as variational principle for potentials which are continuous from a measure theoretical point of view. We want to emphasize that the set C T (X) heavily depends on the mapping T : X → X. The set U(X) on the other hand only depends on the metric d on X. Note that in general it may happen that C T (X) U(X) = ∅, which implies U(X) U T (X). This can be seen from the following statement: Proposition 9. Assume that (X, d) has no isolated points, and there exists a nonatomic µ ∈ E T (X). Then there is a function ϕ ∈ C T (X), which is neither upper nor lower semi-continuous on X. Proof. As µ is non-atomic, there are two distinct points x 1 = x 2 ∈ X such that lim n→∞ i<n δ T i x1 = lim n→∞ i<n δ T i x2 = µ. Next define ϕ := 1 {x1} − 1 {x2} . As X has no isolated points, ϕ is not lower semi-continuous in x 1 , and not upper semicontinuous in x 2 . That means D ϕ = {x 1 , x 2 }, and as µ has no atoms, ϕ ∈ C T (X) follows. 7. Cookie-cutters with discontinuous geometric potentials. 
In this section we introduce cookie-cutter systems with discontinuous geometric potentials and their corresponding attractors. Following this, we use the topological pressure and its variational principle for discontinuous functions to compute the Hausdorff dimension of those attractors. The definitions and notations are based on the classical treatment of cookie-cutters in [9]. We define the derivative of T on the interval endpoints x i , y i , i = 1, . . . , N to be the left or right derivatives respectively. By definition, T is not well-defined in the set D, and there is also no way to extend T continuously to D. Proof. Fix i ∈ { 1, . . . , N }. Clearly ϕ i : I i := (0, 1) T (D ∩ J i ) → J i is continuously differentiable satisfying c i := sup ξ∈Ii ϕ i (ξ) < 1. Fix x < y ∈ [0, 1] and denote by z 1 < · · · < z m , m ≥ 0, the set of points [x, y]∩T (D ∩J i ). Set z 0 := x and z m+1 := y. As ϕ i is strict monotonic on [0, 1], we have Therefore the number c := max i=1,...,N c i has the desired property. The second part follows for example from [8] Theorem 9.1. is called attractor of the cookie-cutter T : J → [0, 1] . As T |X is continuous and X = T (X), the tuple (X, T |X) is a dynamical system. Furthermore, T |X D is continuous. Thus, if ϕ : X → R is some function satisfying ϕ|X D = T |X D, one has D ∩ X = D ϕ (see Definition 6.5). We call such a function ϕ to be an extension of T to X. Clearly an extension ϕ of T to X is continuous if and only if one has D ∩ X = ∅. In that case ϕ := T |X by definition is the only possible extension. If ϕ : X → R is an extension from T to X, the function log |ϕ| : X → R is called geometric potential of T . In case D ∩ X = ∅ the system is called cookiecutter with discontinuous geometric potentials, and every possible extension ϕ defines a corresponding geometric potential. Remark 18. We shall give some examples to illustrate the notion of cookie-cutters. Let J := [0, 1 3 3 ≤ x ≤ 1. In this case the corresponding attractor X 1 is the middle-third Cantor set. The derivative T 1 is well-defined everywhere. If we change T 1 to we see that T 2 does not exist in 5 6 . One the other hand one has T ( 5 6 ) = 3 5 / ∈ J, which means 5 6 / ∈ X 2 and ϕ := T 2 |X 2 is the continuous extension. To obtain a cookie-cutter satisfying D ϕ = ∅ for every extension ϕ of T to X, one can easily modify the second example in a way that the point of discontinuity is a fixed point (see Figure 1). The goal is to compute the Hausdorff dimension dim H X of an attractor X. In case D ∩ X = ∅ it is determined by the zero of a certain pressure function, which Proof. As T is continuously differentiable on J D, holds for all x ∈ X, thus a : X → R is continuous. In addition by the third property of Definition 7.1 one has 1 < inf x∈X a(x), which implies Then by [6] Theorem 2.4 it follows thatP X (T |X, −s · log a) = 0 if and only if s = dim H X. HereP Z denotes a Carathéodory dimension type definition of topological pressure, which was first given in [17]. Furthermore, it is well-known that for all continuous ϕ : X → R and all non-empty compact T -invariant subsets Z ⊆ X one hasP Z (T |X, ϕ) = P Z (T |X, ϕ). Hence the statement follows. Remark 19. Actually Theorem 2.4 in [6] is much more powerful than above proof suggests. It basically states, that the Hausdorff dimension of every subset of an attractor is the zero of the pressure function s →P Z (T |X, −s · log a), provided the system (X, T |X) is conformal and reasonable expanding. 
Conformal in this context means that the expression (10) is well-defined and continuous on X. However the theorem cannot be applied anymore, if the limit a(x) in (10) does not exist for even one x ∈ X. In higher dimensions this can happen, if the derivative in a point exists, but has distinct eigenvalues. For a survey of recent research on the topic of non-conformal repellers, see [5]. As indicated in Remark 19, the classical thermodynamic formalism and its celebrated Bowen formula cannot be applied directly to cookie-cutters T : J → [0, 1], as the derivative might not exist in finitely many points on the attractor. Nevertheless its still possible to establish an analogous formula for them: Theorem 7.4. Let T : J → [0, 1] be a cookie-cutter and X its attractor. Then there exists a geometric potential log |ψ| : X → R such that P X (T |X, −s · log |ψ|) = 0 if and only if s = dim H X. Remark 20. As we shall see, the geometric potential can be constructed with the lower semi-continuous extension of T . It might be considered as the natural choice among all possible geometric potentials. To prove above theorem, we collect some preparatory results first: x ∈ J D, lim inf y→x T (y), x ∈ D, T |B x > 1, lim sup y→x T (y), x ∈ D, T |B x < −1. Thus x → f (x) is lower semi-continuous. As log(·) is strictly monotone increasing, the function x → log f (x) remains lower semi-continuous, whereas x → −s · log f (x) is upper semi-continuous for all s ≥ 0. The remaining parts easily follow. Lemma 7.6. The dynamical system (X, T |X) is topological transitive and satisfies h top (T |X) = log N . Fix 0 < < 1 4 . Define . Clearly g(x) = (2 − 4 ) > 1 and 0 < g(x) < 1 for all x ∈ (0, 1). Fix an i ∈ { 1, . . . , N }. Define an affine scaling Φ : (0, 1) → (y i−1 , x i ) by It is easy to see that τ satisfies the properties of an EPM map, if we linearly order all points of D ∪ { x i , y i | i = 1, . . . , N } ∪ { 0, 1 } as demanded in (11). In addition one has τ (x) / ∈ X for every x / ∈ X. This follows from the construction of τ : For every x ∈ [y i−1 , x i ] one has T i (x) ∈ (y i−1 , x i ). Furthermore every x ∈ J X will eventually be mapped by T into one of the intervals (y i−1 , x i ), where it cannot escape into X. Hence X is a closed completely invariant subset of the EPM mapping τ : [0, 1] → [0, 1] (see [12]). Clearly τ |X = T |X. Next take the function f constructed in Lemma 7.5 and define We then observe: On the other hand the corresponding pressure functions for log|ψ| might not have a zero, nor a variational principle for the topological pressure has to exist. (c) By Lemma 7.6 the entropy mapping h : is upper semi-continuous. Similarly, by Lemma 7.5 and Proposition 7 the mapping µ → X −s · log|ψ| dµ is upper semi-continuous too. Hence there exists for every s ≥ 0 an equilibrium state µ s ∈ M T |X (X) such that h µs (T |X) + X −s · log|ψ| dµ s = P X (T |X, −s · log |ψ|). In particular there is a µ s0 ∈ M T |X (X) such that Thus · X induces a norm on the R-vector space I(X), which is defined as the set of all measurable, bounded functions ϕ : X → R. Proposition 11. Let (X, T ) be a dynamical system such that h top (T ) < ∞. Then one has for all ϕ ∈ I(X) hence P X (T, ϕ) is finite. In particular one has for all ϕ 1 , ϕ 2 ∈ I(X), that is P X (T, ·) : I(X) → R, ϕ → P X (T, ϕ), is Lipschitz continuous. Proof. The first statement follows from the definition of pressure. By Remark 2 we have M X (T, ϕ, , n) to be finite for all n ∈ N, > 0. 
Thus the second statement can be similarly proven like [19] Theorem 9.7 (iv). Remark 22. Above theorem basically states, that if one has a classical, smooth cookie-cutter T : J → [0, 1], the Hausdorff dimension of its attractor changes only slightly, if one adds some tiny corners to T . Another way to view this theorem is that in terms of the dimension, a smooth cookie-cutter can be approximated by cookiecutters with discontinuous geometric potentials. Note that the theorem cannot be used the other way around, i.e. to approximate a cookie-cutter with discontinuous geometric potentials by smooth cookie-cutters, as lim n→∞ f n − f ∞ J = 0 cannot hold in this case. Figure 3. The cookie-cutters T n approaching the limit cookiecutter T ∞ . Before we prove the theorem, we recall the following lemma: Clearly one has f n ∈ I(J) for all n ∈ N ∪ { ∞ }. Assume sup n∈N f n J = ∞, then there exist some n k ∈ N, x k ∈ J such that lim k→∞ f n k (x k ) = ∞. Thus lim k→∞ f n k (x k ) − f ∞ (x k ) = ∞, which is a contradiction. Hence there is a constant C > 1 such that for all x ∈ J, n ∈ N ∪ { ∞ }. As x → log(x) is Lipschitz continuous on [1, C] with Lipschitz constant 1, we in addition have lim n→∞ − log |f ∞ | + log |f n | J ≤ lim n→∞ |f n | − |f ∞ | J = 0. This means P Σ + N (σ N , λ s n ) = sup h µ (T n |X n ) + Xn −s · log f n |X n dµ : µ ∈ M Tn|Xn (X n ) , hence, using the variational principle again, P Σ + N (σ N , λ s n ) = P Xn T n |X n , −s · log f n |X n (15) for all n ∈ N ∪ { ∞ }, s ≥ 0. Now denote s n := dim H X n for all n ∈ N ∪ { ∞ }. Let s n k be some convergent subsequence with limit s * . Recall that by (15) and Theorem 7.4 each s n k is the unique zero of s → P Σ + N (σ N , λ s n k ). Thus one has by Proposition 11,(13) and (14) P ≤ s * · − log |f ∞ • π ∞ | + log f n k • π n k Σ + N + s * − s n k · log f n k • π n k Σ + N ≤ s * · − log |f ∞ | + log f n k J + s * − s n k · log f n k J → 0 as k → ∞. This means P Σ + N (σ N , λ s * ∞ ) = 0, hence As s n ∈ [0, 1] for all n ∈ N, there exists at least one convergent subsequence. This shows by (16) that the limit of s n exists and one has lim n→∞ dim H X n = dim H X ∞ . Remark 23. The two key observations for the proof of Theorem 8.2 are: (α) The operator ϕ → P X (T, ϕ) is Lipschitz continuous. (β) For each n ∈ N ∪ { ∞ } there is a topological conjugation Σ + N → X n . To apply (α), one has to relate the variational pressure and the topological pressure via a variational principle. For this, we used Corollary 6. As mentioned in Remark 21, there is also the option to change the underlying EPM system of each X n into a new system, where dynamics and the potential are continuous again. Then one would be able to use the classical variational principle. However, by changing the system, one has to introduce new symbolic spaces F n ⊆ Σ + Nn , where the numbers N n depend on the discontinuities of the functions f n , and each F n is a subshift of the full shift on Σ + Nn . Using this approach together with our method of proof, only a weaker version of Theorem 8.2 can be proven: To satisfy (β), one has to assume in addition that all F n are pairwise topological conjugated for n ∈ N ∪ { ∞ }.
Research on the Influence of Steam Turbine Seal Leakage CFD was employed to simulate the steam flow in 1.5-stage cascades with three different seal clearances. When the seal clearance is 0, there is no steam seal leakage, no obvious secondary flow in the cascade, and the stage efficiency reaches 88.27%. When the clearance of diaphragm seal and rotor tip seal is 1mm, the leakage of diaphragm seal strongly interferes with the flow in the cascade, which promotes the formation and development of the end-wall secondary flow near rotor hub, and the rotor tip leakage flow has little effect on next stage, and the stage efficiency drops by about 2 percentage points. When the seal clearance is 4mm, the end-wall secondary flow near rotor hub and next stator casing is strengthened significantly, and the attack angle loss increases, so the stage efficiency decreases by about 13 percentage points. Introduction The performance of steam turbine flow path is directly related to the economy and safety of the entire steam turbine unit, and naturally becomes the focus of experimental research, operation optimization and technological transformation. The flow loss in the flowpath of steam turbine cascade is mainly blade profile loss, secondary flow loss and steam leakage loss [1]. Due to the advanced development of aerodynamics and other theories and manufacturing technologies, currently the flow efficiency of steam turbine has reached high level, and the blade profile loss and secondary flow loss have been greatly reduced [2,3], so steam leakage loss has become an important factor restricting the improvement of turbine flow efficiency [4]. Therefore, it is necessary to indepth study the impact of the internal seal leakage on flow performance, which will help site engineers to grasp the working characteristics and status of the steam seal, and guide the optimization or technological innovation of the steam turbine unit. In the field, the installation location of the internal seals in steam turbine stage is very special. It is very difficult to directly arrange test points on the seal, which makes it impossible to monitor the wear condition of the seal in stage, and it is impossible to judge the influence of the leakage on the flow characteristics in stage. Since it is not easy to directly measure the seal leakage in steam turbine stage [5], this paper chooses to use numerical simulation methods for research. Geometric model The blade profile and seal structure is a 1.5-stage cascade with a diaphragm seal and a rotor tip seal, which are 3 flat tooth labyrinth seal. In order to compare the flow under different seal clearances, the steam flow under three different clearances of 0, 1, and 4mm were simulated. Turbine machinery cascade has periodic flow. In order to improve the efficiency of numerical simulation and analyze the influence of steam leakage on next stage, this paper took a cascade flow-path for research. The simulation object was simplified as a complete stage including diaphragm seal and rotor tip seal, and a stator blade with diaphragm seal. The blades and their seals were referred to as S1, DS1, R1, TS1, S2, DS2. The entire computing domain was divided into 6 areas, namely areas S1, DS1, R1, TS1, S2, and DS2, as shown in Fig.1. In order to obtain a stable flow field, the front end of the computational domain was appropriately extended. The calculation area used a tetrahedral hybrid grid with strong regional adaptability, and periodic grid on the periodic boundary was adopted. 
Boundary conditions The working fluid was superheated steam. The turbulence model was standard k-İ two-equation model, and the wall function method was used to deal with the near-wall area. The calculated governing equations were Flow, Turbulence, and Energy. The control equations adopted SIMPLE algorithm of pressure-velocity coupling. In this example, the total inlet pressure was 3.2MPa, the total temperature 800K, and the incoming flow direction was perpendicular to the inlet. The static pressure value at the given median diameter of the outlet was 2.6MPa, and a simple radial balance equation was used to determine the pressure distribution on the outlet. The interface parameter transmission between the static and rotor blade areas used multi-reference model (MRF) method. S1, S2, and R1 flow field were given periodic boundary conditions on the circumferential boundary, and internal boundary conditions were given for the contact surface on radial boundary, and wall conditions were given for the rest. The wall rotation speed of R1 area was 3000rpm. The boundary conditions of the flow field in the sealing area were as follow: given the internal boundary conditions on the contact surface with the blade area, and the circumferential boundary was set as the periodic boundary conditions. Under the condition of constant stage flow rate, the boundary conditions of the inlet and outlet after the change of the seal clearances could be determined according to the Friuli Greig formula. Since there is no steam leakage from seals, the steam can fully expand in stator blade flow-path and perform work in rotor blade flow-path, and the main flow will not be disturbed by leakage. Fig.2 and Fig.3 shows the distribution of streamlines in the flow-path of stator blades S2. It can be seen that the streamlines in S2 flow-path are along the main flow direction, and the flow loss is small. The absolute value of negative attack angle at top of the flow-path further increases, comparing with that at bottom of the flow-path. This is because the circumferential velocity of steam at the leading edge of blade S2 is larger, and the circumferential speed increases along the leaf height. Fig.7 shows the streamline in the rotor blade flow-path. There is no obvious secondary flow at the endwall of rotor blade R1, and no passage vortex and lateral secondary flow from the pressure surface to the suction surface are observed. The streamlines regularly flow out of flow-path along the main flow direction. This is because there is no turbulence caused by diaphragm seal leakage, which makes the dimensions of passage vortex, reverse vortex, and corner vortex become very small and difficult to be observed. The streamlines at leading edge of R1 are separated at saddle point. Because there is no strong lateral pressure gradient, the horseshoe vortex on pressure surface has not developed into a strong passage vortex. which increases and strengthens the secondary flow area near R1 hub. The secondary flow near S2 casing is due to a large amount of R1 tip seal leakage flows into S2 flow-path, making the upper flow of S2 flow-path uneven, and at the same time, the incoming flow has a large negative attack angle. After it hits the blade surface, pressure stagnation occurs, forming two horseshoe vortexes on the pressure and suction surfaces. At the same time, it forms a flow separation area on the pressure surface at top of S2 flowpath, and the boundary layer thickens, as shown in Fig.11. 
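The off-design boundary conditions above are said to be obtained from the "Friuli Greig formula", which reads like a transliteration of the Flügel (ellipse) law for turbine stage groups. The sketch below assumes that interpretation; the function name, the specific textbook form, and the perturbed numbers are illustrative and not taken from the paper.

```python
import math

def fluegel_mass_flow(m0, p_in0, p_out0, T_in0, p_in1, p_out1, T_in1):
    """Textbook Fluegel (ellipse) law for a turbine stage group: scales a
    known design mass flow m0 to off-design inlet/outlet pressures and
    inlet temperature.  Sketch only; not the exact relation used in the paper."""
    pressure_ratio = (p_in1**2 - p_out1**2) / (p_in0**2 - p_out0**2)
    return m0 * math.sqrt(pressure_ratio) * math.sqrt(T_in0 / T_in1)

# Illustrative call using the design-point values quoted in the text
# (3.2 MPa inlet total pressure, 2.6 MPa outlet static pressure, 800 K);
# the perturbed off-design values are hypothetical.
m1 = fluegel_mass_flow(2.735, 3.2e6, 2.6e6, 800.0, 3.1e6, 2.6e6, 800.0)
```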
As shown in Fig.12, due to the interference of a large amount of diaphragm seal leakage, the absolute value of the negative attack angle of R1 is abnormally increased, and the steam inlet angle is close to perpendicular to the axial chord. Under such conditions, a large area of localized flow separation occurs on the R1 pressure surface. Obvious lateral secondary flow can be seen near the end-wall. The pressure-surface horseshoe vortex flows directly from the leading edge to the suction surface to form a passage vortex, and the suction-surface horseshoe vortex develops into a reverse vortex at the root of the R1 suction surface. Fig.13 and Fig.14 show the static pressure distribution on the surfaces of R1 and S2. Over-expansion appears at the inlet of the R1 pressure surface because of the large negative attack angle of the incoming flow. The absolute value of the negative attack angle decreases along the blade height, and the over-expansion is correspondingly weakened. Deceleration and pressurization appear at the inlet of the S2 suction surface. The pressurization effect is most significant at the blade tip of the suction surface, and no steam over-expansion is observed at the root of the suction surface. Comparison of flow efficiency Under different seal clearances, the pressure and temperature parameters of the inlet and outlet are shown in Table 1. When the seal clearance is 0, the total flow of the cascade is 2.735 kg/s, the leakage flow of each seal is 0, and the calculated stage efficiency is 88.27%. When the seal clearance is 1 mm, the total inlet flow is also 2.735 kg/s, with a DS1 leakage flow of 0.049 kg/s, a TS1 leakage flow of 0.015 kg/s and a DS2 leakage flow of 0.048 kg/s, so the relative internal efficiency of the stage is 86.43%. When the seal clearance is 4 mm, the total inlet flow is 2.735 kg/s, of which the DS1 leakage flow is 0.221 kg/s, the TS1 leakage flow is 0.057 kg/s and the DS2 leakage flow is 0.218 kg/s, and the relative internal efficiency is 75.79%. Discussion and analysis The losses caused by seal leakage mainly include the following. First, the leakage either does not expand in the stator blade or bypasses the rotor tip seal, so it cannot do work in the rotor blade flow-path. Second, the strongly damped leakage flow has a much larger entropy than the fluid in the flow-path and carries vortices. Finally, the leakage enters the cascade and mixes with the mainstream, and this irreversible mixing inevitably leads to a further increase in entropy. Studies have shown that the entropy increase caused by mixing has the same order of magnitude as the entropy increase of the leakage within the seals [2]. It follows that leakage inevitably reduces the steam flow through the cascade and increases the flow loss. In this paper, when the seal clearance is 0, there is only circumferential flow in the seals and the leakage loss is 0. The internal losses are mainly blade profile loss and secondary flow loss. Since there is no leakage interference, the streamlines in the flow-path are basically along the main flow direction, the secondary flow loss is also small, and the stage efficiency is the highest. When the seal clearance is 1 mm, the stage efficiency drops by about 2 percentage points due to the leakage. When the seal clearance is 4 mm, the amount of leakage increases sharply. This leaked steam cannot do work in the stage, and at the same time it interferes with the thermal conversion process of the mainstream, resulting in large-scale passage vortices and reverse vortices in the flow-path.
The negative attack angle of the rotor incoming flow and the secondary flow near the end-wall increase significantly, and flow separation appears on the suction surfaces of the rotor blade and the next-stage stator blade. A large amount of the available energy is dissipated as heat, and the flow efficiency drops by about 13 percentage points. Conclusion When the seal clearance is 0, there is no leakage loss, the secondary flow in the cascade is weak, and the stage efficiency is the highest. When the seal clearance is 1 mm, the diaphragm seal leakage strongly interferes with the flow in the rotor blade flow-path, while the rotor tip seal leakage has little effect on the next stage; the stage efficiency drops by about 2 percentage points. When the seal clearance is 4 mm, the secondary flow near the rotor blade hub and the casing of the next stator blade is significantly enhanced, the attack-angle loss at the inlet of the rotor blade and the next stator blade increases, and local flow separation occurs on the suction surfaces of the rotor blade and the next stator blade; the stage efficiency drops by about 13 percentage points. Overall, the effect of diaphragm seal leakage is greater than that of rotor tip seal leakage.
Iris: Interactive all‐in‐one graphical validation of 3D protein model iterations Abstract Iris validation is a Python package created to represent comprehensive per‐residue validation metrics for entire protein chains in a compact, readable and interactive view. These metrics can either be calculated by Iris, or by a third‐party program such as MolProbity. We show that those parts of a protein model requiring attention may generate ripples across the metrics on the diagram, immediately catching the modeler's attention. Iris can run as a standalone tool, or be plugged into existing structural biology software to display per‐chain model quality at a glance, with a particular emphasis on evaluating incremental changes resulting from the iterative nature of model building and refinement. Finally, the integration of Iris into the CCP4i2 graphical user interface is provided as a showcase of its pluggable design. | INTRODUCTION Macromolecular structure determination primarily involves building an atomic model that best fits experimentally-observed data from practical methods, such as X-ray crystallography (MX) and electron cryomicroscopy (cryo-EM). At every step of the structure solution pipeline loom unavoidable uncertainties, from the experimental errors introduced in the early stages, to the subjective decisions made during model building. The intertwined steps of refinement and validation at the end of the process play a crucial role in mitigating against this. Refinement and validation are performed with the help of validation metrics, which provide information about various aspects of the atomic model. They may pertain just to small sections of the model (local criteria) or to the model as a whole (global criteria). The calculation of validation metrics may require only a model, or a model and experimental data (reflection data in the case of MX). Model-only metrics inform about aspects such as the geometric plausibility of the atomic model as a standalone entity, covering deviations from ideal bond lengths, angles, planes or dihedrals. These geometric analyses result in the detection of outliers: arrangements of atoms that are rare and deemed unlikely to occur, which are either the result of an improbable but true feature of the protein structure, or an error in the protein model. The only way to distinguish between these two possibilities is to compare the atomic model to the experimentally derived electron density, to assess the likelihood that a particular set of atoms is modeled correctly, given the data. Judgments like these are typically made by manually reviewing the questionable area in molecular modeling packages like Coot 1 or CCP4MG, 2 but can also be helped by local reflections-based metrics, which take the experimental data into account, such as the Debye-Waller factor (B-factor) and measures of electron density fit quality. The most commonly used measures of local fit quality are the real space R and real space correlation coefficient, both of which have been demonstrated to show individual biases in assessing the accuracy of a model. 3 Today, these validation metrics can be produced either by validation-specific software within software suites like CCP4 4 and PHENIX, 5 independent web services, or by options and plugins in molecular modeling packages. The number of different routes for model validation has exploded in recent years, having developed from nothing just a few decades ago. 
Indeed, there is an ever increasing demand for new validation metrics and better refinement procedures, 6 one most certainly fuelled by periodic realizations that the models in the Protein Data Bank (PDB) are not always perfect. [7][8][9][10][11][12] In the early days of macromolecular crystallography, refinement was an impossibility; the necessary computational power was simply not available. It was not until 1971 that Robert Diamond published the first automated least-squares refinement algorithm. 13 At this stage, the only available validation metrics were global indicators, including resolution and R-factor. The introduction of restraints and constraints to the least-squares refinement process in software-both in small molecule 14,15 and macromolecular 16,17 crystallography-brought a significant leap forward by reducing the size of the least-squares matrix most programs used for their minimization calculations, thus reducing the computational requirement of model refinement. These restraints-ideal bond lengths, angles, planarities and sometimes also torsions angles-did more than just keep the whole process stable; they were about to become very useful metrics to flag up geometric distortions in a protein model. Those distortions may either be a consequence of modeling errors, thus the model should be inspected and corrected, or the product of genuine chemical interactions, meaning the model should be inspected and respected. As further developments were made, and the amount of computing power available to crystallographers increased exponentially, so too did the amount of available software for macromolecular structure determination. The 1990s saw the inception of the first validation software suite, PROCHECK, 18 which produced a number of summary outputs, including a page containing residue-by-residue plots of stereochemical analyses. Though basic, these local analyses proved exceptionally useful to users, providing immediate direction toward areas of the model that were likely to be in need of further refinement or review. Similarly, the WHAT IF 19 check report, WHAT_CHECK, 11 performed an array of geometric validation calculations, including some analyses that were not available in PROCHECK, for example, unsatisfied donors and acceptors, and suggested side-chain flips. 20 In 2004, Coot 1 took interactive output one step forward from that of O, 21 adding scrollable self-updating charts as the result of its comprehensive array of integrated validation tools. These included residue-by-residue geometric and reflections-based analyses in the form of pop-up interfaces. Many of these analyses were based on the Clipper C++ libraries. 22 MolProbity, 23 which produces high-quality geometric analyses of protein models using their proprietary hydrogen-placement and all-atom contact analysis, quickly became and still is one of the most ubiquitous pieces of validation software today. MolProbity defines itself as a "structure-validation web service," and in addition to the web-based MolProbity servers that produce geometry-based validation metrics reports, the MolProbity libraries are also found in suites like CCP4 and PHENIX. In these implementations, the MolProbity server is run locally to calculate the metrics on the back-end, which are then used by the package-integrated validation software to generate a report to be shown to the user. 
PHENIX's Polygon 24 provided a way to graphically represent any combination of the available validation metrics meaningfully, in a single view, by plotting multiple quality indicators alongside one another from a shared origin. The one-shot view of a model's overall quality, combined with the use of percentiles for context, proved very successful and has since inspired other multi-metric reports (vide infra). In January 2014, the Worldwide Protein Data Bank partnership (wwPDB) introduced the OneDep system, 25 designed in part to provide "preliminary validation reports for depositor review before deposition." This incorporated the well-known summary quality sliders, featured on the summary page for every structure in the PDB, which show a model's percentile rankings for a number of whole-model validation metrics. The full validation report also contains residue sequence plots which flag geometry outliers. Each of the pieces of software mentioned so far has brought something new and valuable to the field (Table 1). But, owing to the differences between them, a typical workflow will often involve running different programs-for example, Coot, then MolProbity or Polygon, and finally the wwPDB validation server-to obtain the desired array of metrics and paint a complete picture of the outcome of refinement. Movement away from manual model building and refinement, and toward an automated iterative process, has been a long-time target in the field. Since the early 1990s, software like O, and the programs working in conjunction with it, like OOPS, 26 have made it possible to automate a significant amount of the building process, requiring reduced user input. The release of the ARP/ wARP 27 software suite, which aimed to produce essentially complete models from electron density maps alone, paved the way for full automation by coupling the model building and refinement processes together. In recent years, this goal has been almost completely realized by software like PHENIX's AutoBuild, 28 which performs many cycles of refinement and rebuilding to automatically produce a relatively complete model. With fully automated systems like AutoBuild, the latest model file can be exported at each refinement iteration, enabling the user to follow the progress of the automated procedure by comparing models from different stages in the overall refinement process. And a useful way of tracking progress is by seeing their validation results side by side. Novel validation software not only need to be able to calculate both model-only and reflections-based analyses at a per-residue level, but also to be compatible with the recent advances in automation, by having the capacity for integration within new and existing pipelines as an automated task with, ideally, minimal run time. Iris is a pluggable standalone validation software designed to address the specific needs described here: to provide an all-in-one package that calculates its own perresidue validation metrics-but also allows the incorporation of metrics from other validation services such as MolProbity-and displays them in a compact, interactive graphical interface that enables at-a-glance comparison between stages of automated model building, and finally, that runs quickly enough to be used either interactively or at the end of a pipeline with imperceptible time penalty. 
In the present work, we will discuss the rationale behind the design of Iris's graphics, how its metrics compare to those calculated by other programs such as MolProbity and Coot, and introduce, as an example, the implementation of our component into the CCP4i2 29 graphical user interface. | Component design Python was the language of choice for the Iris validation package. Increasingly prevalent in the field, the Python interpreter is a component of all the major crystallographic software packages. Python code is naturally easy to read and write, and because the Iris code was written specifically with maintainability and customizability in mind, it is especially easy for anyone who wants to use the package to edit the source code for their needs. The built-in metrics calculations are based on the fast Clipper 22 libraries, thanks to the Clipper-Python C++ bindings. 30 The Iris module also hooks C++ functions from libraries like NumPy 31 and the Computational Crystallography Toolbox (CCTBX), 32 providing the computational efficiency of strongly typed C++ code, combined with the simplicity of a scripting language like Python. Despite reaching its official end-of-life date on January 1, 2020, Python 2 is still the only version available in some crystallographic packages, and such is the case of the CCP4 suite. Consequently, Iris was written to be compatible with both Python 2 and Python 3. Note: All the programs mentioned have longer run times than Iris, which are exacerbated in some cases by simple, but mandated, manual input. Coot performs all the desired analyses, but provides them in individual horizontally-scrolled bar charts, rather than an all-in-one graphic. Similarly, MolProbity, which performs excellent per-residue geometric (but not reflections-based) analyses, provides its output as a vertically-scrolled table. Polygon and wwPDB both provide an all-in-one overview for a model, but not one with residue-by-residue analyses. The Iris validation package has two major components: the metrics module, responsible for the back-end validation analyses, and the interface module, which generates the front-end user-interface. | Metrics The validation metrics chosen for the Iris metrics generation module are those that are most commonly selected in a typical workflow. Based on the class cascade from the Clipper MiniMol library ( Figure S1), the metrics were implemented with the goal of producing the most accurate results possible with minimal run time. The core analyses can be broken down into three categories: B-factors, geometry, and electron density fit. B-factor analyses are performed by taking the values directly from the Clipper MiniMol object. The B-factor for each atom is listed within the model (coordinates) file, and is loaded as an attribute of each Clipper MAtom object upon initialisation. For each residue, the metrics module calculates the minimum, maximum, mean, and SD of the B-factor values of its constituent atoms. The geometry calculations in Iris include bond length and torsion angles, which are used to analyze backbone conformation (Ramachandran likelihood) and side-chain conformation (rotamer likelihood). The bonds geometries themselves are calculated with simple matrix calculations using atom coordinate data from the MAtom objects. To produce meaningful validation metrics, these bond angles are turned into both a continuous probability score and a discrete classification (favored, allowed, or outlier), using reference data. 
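As a concrete illustration of the per-residue B-factor statistics described above, the aggregation amounts to the following. This is a sketch only, not the actual Iris implementation; the attribute name `b_factor` and the choice of population standard deviation are assumptions.

```python
import statistics

def residue_b_factor_stats(residue_atoms):
    """Minimum, maximum, mean and standard deviation of the atomic B-factors
    within one residue, as described in the text.  `residue_atoms` is assumed
    to be any iterable of objects exposing a `b_factor` attribute."""
    values = [atom.b_factor for atom in residue_atoms]
    return {
        'min': min(values),
        'max': max(values),
        'mean': statistics.mean(values),
        'std': statistics.pstdev(values),
    }
```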
The Richardson lab has published a public repository of reference data for different types of residue geometry, based on thousands of high-resolution, qualityfiltered protein chains, called Top8000. 33 In the case of backbone conformation, the Clipper Ramachandran class already implements the relevant data from the Top8000 database, accessible through a selection of calculators which output a probability value for a pair of backbone torsion (phi, psi) angles, sampled from the relevant Ramachandran distribution. This value is used as Iris' continuous score metric, and is also directly used to produce the discrete classification, by applying the same thresholds as those used in Coot. In the case of side-chain conformation, there is no Clipper class to do the work. To generate a validation metric from the side-chain torsion (chi) angles, the data had to be implemented manually. The Richardson lab's rotamer data is provided for each of the rotameric canonical amino acids in two forms: (a) a multidimensional contour grid that maps out the feasible chi space in discrete intervals, plotting a "rotamericity" value at each point; and (b) a set of "central values," which lists the mean and SD of the bond torsions for each recognized rotamer. 34 The most accurate way to produce a continuous rotamer score would be to implement the raw data from the contour grids with an interpolating lookup function, but this poses two difficulties. The first is that even if the data from these grids are stored in a data structure optimized for multidimensional search like a k-d tree, looking up and interpolating these data for each angle in every residue in a model would elicit an unacceptably long run time. This could be mitigated against by performing the interpolation in pre-processing, or by omitting it entirely, given that the contour grids are already fairly high resolution. But, the second and more significant difficulty is that these contour grid files are large, totaling 39 megabytes in all. As a consequence, directly loading these data results in a long initial load time and significantly increases the file size of the package; these factors would only be exacerbated if interpolation were implemented in the lookup function. Because of this, the Iris rotamer score is based on the central values data, which have a much smaller footprint. The score is calculated by modeling each of a rotamer's chi dimensions as Gaussian distributions, calculating a z-score in each dimension, and taking the quadratic mean (Equation 1). Equation (1): formula used to calculate a continuous rotamer score from the central values lists. Where i is the index that enumerates the recognized rotamers for a residue, N is the number of chi dimensions applicable to a particular residue, χ n is the nth chi angle of the residue, and μ χ in , σ χ in are the mean and SD of the indexed rotamer, respectively. The rotamer classification, in contrast to the Ramachandran one, is not just calculated by placing thresholds on the continuous score. Instead, a more accurate solution was devised, which involves using a compressed version of the contour grids to achieve a compromise between accuracy and load time (Figure 1). To compress the data, the point values were first converted from the very highprecision floating-point values in the contour grids to a low bit-width integer classification. 
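A minimal sketch of the continuous rotamer score defined in Equation (1) above: each chi dimension of a recognized rotamer is modeled as a Gaussian, a z-score is computed per dimension, and the quadratic (root-mean-square) mean is taken. Aggregating over the recognized rotamers by keeping the closest one, and wrapping angular differences into [-180, 180] degrees, are choices made for this sketch rather than details stated in the text.

```python
import math

def wrap_angle(delta):
    """Wrap an angular difference (degrees) into [-180, 180]."""
    return (delta + 180.0) % 360.0 - 180.0

def rotamer_score(chi_angles, central_values):
    """Continuous rotamer score in the spirit of Equation (1).
    `central_values` is a list of (means, sds) pairs, one per recognized
    rotamer for this residue type; each is a sequence over chi dimensions."""
    best = None
    n = len(chi_angles)
    for means, sds in central_values:
        rms_z = math.sqrt(sum(
            (wrap_angle(chi - mu) / sd) ** 2
            for chi, mu, sd in zip(chi_angles, means, sds)
        ) / n)
        if best is None or rms_z < best:
            best = rms_z   # keep the closest recognized rotamer
    return best
```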
In order to maintain concordance with the MolProbity rotamer analyses, based on the same data, the same thresholds for categorization were applied, that is, let ≤0.3% define "outlier" rotamers, let ≥2.0% define "favored" rotamers, and let values in between define "allowed" rotamers. This reduced the size of the data substantially, but still the most significant factor in the size of the data persisted, which was storing all the coordinates as keys. To maximize the compression, the data for each amino acid were flattened to a unidimensional array of values. This way, the index of each value corresponds to the calculable index of its coordinate in a theoretical ordered array of coordinates (Equation 2). It is only possible to store the data in this way when there is a uniform distance between each point in every dimension, and when there is a value for every possible point in the dimension. In the original data, the latter criterion is not met. To enable flattening, an extra value had to be allocated for "unknown" data points, thus filling every point on the contour grid, and bringing the number of discrete classifications up to four. If these four classifications are treated as integers in the interval [0, 3] each coordinate point only requires a precision of 2-bits, which means four values can be stored per byte of data. Reducing each classification in the flattened arrays to a 2-bit value and compressing the result with gzip leads to a file size of 147 kilobytes for the entire library, a 265x reduction from the original data. Upon initialisation of the module, the library loading function can decompress this file and load the library to memory on the millisecond scale, with similarly fast point recall. The compression process is illustrated in Figure 1. Equation (2): formula used to calculate the relevant index in the compressed rotamer library for a given array of chi angles. Where N is the number of chi dimensions applicable to a particular residue, χ n is the nth chi angle of the residue, Χ n is the regularly spaced array of chi values known in the nth chi dimension for that residue type, thus (Χ n1 − Χ n0 ) represents the width of the spacing in that dimension, and dim(Χ m ) is the number of known points in the mth dimension for that residue type. nint is the nearest-integer rounding function. Electron density fit scores for each residue are calculated by applying methods of the Clipper crystallographic F I G U R E 1 Visualization of the rotamer library compression. The topmost figure shows a contour grid for a hypothetical amino acid with two side-chain torsion angles. Grid points are colored red for "outlier" values, yellow for "allowed" values, green for "favored" values, and gray for "unknown"-where a coordinate is not listed in the original contour grid file. The bottom figure illustrates the compression process: starting with the conversion from floating point to integer data points, followed by the type conversion from dictionary to integer array, which includes the addition of zeros to represent null data points, and lastly the compression of Python integers to two-bit binary values. It should be noted that the original contour grid values are given to a much higher precision than is shown here map (Xmap) class. Firstly, a map is calculated from the list of reflection data using a fast Fourier transform, and is stored in memory. Then, to score the fit of each atom, the map density at its coordinates is used to calculate an atom fit score (Equation 3). 
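The flattening and 2-bit packing described above can be sketched as follows. The row-major ordering and the integer coding of the four classifications (unknown, outlier, allowed, favored) are assumptions made for illustration; Equation (2) specifies only the index arithmetic.

```python
def flat_index(chi_angles, grids):
    """Index into the flattened classification array (cf. Equation (2)).
    `grids[n]` is the regularly spaced array of known chi values in the nth
    dimension for this residue type; row-major ordering is assumed."""
    index = 0
    for n, (chi, grid) in enumerate(zip(chi_angles, grids)):
        step = grid[1] - grid[0]
        nearest = int(round((chi - grid[0]) / step))
        stride = 1
        for later_grid in grids[n + 1:]:
            stride *= len(later_grid)
        index += nearest * stride
    return index

def unpack_classification(packed, index):
    """Recover a 2-bit value stored four-per-byte in `packed` (a bytes object).
    The mapping 0=unknown, 1=outlier, 2=allowed, 3=favored is hypothetical."""
    byte = packed[index // 4]
    return (byte >> (2 * (index % 4))) & 0b11
```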
A residue's fit score is calculated by taking the average of the fit scores of its constituent atoms. Average density fit quality for the backbone and side-chain atoms alone are also calculated. Equation (3): formula used to calculate the density fit score for an individual atom. Where NormCDF is the cumulative density function of the standard Gaussian distribution, ρ atom is the electron density at the coordinate of a particular atom, normalized by its proton number, and μ map , σ map are the mean and SD of the map electron density respectively. The final step in the construction of the metrics module was the generation of a percentiles library. This enables the final report to be able to provide a sense of scale to each of the metric values, which is necessary if metrics are going to be displayed alongside one another in a meaningful way. Some metrics would otherwise have to be presented in arbitrary, incomparable units. To generate the library, the metrics generation functions were run for every structure in the PDB-REDO database, 35,36 in which every model is accompanied by its experimental data. To ensure the library was based on models generated using modern standards, data from structures deposited before 2010 were discarded. The resulting data are based on the analysis of over 66,500 structures and more than 47 million individual residues. The structures analyzed and their respective metrics values were divided into 10 non-uniform resolution bins. For each bin, thresholds were calculated at each integer percentile for all relevant metrics. The percentile calculations were also performed for all the data together, to be applied to models based on data of unknown resolution. The result of these percentile calculations is a set of highly accurate distributions which can be used to normalize the distributions for any of the continuous metrics. | Graphical panel The centerpiece of the Iris report is its graphical panel, which comprises a chain-view display and residue-view display presented alongside one another. Both of these are scalable vector graphics (SVGs) that can respond dynamically to user interaction, handled by JavaScript (JS) functions. This SVG/JS format was chosen because it is natively compatible with all modern browsers, even the most basic. Report-specific SVG and JS code is generated programmatically within the interface module. | Chain-view display The Iris report was designed around its chain-view, which illustrates a number of local validation metrics for every amino acid of a protein chain in a single compact display. The graphic went through a number of designs before reaching its final form (Figure 2). In the finalized design, each segment of the circle represents an individual residue, and each of the concentric ring axes represent a different metric. The idea behind this design is that areas of poor protein structure will often cause fluctuations in multiple validation metrics together, making them especially easy to spot. This design is robust to even extremely low or high residuecounts. Each ring can either represent a continuous metric, with a radial line graph, or a discrete metric, with a traffic light-based (red, amber, green) segment coloring system. By default, the two innermost rings are discrete representations of Ramachandran and rotamer favorability, and the outer four rings are continuous representations of B-factors and electron density fit. 
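A sketch of the per-atom and per-residue density fit scores of Equation (3), inferred from the variables the text lists rather than copied verbatim: the map value at the atom position (already normalized by proton number) is converted to a z-score against the map mean and standard deviation and passed through the standard normal CDF, and the residue score is the mean over its atoms.

```python
import math

def atom_fit_score(rho_atom, mu_map, sigma_map):
    """Per-atom density fit score (cf. Equation (3)).  `rho_atom` is the map
    density at the atom position, normalized by the atom's proton number."""
    z = (rho_atom - mu_map) / sigma_map
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

def residue_fit_score(atom_scores):
    """Per-residue fit score: the mean of the constituent atoms' scores."""
    return sum(atom_scores) / float(len(atom_scores))
```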
The axis arrangement can be configured to produce any combination of metrics by editing the definitions file in the root directory of the package. The continuous axes are individually scaled to collectively emphasize the regions with the poorest validation scores relative either to the rest of the chain or the model, depending on the user's settings. To enable this, the polarity of each axis is set such that the inward direction on the axis represents poorer values for that metric. For example, a higher B-factor represents a less-desirable value, so the polarity of the B-factor axes is inverted, causing areas of higher B-factor values to appear as troughs, facing toward the center of the plot. Once the polarities have been unified, the axes are skewed to stress the areas on the inward-facing side of each. The chain-view has the ability to show and compare two different versions of the same model, an extremely useful feature in the era of automated iterative refinement pipelines like AutoBuild. If Iris is supplied with model data from a prior iteration in addition to the latest, both datasets will be analyzed together, and the collation functions of the report submodule will align the chains and sequences of the two versions using pairwise alignment. This way, the results from both versions can be presented in the same graphic, even if changes have been made to the chains' arrangements or their amino acid sequences. Originally, the two versions were going to be shown concurrently, with the previous dataset represented by a gray shaded "ghosting" area around each axis. Testing showed that this would often make the graphic look too crowded, and some areas would require close examination to understand the changes that had taken place between iterations. In the final design, the different datasets are transitioned between with a toggle switch at the top of the report pane, which triggers an animation that warps between the two model versions, far more intuitively highlighting the areas of greatest change. F I G U R E 2 The evolution (top) and final design (bottom) of the Iris chain-view display. In its first iterations, based on existing residue-by-residue displays, the chain-view was a radial bar chart, with multiple bars stacked on top of one another within a segment. The problem with these initial designs was that in chains with a high number of residues, the chart would become unclear. The third image of the second generation of iterations shows the original "ghosting" implementation. The bottom picture shows an instance of the final design, produced using synthetic data. At the one o'clock position is the residue selector, highlighting an individual residue segment. The patch of 10 residues at the two o'clock position illustrates the indicators of "poor" residues for each feature of the chain-view graphic. The discrete axes show amber or red segments, the continuous axes show an exaggerated dip toward the center, and MolProbity clash indicators appear around the edge The chain-view is highly extensible, and can be easily adapted to include any metrics added to the metrics module, as well as data from other validation tools. If MolProbity analyses are run alongside Iris, clash markers from MolProbity's all atom contacts analysis will be displayed around the edge of the circle, and the more accurate MolProbity Ramachandran and rotamer outliers will be shown (See CCP4i2 implementation). 
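The polarity unification and inward-emphasis scaling described above can be illustrated with a small sketch; the normalization to [0, 1] and the power-law skew are illustrative choices, not the exact transform Iris applies.

```python
def normalise_axis(values, higher_is_worse, emphasis=2.0):
    """Map a per-residue metric series onto [0, 1] for one chain-view ring.
    Polarity is unified so that smaller plotted values (troughs, facing
    inward) always mean poorer quality; the series is then skewed to
    exaggerate the poor end.  `emphasis` > 1 deepens the troughs."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    scaled = [(v - lo) / span for v in values]
    if higher_is_worse:              # e.g. B-factor: higher values are worse
        scaled = [1.0 - s for s in scaled]
    return [s ** emphasis for s in scaled]
```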
The residue-selector arm of the chain-view is used to select any individual residue for more detailed information, to be shown in the residue-view. | Residue-view display The default residue-view (Figure 3, left) has a grid-based layout which has one section dedicated to discrete metrics, illustrated with traffic light checkboxes, and another section containing bar charts to represent the percentile values for the continuous metrics, which are the B-factors and density fit scores. The bar charts also contain individual spectra, which show the minimum, maximum, mean, and SD of each of the continuous metrics within the selected chain and model. This way, you get a comprehensive understanding of the quality of a residue, and a chain's distribution of residues, at a glance. The percentile value tells you the quality of the selected residue relative to all other residues of similar-resolution structures, the position of the marker within each bar's spectrum tells you the metric quality of the selected residue relative to the other residues in the chain, and the distribution of each bar's spectrum tells you the overall quality of the selected chain relative to other similar-resolution structures. A radar chart is also available (Figure 3, right), which displays all of the metrics on a continuous percentile scale, including Ramachandran probability and the aforementioned rotamer score. | Reports The Iris validation report is a single HTML file, with the chain-view and residue-view implemented as integrated SVG elements. Other linked files include either one or F I G U R E 3 The Iris residue-view displays: default layout (left) and radar chart (right). In the default layout, the top section contains the discrete metrics, and the bottom section contains the continuous metrics. Of the three dashed lines on each bar, the middle line represents the mean percentile for the selected chain, and the other two lines represent one SD from the mean in each direction. The top and bottom of each bar represent the minimum and maximum percentile for the selected chain. The radar chart option shows all the metrics as continuous scores, on the percentile scale. In this chart, the color of the circle corresponding to each metric represents the position of that particular value within its distribution. Hovering over any of these circles produces a pop-up bubble containing both the absolute and percentile values for a particular metric. The shape of the chart is updated automatically based on the number of metrics selected two stylesheet (CSS) files, and the JS files responsible for chart interactivity. Iris will produce one of two types of report: a full validation report, containing both the graphical panel and further sections of additional validation data, or a minimized report, containing just the graphical panel. The full report provides the ability to test and customize Iris as a standalone entity, or to implement it as an addition to a bespoke Python-based model pipeline. The intended purpose of the minimized report is to facilitate the integration of Iris within new or existing software suites, to be rendered by the package's own integrated browser, either on its own or via insertion into another HTML page. The simplest way to do this is by using an iframe, to maintain CSS separation. CCP4i2 29 is the latest version of the graphical user interface for the CCP4 suite, and will be the first package to feature Iris as an integrated validation plugin. 
The native CCP4i2 validation routine is the Multimetric Model Validation task, which has been renovated with a new structure and design (Figure 4) to feature the Iris graphical panel. Despite the widely supported nature of the Iris SVG/JS format, the CCP4i2 browser originally failed to parse the native Iris JS code, as some of the keywords were unsupported. It was also unable to render the SVG graphics within an iframe. The integrated browsers found in many other crystallographic software packages are similarly outdated. So, a backwards-compatibility mode was added in which the modern keywords in the Iris JS code are replaced with archaic, but well supported, ones, and the CSS is modified such that the Iris HTML can be inserted directly into an existing HTML document without causing any style conflicts. This mode is what is used within the CCP4i2 implementation of Iris. At the bottom of the CCP4i2 validation task is a button that launches the Coot software with a "guided tour of issues" raised in the validation report. Because Coot and Iris both use the same data and thresholds for the detection of Ramachandran and rotamer outliers, the outliers flagged in the CCP4i2 validation report directly correspond to those that would be detected in Coot, making for a seamless transition from the CCP4i2 validation report to the Coot guided tour. | Package overview The following is an example of the most basic way to generate a standalone Iris report, by importing the generate_report function from the top of the package. The process triggered by calling this function is illustrated in Figure S2.

    from iris_validation import generate_report
    generate_report(latest_model_path='latest.pdb',
                    previous_model_path='previous.pdb',
                    latest_reflections_path='latest.mtz',
                    previous_reflections_path='previous.mtz',
                    output_dir='Iris_output/',
                    mode='full')

| RESULTS AND DISCUSSION | Metric quality tests Ramachandran and rotamer classifications were tested against MolProbity (Figure 5). Because the thresholds applied for rotamer classification are the same as those applied by MolProbity, but those applied for Ramachandran classification were those used by Coot, the rotamer classifications show higher agreement with MolProbity than the Ramachandran classifications within the outlier and allowed categories. The Iris Ramachandran classifications will be in complete agreement with those from Coot. | User interface tests To showcase Iris's functionality, example reports were generated for a number of structures using models from the PDB-REDO database. In these tests, the PDB-REDO refined models were used as the "latest" inputs and the originally deposited models were used as the "previous" inputs. | Analyzing the structure of a beta-galactosidase mutant (PDB code 3VD3) This structure 37 was chosen due to its high residue count and the fact that the resolution of the experimental data is not high. Looking first at the chain-view display (Figure 6), the outer axes reveal two troughs around the eight o'clock position in all the continuous metrics, which correspond to the low end of the distributions shown on the residue-view spectra. The first is from B/681-689, in which the selected residue lies, and the second is from B/727-737. These areas both correspond to random coils on the very outside of the molecule, regions which often have quite low fit quality. The two innermost rings reveal some geometry outliers, though not an alarming number given the resolution of 2.80 Å.
Turning to the residue-view display, the spectra on the bar charts show that this chain has quite high-quality distributions of the continuous metrics, with a high mean and low standard deviation in both. However, both distributions have quite low minima, indicating a small number of residues with particularly high B-factors and poor fit quality. These are likely to correspond to the two troughs seen on the chain-view display, including the selected residue, which has poor continuous metrics relative to the chain's distribution and to other models in the percentile bin. Finally, the residue's Ramachandran and rotamer conformations are both in the "allowed" category, indicating unusual conformations for both the backbone and the side chain. The model visualization shows the random coil corresponding to the first trough on the chain-view display (B/681-689). Here, the relevant residues are shown in a ball-and-stick view with each atom colored by B-factor. The density has been contoured at 1σ and clipped around these residues.

| Automated model building: Watching progress and identifying regions for manual intervention

This case was taken from the "rnase" tutorial that comes bundled with CCP4i2. After phasing the data by molecular replacement with Phaser 38 using PDB code 3A5E 39 as the search model, the Autobuild Protein pipeline, which runs alternating iterations of Buccaneer 40 and REFMAC5, 41 was launched, and the coordinates resulting from the first and last iterations of the pipeline were extracted. The results obtained by Iris on these two structures can be seen in Figure 7. The sequence of the first-iteration model is three residues shorter than that of the final iteration, because Buccaneer was unable to model this section. Pairwise alignment enables Iris to determine that the missing residues are the final three of the chain, which is represented on the chain-view by the black spots in the segments around the edge of the ring. Looking first at the inner two rings, it is evident that the Ramachandran and rotamer torsion angles improved significantly, with non-favored Ramachandran residues decreasing from nine to three, and non-favored rotamers decreasing from fourteen to nine. On the outer four rings, changes are more subtle and much easier to make out with the live animation (please refer to our Supporting Information Video S1). At around the seven and eight o'clock positions, there is some improvement in both B-factor and density fit quality. Because Iris emphasizes the poorer areas of the chain, this change appears very slight. More noticeable are the areas of poor quality that developed between the two versions; for example, a trough developed in all four rings at the third residue, where fit quality and B-factor were apparently sacrificed in order to swap the rotamer for a more favorable conformation.

FIGURE 5: Confusion matrices for Ramachandran (left) and rotamer (right) classification agreements between Iris and MolProbity. Figures in brackets are the number of residues. Percentages are given as a proportion of the sum of each Iris classification.

In the case of rotamer classifications, discrepancies between MolProbity and Iris arise from the different formats of the reference data: MolProbity has access to the entire original dataset, allowing very accurate interpolation for each case, whereas the compression Iris uses to store the reference data yields slightly less precise classifications, especially at the interfaces between classifications (i.e., borderline cases). Discrepancies in the Ramachandran classifications are partly due to the differing interpolation methods applied by MolProbity and Clipper, but more significantly to the fact that the thresholds are arbitrary; those selected for Iris are the ones used in Coot, to facilitate the transition between an Iris report and the Coot validation tools. These are not necessarily the same as those used by MolProbity.

| Timings

A random selection of 20,000 models from PDB-REDO was run through three different versions of the CCP4i2 validation task: (a) the old version of the task, with MolProbity analyses enabled; (b) the new Iris-implemented version of the task, with MolProbity analyses enabled; (c) the new Iris-implemented version of the task, with MolProbity analyses disabled. It is important to note that the old version of the task only analyses the originally deposited model, whereas the Iris-implemented task analyses both the original and optimized versions together. The results are shown in Figure 8. Timings were calculated on an Intel i9-9900K (eight cores, sixteen threads) at stock frequency, with eight CCP4i2 instances running in parallel. Because each instance of CCP4i2 can have up to three intensive processes running at once (one main thread plus two MolProbity threads), the processor thread count will have led to bottlenecking at times. The implementation of the Python 2 multiprocessing module in the CCP4i2 task is not supported under Windows. Hence, when the CCP4i2 validation task is run under Windows, MolProbity analyses have to be run sequentially on the main thread, without parallelization, just as in the old task. Because of this, if two models are provided and MolProbity is enabled, validation may take significantly longer than it otherwise would on a Unix-based operating system such as Linux or macOS. Forthcoming updates to the CCP4 package introducing Python 3 will solve this issue in the near future.

FIGURE 6: Example Iris report for structure 3VD3 (top) and accompanying model visualization (bottom). The screenshot shows an Iris report for 3VD3, with chain B, residue 684 selected. The version slider is in the "previous" position, corresponding in this case to the originally deposited model, before refinement by PDB-REDO. The selected chain comprises more than 1,000 residues, demonstrating the robustness of the design to high residue counts. For the bottom panel, the model has been colored by B-factor (blue for low values, red and then white for high relative values) to highlight the mobility of this region. The map shows 2mFo-DFc density contoured at 1σ; the fact that the map does not cover all the residues at this level hints at the region's mobility and/or disorder.

| CONCLUSIONS AND FUTURE WORK

Our main aim at this stage was to demonstrate the benefits of using an interactive multi-metric per-residue display; the fact that problematic regions in a model create ripples across the different metrics helps spot those parts of a model requiring further attention. In the near future, MolProbity analyses will be implemented directly within the Iris metrics module using the CCTBX Python package.
This way, the user will be able to choose either Iris or MolProbity analyses when using Iris in any context, including as a standalone solution, rather than having to use an implementation of Iris that makes MolProbity analyses available, such as CCP4i2. The Iris metrics module cascade is an ideal candidate for multithreading: residue analyses could be parallelized for a significant reduction in run time. Unfortunately, Python 2 requires anything returned from a multiprocessing worker to be serializable with Python's built-in serialization module, which makes this infeasible at the moment. In the longer term, the Iris code will be restructured to realize this goal. The calculation used for the electron density fit score is quite simplified for the sake of reducing computational cost. If other optimizations are made, such as multithreading, then more processor time can be spent on more comprehensive density fit calculations. Alternatively, we could incorporate the ability to parse output from programs like EDSTATS, 3 which is bundled with the CCP4 suite. Owing to its modularity and portability, we expect to make Iris available to a number of structural biology programs, including CCP4mg, 2 Coot 1 and ChimeraX. 42 These graphical programs will also provide a 3D view that can be centered upon clicking on individual residues in the Iris report. The mechanism we envision for this task has already been tested in the implementation of Glycoblocks, 43 which connected the Privateer 44 carbohydrate validation software and CCP4mg through hyperlinks embedded in SVGs. The most pressing development, however, is to expand the number of metrics that can be generated by the Iris metrics module. As with the compressed Iris rotamer library, a compressed library for CaBLAM 45,46 could be generated and integrated using the Richardson group's data. 33 CaBLAM C-alpha evaluation can be useful in the refinement and validation of models derived from cryo-EM data, which are becoming increasingly prevalent. Support for such data will be added soon, through application of the Clipper NXmap class. Other traditional metrics to be included will cover planarity and chirality favorability. However, all the aforementioned metrics are easily targeted by restrained refinement, potentially devaluing them as validation criteria. A longer, more challenging project will involve the introduction of a new set of validation metrics that remain as separate from the refinement process as possible, opening the door to a truly independent evaluation of the quality of a protein model.

FIGURE 7: Example chain-views for the first (left) and last (right) iterations of a model from the Autobuild Protein pipeline (CCP4i2). The pink shaded area on the left Iris plot illustrates an area of missing residues not modeled by Buccaneer.

| REPRODUCIBILITY AND AVAILABILITY

Iris validation is available from GitHub (https://github.com/wrochira/iris-validation) and, soon, as a regular Python package installable with the pip install iris-validation command. A forthcoming CCP4 update will distribute the component and its CCP4i2 29 interface. The scripts used to generate and test any of the data implemented in the package can be found in the Iris tools companion module within the same repository. Therefore, if any customizations are made to the metrics module, the percentiles library can be regenerated with ease.
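Returning to the parallelization point raised above, the sketch below illustrates, in generic Python 3, how per-chain analyses could be farmed out with the standard multiprocessing module, provided the workers return only picklable objects; this is exactly the constraint that blocks the approach under the Python 2 setup described here. This is not the Iris implementation: the analyse_chain function and its return values are hypothetical placeholders.

from multiprocessing import Pool

def analyse_chain(chain_id):
    # Placeholder for a per-chain metric calculation; in practice this would
    # call into the metrics module and must return only picklable objects
    # (plain dicts/lists of numbers), not live model objects.
    return {'chain': chain_id, 'mean_b': 0.0, 'outliers': []}

if __name__ == '__main__':
    chain_ids = ['A', 'B', 'C', 'D']
    with Pool(processes=4) as pool:      # one worker per chain, up to four
        results = pool.map(analyse_chain, chain_ids)
    print(results)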
FIGURE 8: Boxplots illustrating the distribution of average (n = 5 repeats) times taken to run models (both coordinates and reflection data) through different versions of the CCP4i2 Multimetric Validation task. The median value of each distribution is labeled.
9,833.6
2020-09-23T00:00:00.000
[ "Computer Science", "Biology" ]
Poverty Reduction Policies in Malaysia: Trends, Strategies and Challenges

Malaysia is a multi-ethnic, multi-religious country with a population of 28.5 million, characterised mainly by three ethnic groups: Malays and indigenous peoples, Chinese, and Indians. Since independence in 1957, Malaysia has successfully transformed itself from a poor country into a middle-income nation. The Malaysian economy has seen sustained growth despite challenging external factors, and the country can claim considerable success in combating poverty. Despite this success, there remains a vulnerable group of people in the country experiencing poverty for geographical and societal reasons. This concept paper has several objectives: a brief description of the nature of poverty in the country, an overview of its poverty reduction policies and programs, and an analysis of the challenges, together with recommendations for sustainable poverty reduction in Malaysia.

Introduction

Defining poverty conceptually is easier than defining it operationally. Poverty is perceived as an amalgamation of various aspects that goes beyond a lack of income and is not confined to a single-faceted phenomenon. The term poverty covers a range of adverse social and psychological repercussions, namely domestic violence, crime, perceived inadequacy of social investments, problems in the expansion of human capital, unfair service delivery and weak political participation. Hence, the definition of poverty is ultimately country specific. Internationally, poverty is commonly measured against the World Bank's "dollar-a-day" income threshold. However, for country-specific purposes it is standard recommended practice to use national poverty lines where they exist, and most countries adopted this practice in the 2005 Millennium Development Goal report (United Nations, 2011). Malaysia developed its own poverty line in the 1970s, when the government's national policy gave high priority to poverty eradication. The government based this poverty line on assessments of the minimum consumption requirements of an average-sized household for food, shelter, clothing and other non-food needs.

The Gross Domestic Product (GDP) of Malaysia stood at US$156.53 billion in 2007 and US$278.7 billion in 2011, with GDP growth of 5.7% and 4.7% respectively. GDP per capita was last reported at US$7,760 in 2007 and US$5,364.5 in 2011, while GDP (PPP) per capita was reported at US$13,740.93 in 2007 and US$14,730.93 in 2011. The unemployment rate in Malaysia was 3.2% in 2007 and 3% in 2011 (Department of Statistics Malaysia, 2011).
Definition of Poverty in Malaysia

Adjustments were made to the poverty line in its earliest form for differences in mean household size and cost of living among the three main regions of Malaysia: Peninsular Malaysia, Sabah and Sarawak. No adjustments were made for rural or urban location. This resulted in three regional poverty lines (besides the national one). These poverty lines, with adjustments for inflation and changing mean household sizes, were in use from their adoption in 1976 until 2004. Although the poverty line was defined by consumption, poverty status was determined with reference to gross household income rather than expenditure. Thus, households with income below the poverty line were defined as living in poverty, and those with incomes below half the poverty line as living in "hard-core" or extreme poverty. In 2004, the poverty line was revised. The revised poverty line is now defined for each household and averaged for each state and for rural and urban locations, taking into account relative costs of living and household composition and size. The new poverty line also defines extreme deprivation, or hard-core poverty, as households with incomes below their food poverty line, that is, households unable to meet their minimum food needs. In 2009, the mean national poverty line translated to an unadjusted RM6.50 per capita a day (equivalent to US$3.00 a day, PPP).

Currently there are revised and separate Poverty Line Incomes (PLI) for each state in the country. The revised version takes into account different household sizes and a separate classification based on urban and rural areas. Basic characteristics of each household are considered when measuring the PLI, including the number of occupants, locality and demographic aspects. To characterise the poverty line income, the minimum expenditure essential to lead a reasonable life is taken into consideration, and the Consumer Price Index (CPI) is used to update the PLI every year. To reflect the disparity in cost of living and household size between Peninsular Malaysia and Sabah and Sarawak, two different PLIs were adopted. Tables 1 and 2 show that, under the 9th Malaysia Plan, the PLI was set at RM763 (US$254) per month for Peninsular Malaysia (mean household size 4.1) and RM912 (US$304) per month for Sabah and Sarawak (mean household size 4.9). Half of the PLI was set as the absolute hardcore poverty line (Department of Statistics Malaysia, 2010).

Note 1: Refers to households with mean monthly gross income below their mean PLI.
Note 2: Due to varying household sizes, the per capita PLI will be used by implementing agencies to identify the target groups.

Malaysia has been remarkably successful in combating poverty. The poverty rate, 49.3% in 1970, was reduced to 8.1% in 1999 and to 5.5% in 2000. The strategy employed combined targeted poverty reduction programs with fast economic growth and continuous improvement of the broader economy (Department of Statistics Malaysia, 2011). Hardcore poverty was reduced from 1.2% in 2004 to 0.7% in 2009, and the incidence of overall poverty fell from 5.7% in 2004 to 3.8% in 2009. The overall poverty rate in Malaysia now stands at 3.7% (Department of Statistics Malaysia, 2011).
Table 3 shows that a disparity in the incidence of poverty (IOP) was observed between these regions. The IOP in Peninsular Malaysia was somewhat lower than in Sabah and Sarawak, where it stood at 58.3% and 56.5% respectively in 1976. This difference can be read as a positive outcome of the rapid development initiatives implemented in Peninsular Malaysia. Nevertheless, programs to combat poverty have been executed vigorously in Sabah and Sarawak: the IOP of 51.2% and 51.7% reported in 1976 was reduced to 16.0% and 5.8% respectively by 2002.

The New Economic Policy and the National Development Plan have emphasized only the Bumiputera (Sons of the Soil) communities in Sabah. Part of the problem lies with the tendency of policy makers to treat the Bumiputera as homogeneous, resulting in government policies that are broad-based rather than targeted. Consequently, the programs did not have the same poverty reduction impact on all Bumiputera groups irrespective of their ethnic background. This effect is most evident in official statistics, where the economically disadvantaged Bumiputera communities are classified as Bumiputera together with the more economically advanced Malays. Past government policies aimed at eradicating poverty and restructuring employment and equity have produced limited impact on these Bumiputera communities. The development of human capital in Sabah is still lacking, as seen in the large untrained population, especially in the rural districts, and good governance is much needed.

Despite a country-wide steady decline in the IOP, poverty declined more slowly and remained comparatively widespread in Kedah, Kelantan and Perlis. The IOP was particularly intense in Kelantan, which in 1976 recorded the highest rate in Malaysia (67.1%); the disparity persisted, with rates of 10.6% in 2004 and 4.8% in 2009. The occurrence of poverty in the states of Selangor and Wilayah Persekutuan has been comparatively low, reported at below 10.0% since 1984 (Mat Zin, 2011).

Poverty Reduction Programs in Malaysia

Despite the successes in reducing poverty (to less than 4%), vulnerable sections of the population remain in unchanged circumstances owing to several disadvantaging conditions. In the effort to develop a more inclusive approach, a new economic development model is being pursued. Capacity building in Malaysia, in the context of alleviating socio-economic inequalities, is being implemented by expanding the economy while at the same time giving subsidies to the needy. In pursuing inclusiveness, the approach is anchored on two objectives: (i) enabling equitable opportunities for all, and (ii) providing a social safety net for disadvantaged groups. For the second objective, equitable access to health, education and basic infrastructure is emphasized, and mechanisms for targeted income support will be enhanced as general subsidies are phased out. Two features of social policy distinguish Malaysia from other countries: (a) social policies have had an orderly and incremental development, owing to a supportive environment within a lengthy and continuous period of stability, unlike the experience of many developing countries; and (b) a succession of strong governments and a public sector committed to improving the welfare and well-being of all Malaysians.
The evolution of the social policy and welfare regime and its significance for poverty eradication over the 50-year span from 1957 to the present may be analyzed according to four phases, namely 1957-1980, 1981-1997, 1998-2002, and 2003 to the present (see Table 4).

Challenges for Poverty Reduction in Malaysia

Although the country has done a commendable job of eradicating poverty, significant challenges remain in the era of globalization. The following are important issues that need attention as the country faces a new category of poor.

Migrant Workers' Issues

The current development policies of Malaysia are highly influenced by globalisation and liberalization, and this has direct and indirect implications for activities relating to poverty. Contraction in employment opportunities drastically affects the urban poor, the near-poor and migrant workers. A high prevalence of unemployment and retrenchment is also acknowledged by the relevant authorities. There is a huge demand for knowledgeable and skilled human resources in capital-intensive and high-value-added activities as Malaysia restructures its economy. An increased influx of overseas workers also aggravated the incidence of poverty in the aftermath of the 2008 global downturn. This phenomenon prompted deliberate discussion of foreign labor policy for several reasons: the contribution of overseas workers to the local economy, remittances to their countries of origin, perceived competition between local and migrant workers in the local labor market, and the possibility that a large influx of foreign labor could contribute to serious unrest all became pressing concerns. Nair (2010) is also concerned about this issue and stated that "the increased invasion of (20%) of foreign labour force makes an impact on poverty issues and human resource development".

Ethnic Issues

The educational achievements of Bumiputera and rural students in disciplines fundamental to the economy were considerably lower than those of urban and non-Bumiputera students, creating an academic gap between the two groups. If policy makers turn a blind eye to these injustices and fail to create remedial arrangements, fragmentation and factionalism will be unavoidable and tensions between ethnic groups will worsen. Hence, the perceived gap between poor and non-poor will widen. The Malays are predominant among the rural poor, and this has shaped the national-level conceptualisation of poverty. The poverty eradication agenda of the National Economic Plan 2010 (NEP) focussed mainly on the Malay rural population, and the policies and initiatives became ethnically motivated.

Rural and Urban Poor

Because more than half of the households in rural areas are categorized as poor, poverty has long been identified as a problem confined to rural areas. However, the consequences of poverty are also devastating among urban communities, as a large proportion of newly poor households are settled in urban areas (Nair, 2010). Innovative policies and strategies should be implemented with strong commitment in program planning, and allocations of expenditure for inner-city development are required (Nair, 2010). Rural-to-urban migration, combined with the influx of regulated and unregulated foreign migrants, has dramatically increased urban poverty (Economic Planning Unit, Malaysia, 2010).
Poverty Line Income Issues

The Poverty Line Income (PLI) is constantly discussed in both absolute and relative terms, with the aim of making poverty alleviation initiatives more goal-directed and of tackling the factors that intensify poverty. Views on relative poverty have also changed over time: whereas in the past the bottom 40% of the population was defined as being in relative poverty, it is the bottom 30% under the current plan. However, the rationale behind redefining relative poverty from the bottom 40% of the population to the bottom 30% is unclear, and this kind of shifting definition makes comparisons over time difficult.

The selection criterion for financial assistance has been set at RM1,200, which is 2.3 times the PLI of Peninsular Malaysia (Nair, 2010). Even though financial support is provided to the selected households, it appears that more support is needed to raise their living standards. The efficacy of the current PLIs used to differentiate poor households from non-poor households should therefore be revisited. Moreover, it is not clear which income level is to be used as the inclusion criterion for households selected to receive financial and other forms of support.

According to the government of Malaysia, a household of four earning RM900 (US$300), RM1,000 (US$333.34) or even RM1,500 (US$500) a month cannot be considered poor. Yet families and media reports complain that such households cannot meet their basic needs even when they earn RM2,000 (US$666.67). This raises the question of how the government calculates the poverty line.

The World Bank standard recommends that middle-income countries calculate the PLI based on US$2 (RM6.20) per individual per day, meaning that one person would need US$2 per day to meet both food and non-food necessities. If that figure is used for Malaysia, a theoretical household of 4.4 people would need RM858 a month to avoid being declared poor, which the current PLI does not reach. In Australia and Britain, the median household income is used to define the PLI: the median income divides the income distribution in half, and the PLI is set at two-thirds of the median. Using the Malaysian median income of RM2,830 (US$944), the PLI would be set at RM1,886 (US$629). The government considers a household as comprising an average of 4.4 members (total population divided by the total number of households = 4.4). The PLI of RM763 (US$254) per month therefore translates into a daily income of RM25.45 (US$8.50) with which a household must meet the eight expenditure components, such as food, rent, clothing and fuel. It is impossible to meet basic needs on this small amount of money. These competing calculations are illustrated in the short worked example at the end of this section.

Climate Change Issues

Climate change is a global issue with significant implications for Malaysia. Carbon dioxide (CO2) from fuel combustion and deforestation contributes to global warming and has caused a shift in the climate system. Malaysia will have to adopt a dual strategy in addressing climate change impacts: first, adaptation strategies to protect economic growth and development from the impact of climate change; and second, mitigation strategies to reduce emissions of greenhouse gases (GHGs) (Economic Planning Unit, Malaysia, 2010).
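Returning to the Poverty Line Income arithmetic above, the sketch below reproduces the competing thresholds using the figures quoted in this paper. It is purely illustrative: the exchange rate, day count and household size are the paper's stated assumptions rather than independently verified values, and the monthly total under the US$2-a-day rule depends on the day count and exchange rate assumed, so it differs slightly from the RM858 cited in the text.

# Figures quoted in the text (illustrative only).
MEDIAN_HOUSEHOLD_INCOME_RM = 2830    # national median monthly household income
MEAN_HOUSEHOLD_SIZE = 4.4            # total population / number of households
USD_PER_DAY = 2.0                    # World Bank threshold for middle-income countries
RM_PER_USD = 3.10                    # implied by the paper's "US$2 (RM6.20)"
DAYS_PER_MONTH = 30
OFFICIAL_PLI_RM = 763                # Peninsular Malaysia PLI per household per month

# Relative-poverty rule used in Australia and Britain: two-thirds of the median.
pli_relative = (2 / 3) * MEDIAN_HOUSEHOLD_INCOME_RM
print(f"Two-thirds-of-median PLI: RM{pli_relative:,.0f} per month")      # ~RM1,887

# Absolute rule: US$2 per person per day, scaled to an average household.
pli_absolute = USD_PER_DAY * RM_PER_USD * MEAN_HOUSEHOLD_SIZE * DAYS_PER_MONTH
print(f"US$2-a-day household PLI: RM{pli_absolute:,.0f} per month")      # ~RM818

# Official PLI expressed per household per day.
print(f"Official PLI per day: RM{OFFICIAL_PLI_RM / DAYS_PER_MONTH:.2f}") # ~RM25.4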
Analysis

Since independence in 1957, Malaysia has successfully transformed itself from a poor country into a middle-income country. The incidence of poverty has been drastically reduced, from 49.3% in 1970 to only 3.8% in 2010, with hardcore poverty nearly eradicated, declining to 0.7% in 2009 (Abidin & Rasiah, 2009). Malaysia's real GDP grew by an average of 5.8% per annum from 1991 to 2010. This growth has helped improve the quality of life of Malaysians and supported widespread advances in education, health, infrastructure, housing and public amenities. Although the growth momentum recently slowed because of the global economic and financial crisis, public spending through two economic stimulus packages and accommodative monetary policy helped the nation recover.

Malaysia can credibly declare victory in its fight against poverty. Nevertheless, pockets of poverty remain, both in specific geographies and in particular communities. The government remains committed to delivering assistance and welfare to the poor and vulnerable. Special programs are being undertaken to address poverty on a sustainable basis, especially by providing income-generating opportunities such as agro projects. Since poverty is no longer a purely rural phenomenon, specific interventions will also be targeted at the urban poor, for example through micro-credit schemes.

The authorities need to expand programs involving practical on-the-job training (vocational and internship) that are relevant to the market. Internships, soft-skill training and job placement initiatives will be targeted at unemployed graduates. The premise behind lifelong learning programs must be the expansion of distance learning, e-learning, retraining and skills upgrading offered by various institutions. At the local level, community colleges have assumed a greater role in implementing retraining and skills-upgrading programmes (Abidin & Rasiah, 2009).

With the labor force increasing by 1.7% per annum during the period 2006 to 2009, the unemployment rate increased slightly from 3.5% in 2005 to 3.7% in 2009. Despite this slight increase, Malaysia remains at full employment, and the unemployment rate improved marginally to 3.6% in 2010. The quality of life of Malaysians improved through better access to healthcare, public transport, electricity and water. Measures were also taken to create a caring society and promote community well-being. Economic development was based on sustainability principles to ensure that the environment and natural resources are preserved, so that growth will not come at a cost to future generations.

Even though poverty is being managed effectively, household incomes remain comparatively low: 40% of households still live on a monthly income of less than RM2,300 (US$766.66). The perceived disparity in income levels and economic status between Sabah and Sarawak and Peninsular Malaysia should be addressed effectively, and an effective strategy should be adopted to minimize the disparity in economic status between rural and urban areas. To address the issues emerging between different geographical areas and communities, the New Economic Model and the Tenth Malaysia Plan were devised to focus on inclusive growth and aspire to provide equal opportunities to all Malaysians.
Malaysia's success in reducing poverty was due to the policy of poverty eradication being made an integral part of the National Economic Plan (NEP). Poverty eradication programs were implemented alongside development plans, and financial allocations for them were made in all the Malaysia Plans. When evaluating the impact that anti-poverty policies may have had, this study concludes that much caution is in order. Despite the ostensible official concern about poverty over the last thirty years, and the remarkable lifespan of NEP-inspired policymaking, a number of problematic issues remain. For instance, there is still relatively little detailed information about the characteristics of the poor that could help ascertain the reasons for their poverty and thus identify what appropriate and effective measures to overcome it might be.

Detailed, analytically grounded poverty profiles are particularly necessary in view of the increasingly recognized phenomenon of 'hard-core' poverty, which is often remarked upon as being relatively unaffected by existing poverty eradication measures. Furthermore, a great deal of expenditure officially categorized as poverty eradication actually refers to expenditure on rural and agricultural development, much of which does not directly help to raise the poor out of poverty. Given these concerns, it is crucial that detailed information on the use of anti-poverty funds be provided in order to ascertain to what extent such expenditure actually benefited the poor. Such information is also crucial for minimizing the budgetary abuses that appear to have been made in the name of poverty eradication.

Recommendation

This study offers several recommendations to enhance economic development and poverty reduction in Malaysia. Young people from rural areas should be given technical and vocational training, as they form the backbone of the workforce. Other priorities include: promoting the development of concentrated industrial clusters and supporting ecosystems to enable specialisation and economies of scale; increasingly targeting investment promotion towards investment quality (as opposed to just quantity) that supports higher-value-added activities and the diffusion of technology; and increasing public investment in the enablers of innovation and in venture capital funding. Skills training should be given special emphasis to develop the human capital needed to meet industry's requirements and to drive the productivity improvements required to move up the value chain.

Technical education and vocational training should be mainstreamed, with a focus on raising the quality of qualifications. This is key to providing a viable alternative that enables individuals to realise their full potential according to their own inclination and talent. This principle necessitates a renewed focus on championing the interests of each and every community, ensuring no group is left behind or marginalised in the course of the nation's development. Social justice should take into consideration the respective levels of achievement of each community. The distributional policies of the government should therefore focus on ensuring equality of opportunity for all.
The well-being of the urban bottom 40% of households will have to be addressed through capacity-building programs to improve their income and overall quality of life. Programs to increase the incomes of rural households will focus on upgrading their skills, linking them to employers in nearby clusters and cities, and providing support for self-employment, micro-businesses and small-scale industries. Efforts will also have to focus on increasing the productivity and sustainability of agro-based activities through the adoption of modern agricultural technology and the expansion of contract farming. Human capital productivity within rural agriculture and agro-based industries has to be reinvigorated. Additionally, skills training is needed in areas such as carpentry, tailoring, baking, hospitality, handicrafts, motor mechanics and food processing to support self-employment. The delivery of these training programmes will be tailored to the specific opportunities of target localities.

The country needs to improve the balance of its workforce with respect to tertiary education. Malaysia's current share of the workforce with tertiary education stands at 23%, whereas the average for Organisation for Economic Co-operation and Development (OECD) countries is nearly 28%. Furthermore, of graduating students who were employed, 29% in 2006 and 33% in 2009 earned less than RM1,500 (US$500) per month. Employers and industry associations state that a lack of soft skills, such as positive work ethics, communication, teamwork, decision-making and leadership, is hampering the employability of many Malaysian graduates. Of students graduating from local higher education institutions in 2009, 27% remained unemployed six months after completing their studies. As there is still a sizeable gap between the competency levels of graduates and comparable international standards, the issue of graduate competency needs to be addressed to ensure that Malaysia has a skilled, well-rounded and employable graduate pool entering the workforce.

Even though Malaysia has been highly successful in reducing poverty and expects to deliver on seven crucial recommendations in relation to the MDGs by 2015, the country still experiences disparities among local communities in many respects. These disparities should be addressed through locally appropriate, area-specific intervention strategies that bridge the persisting gaps between communities living in very different geographical circumstances. Increased attention should therefore be directed at the most vulnerable population groups in order to bring an appreciable improvement to their lives.

Because Malaysia is a multi-racial, non-homogeneous nation with wide and entrenched disparities of economic opportunity and income, the government has had to, and may continue to have to, intervene in the marketplace and maintain affirmative-action-type programs to ensure a fairer distribution of opportunities and incomes among all racial and social groups. Avoiding or being reluctant to undertake such initiatives, or failing to achieve them, may lead to social unrest and violence. An affirmative action program focused only on the Bumiputera is no longer sustainable.
An effective environmental policy has been put in place to ensure Malaysia's environmental sustainability. At the same time, the commitments made to the global community and to the country itself, as specified in environmental strategies, legislation and policies covering environmental and resource management, green energy, physical planning and climate change, must be delivered. Malaysia should also develop suitable incentives for states and the private sector to implement initiatives and to comply with national policies and objectives; a suitable and comprehensive strategy should be created to motivate state governments and private entities to execute the policies and achieve the goals.

Rising temperatures are associated with volatile weather, shifts in rainfall patterns and climate zones, and rising sea levels. Because of its climate and location, Malaysia is among the many economies likely to feel the force of climate events sooner rather than later, in the form of coastal and inland flooding, a rise in vector-borne diseases, or drops in agricultural yields due to recurrent droughts. These events not only have the potential to destroy lives and communities, but also pose a significant economic risk. The government has to review the value at risk for communities in order to develop a clear understanding of the cost-benefit trade-offs involved in averting or reducing the impact of such climate-related hazards.

Conclusion

Protective measures should be in place to combat any potential repercussions inflicted upon poor communities that are vulnerable to globalization and liberalization. All ethnic groups, regardless of their diversity, should be given the opportunity to acquire the broad proficiencies required by the economy. Action should be taken at the policy level to address the root causes of poverty and the perceived neglect of sidelined communities, so as to ensure the inclusion of the marginalized. Self-sustaining programs that promote the poor's involvement in income-generating activities, together with plans for the empowerment of the poor, should also be in place. Steps should be taken to counter negative attitudes towards the status of the poor, and priority should be given to empowering communities and encouraging mutual support and attentiveness among their members. The complex dynamics of poverty must be overcome with a multi-disciplinary approach, and poverty alleviation efforts should focus on enhancing the skills, innovativeness and knowledge of the poor.

Table 1. Poverty line in Malaysia. Source: Economic Planning Unit and Department of Statistics Malaysia, 2010.
Table 2. Poverty line in Malaysia by mean household size, 2010.
Table 3. Poverty incidence in Malaysia by state, 1970-2009 (in percentage).
Table 4. Evolution of the welfare regime since 1957: 1957-1980, the Mahathir regime of 1981-1997, the financial crisis period of 1998-2002, and the post-2003 regime under Prime Minister Abdullah Badawi.
Evolution of the Welfare Regime Since 1957 (Mohd, 2012d)

The strategies devised in Malaysia to mitigate poverty advanced on several important fronts. A combination of continued welfarism and a new drive towards independent living was incorporated in the strategy. Capacity building was improved and new programs were introduced to take care of the vulnerable, the so-called bottom million of society. The following areas were given high priority:

 Advancing the agricultural sector
 Strengthening small and medium enterprises
 Improving the welfare of students
 Strengthening pre-school education
 Improving literacy and numeracy
 Creating quality schools
 Increasing home ownership
 Expanding public health facilities
 Enhancing social safety nets
 Improving retirement schemes
 Microfinance
6,099.8
2013-03-08T00:00:00.000
[ "Economics", "Political Science", "Sociology" ]
Identifying Drug Targets in Pancreatic Ductal Adenocarcinoma Through Machine Learning, Analyzing Biomolecular Networks, and Structural Modeling Pancreatic ductal adenocarcinoma (PDAC) is one of the leading causes of cancer-related death and has an extremely poor prognosis. Thus, identifying new disease-associated genes and targets for PDAC diagnosis and therapy is urgently needed. This requires investigations into the underlying molecular mechanisms of PDAC at both the systems and molecular levels. Herein, we developed a computational method of predicting cancer genes and anticancer drug targets that combined three independent expression microarray datasets of PDAC patients and protein-protein interaction data. First, Support Vector Machine–Recursive Feature Elimination was applied to the gene expression data to rank the differentially expressed genes (DEGs) between PDAC patients and controls. Then, protein-protein interaction networks were constructed based on the DEGs, and a new score comprising gene expression and network topological information was proposed to identify cancer genes. Finally, these genes were validated by “druggability” prediction, survival and common network analysis, and functional enrichment analysis. Furthermore, two integrins were screened to investigate their structures and dynamics as potential drug targets for PDAC. Collectively, 17 disease genes and some stroma-related pathways including extracellular matrix-receptor interactions were predicted to be potential drug targets and important pathways for treating PDAC. The protein-drug interactions and hinge sites predication of ITGAV and ITGA2 suggest potential drug binding residues in the Thigh domain. These findings provide new possibilities for targeted therapeutic interventions in PDAC, which may have further applications in other cancer types. INTRODUCTION Pancreatic ductal adenocarcinoma (PDAC) is one of the most malignant solid tumors (Bailey et al., 2016). PDAC is difficult to treat due to the stage of diagnosis, severe cachexia and poor metabolic status, the resistance of cancer stem cells (CSCs) to current drugs, and the marked desmoplastic response that facilitates growth and invasion, provides a physical barrier to therapeutic drugs, and prevents immunosurveillance (Al Haddad and Adrian, 2014). PDAC is also a drug-resistant disease, and the response of pancreatic cancer to most chemotherapy drugs is poor. Until now, most of research effort in PDAC has been directed at identifying the important disease-driving genes and pathways (Waddell et al., 2015). These studies have shown that KRAS, CDKN2A, TP53, and SMAD4 are the four most common driver genes in PDAC (Carr and Fernandez-Zapico, 2019). With the development of multi-omics data, a series of new regulators that are strongly correlated with survival have been proposed to be PDAC biomarkers (Rajamani and Bhasin, 2016;Mishra et al., 2019), including genes (e.g., IRS1, DLL1, HMGA2, ACTN1, SKI, B3GNT3, DMBT1, and DEPDC1B) and lncRNAs (e.g., PVT1 and GATA6-AS). The integrated transcriptomic analysis of five PDAC datasets identified four-hub gene modules, which were used to build a diagnostic risk model for the diagnosis and prognosis of PDAC (Zhou et al., 2019). Integrated genomic analysis of 456 PDAC cases identified 32 recurrently mutated genes that aggregate into 10 pathways: KRAS, TGF-b, WNT, NOTCH, ROBO/SLIT signaling, G1/S transition, SWI-SNF, chromatin modification, DNA repair, and RNA processing (Bailey et al., 2016). 
Previous treatments for pancreatic cancer have focused on targeting some of these PDAC-associated pathways, including TGFb (Craven et al., 2016), PI3K (Conway et al., 2019), Src (Parkin et al., 2019), and RAF!MEK!ERK (Kinsey et al., 2019) and NFAT1-MDM2-MDMX signaling, as well as cell-cell communication within the tumor microenvironment (Shi et al., 2019). The discovery of novel drug targets provides extremely valuable resource towards the discovery of drugs. Although the human genome comprises approximately 30,000 genes, proteins encoded by fewer than 400 are used as drug targets in disease treatments. A range of therapeutic targets in PDAC have been proposed, including suppressing the abovementioned genes and pathways (Tang and Chen, 2014). However, the current drug targets for PDAC will not be 100% effective due to the heterogeneous nature of the disease. To tackle this challenge, a complete understanding of the molecular mechanism of PDAC is urgently needed. Improving PDAC therapy will require a greater knowledge of the disease at both the systems and molecular levels. At the systems level, protein-protein interaction (PPI) networks provide a global picture of cellular function and biological processes (BPs); thus, the network approach is used to understand the molecular mechanisms of disease, particularly in cancer (Conte et al., 2019;Sonawane et al., 2019). Some proteins act as hub proteins that are highly connected to others, thus cancer drug targets can be predicted by hubs in PPI networks Lu et al., 2018;Zhu et al., 2019). However, there are some conflicting results that suggest disease genes or drug targets have no significant degree of prominence (Mitsopoulos et al., 2015), but higher betweenness, centrality, smaller average shortest path length, and smaller clustering coefficient (Zhao and Liu, 2019). Recent advances in systems biology have led to a plethora of new network-based methods and parameters for predicting essential genes , disease genes, and drug targets (Csermely et al., 2013;Vinayagam et al., 2016;Zhang et al., 2017;Fotis et al., 2018;Liu et al., 2018). Additionally, the structural annotation of PPI networks that has highlighted key residues has enriched the fields of both systems biology and rational drug design (Kar et al., 2009;Winter et al., 2012). The prediction of binding sites, allosteric sites, and genetic variations based on systems-level data is critical for suggesting therapeutic approaches to complex diseases and personalized medicine (Duran-Frigola et al., 2013;Yan et al., 2018). Combined with PPI network analysis, molecular docking studies of target genes can further help to find drug molecules and protein-drug interactions for lung adenocarcinoma (Selvaraj et al., 2018). Together with advances in "-omics" data, including gene expression and PPI data, machine learning (ML), and artificial intelligence (AI) techniques are powerful tools that can assess gene and protein "druggability" from such massive and noisy datasets (Kandoi et al., 2015;Zhavoronkov, 2018). As the most used ML method, support vector machine (SVM) has been used for cancer genomic classification or subtyping, which may be useful for obtaining a better understanding of cancer driver genes and discovering new biomarkers and drug targets . ML-based methods have been applied to study PDAC for different purposes. By applying ML algorithms to proteomics and other molecular data from The Cancer Genome Atlas (TCGA), two subtypes of pancreatic cancer can be classified (Sinkala et al., 2020). 
A meta-analysis of PDAC microarray data could help predict biomarkers that can be used to build AI-based computational predictors for classifying PDAC and normal samples, as well as predicting sample status (Almeida et al., 2020). To predict and validate novel drug targets for cancer, including PDAC, an ML-based classifier that integrates a variety of genomic and systems datasets was built to prioritize drug targets (Jeon et al., 2014). In this study, we developed a computational framework that integrates various types of high-throughput data, including transcriptomics, interactomics, and structural data, for the genome-wide identification of therapeutic targets in PDAC. A novel centrality metric, referred to as the SVM-RFE and Network topological score (RNs), was proposed for the identification of disease genes and drug targets. This method incorporates gene expression and network topology information from ML and PPI analyses. Moreover, the predicted genes were validated by "druggability" prediction, survival and comparative network analyses, as well as functional enrichment analysis. Finally, the structural and dynamic properties of two integrins (ITGAV and ITGA2) as drug targets were investigated. The workflow of these methods is shown in Figure 1.

Identification of DEGs

In this study, three independent PDAC expression microarray datasets with 184 pancreas samples (95 cancer and 89 non-malignant samples) were used. The datasets were obtained from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/). Details of each dataset are listed in Table 1. The GSE15471 dataset included 36 PDAC samples and matching normal pancreas samples from pancreatic cancer patients in Romania (Badea et al., 2008). There were also matched samples in the GSE28735 dataset, which contains gene expression profiles of 45 matched pairs of pancreatic tumor and adjacent non-tumor tissues from PDAC patients in Germany (Zhang et al., 2012; Zhang et al., 2013). The GSE71989 dataset contained expression profiles of eight normal pancreas and 14 PDAC tissues (Jiang et al., 2016). The normalized data were downloaded from GEO and then analyzed to identify DEGs using t-tests, with p-values adjusted by the Benjamini-Hochberg method. Only genes with adjusted p-values < 0.01 and |FC| > 1.5 were chosen as DEGs.

Gene Prioritization Pipeline

Disease genes and drug targets usually have a large degree in PPI networks, but there is no single network parameter that can accurately predict them (Li et al., 2016). Protein targets do not exert their function in isolation; rather, they are affected by interactions within their PPI network, which are governed by protein localization and environment. In the same way, topological information from PPI networks alone is not enough to identify disease genes and drug targets without biological information. To overcome these limitations, we developed a new three-step pipeline to identify cancer-related genes that may be candidate drug targets in PDAC. The pipeline integrates information from gene expression data with local and global topological characteristics of genes in PPI networks.

Step 1: For each gene expression dataset, we employed an SVM method based on the Recursive Feature Elimination algorithm (SVM-RFE) (Guyon et al., 2002), an embedded method designed specifically for gene selection in cancer classification (Boloń-Canedo et al., 2014), to rank the DEGs and select the most relevant features (Jeon et al., 2014).
SVM-RFE can remove redundant features (genes) to improve generalization performance, implement backward feature elimination, search for an optimal subset of genes, and provide a ranking for each gene. We ranked genes by an SVM-RFE score (Rs) computed from this ranking, with n denoting the number of DEGs and ri the rank of gene i.

Step 2: A PPI network of DEGs was constructed with the STRING database (von Mering et al., 2003; Szklarczyk et al., 2017) using interaction scores > 0.9. The topological parameters degree and shortest path length were calculated for each gene in the PPI network. The degree (K) of a node in the PPI network is the number of links attached to that node, which is one measure of the centrality of a node in the network. The average path length (L) of node v in the network is the average length of the shortest paths between v and all other nodes, defined as L(v) = (1/(n-1)) Σ_{i≠v} d(v,i), where d(v,i) is the length of the shortest path between nodes v and i, and n is the number of nodes in the network.

FIGURE 1 | The computational pipeline proposed in this work comprises three steps. Overall, a machine learning method was used to identify DEGs in PDAC, which were then combined with two parameters of the PPI network to define a new score that predicted disease genes and drug targets in PDAC. All potential targets were then further verified by other bioinformatics analyses and investigated by a "druggability" analysis of structural and dynamic properties.

Step 3: Finally, we incorporated the network topological properties into Rs and defined a new score (RNs) for each gene. Accordingly, this RNs score (SVM-RFE and Network topological score) captures the cancer status of each gene by combining gene expression information with two levels of topological features in the PPI network: the degree K indicates the importance of the node, while the shortest path length L reflects the influence of the other nodes. The code for gene prioritization is freely available for download from GitHub at https://github.com/CSB-SUDA/RNs.

PPI Network Analysis

Once the PPI network was constructed, two further analyses were performed. The first was the calculation of two commonly used centrality parameters, betweenness and closeness centrality. The betweenness centrality (BC) of node v (Freeman, 1977) was defined as BC(v) = Σ_{i≠v≠j} g_ivj / g_ij, where g_ivj is the number of shortest paths from i to j that pass through node v, and g_ij is the total number of shortest paths from i to j. The closeness (CC) of node v is the reciprocal of the average shortest path length, calculated as CC(v) = 1/L(v) = (n-1) / Σ_{i≠v} d(v,i).

Proteins are often incorporated into modules that can be shared between several different cellular activities. The second analysis was therefore module detection in the PPI network, integrating the Girvan-Newman (GN) algorithm (Newman and Girvan, 2004) with functional semantic similarity (Wang et al., 2007). In general, the GN algorithm was used to detect modules in the PPI network, and functional semantic similarity was then applied to filter links. Thus, the genes in the detected modules had not only topological similarity but also functional similarity.

Survival Analysis

To evaluate the prognostic value of candidate genes, a survival analysis was performed using data from the Human Protein Atlas (Uhlen et al., 2017), which contains gene expression data and clinical information for 176 pancreatic cancer patients. P-values < 0.01 were considered significantly correlated with overall survival.
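As an illustration of the quantities used in the pipeline above, the sketch below ranks genes with scikit-learn's RFE wrapper around a linear SVM and computes degree, average shortest path length, betweenness and closeness with NetworkX. It is a generic sketch, not the released code on the GitHub repository cited above: the toy expression matrix and edge list are placeholders, and no attempt is made to reproduce the exact RNs formula, which is not reproduced in this text.

import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# Toy inputs (placeholders for real DEG expression data and STRING edges).
rng = np.random.default_rng(0)
genes = ['G1', 'G2', 'G3', 'G4', 'G5']
X = rng.normal(size=(40, len(genes)))          # 40 samples x 5 genes
y = np.array([0] * 20 + [1] * 20)              # 0 = normal, 1 = tumor
edges = [('G1', 'G2'), ('G2', 'G3'), ('G3', 'G4'), ('G2', 'G5')]

# SVM-RFE ranking: rank 1 is the most relevant gene.
rfe = RFE(estimator=SVC(kernel='linear'), n_features_to_select=1, step=1)
rfe.fit(X, y)
rank = dict(zip(genes, rfe.ranking_))

# Network parameters on the PPI graph.
G = nx.Graph(edges)
degree = dict(G.degree())
avg_path = {v: np.mean([nx.shortest_path_length(G, v, u) for u in G if u != v])
            for v in G}
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

for g in genes:
    print(g, 'rank:', rank[g], 'K:', degree[g],
          'L: %.2f' % avg_path[g],
          'BC: %.2f' % betweenness[g],
          'CC: %.2f' % closeness[g])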
Functional Enrichment Analysis

Functional enrichment analysis of Gene Ontology terms, covering cellular component (CC), molecular function (MF), and BP, and of Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways was performed using the R package clusterProfiler (Yu et al., 2012). Terms with an adjusted p-value < 0.05 were considered significant.

Structural Modeling and "Druggability" Analysis

The protein structures of potential drug targets were retrieved from the Protein Data Bank (PDB) where available. SWISS-MODEL (Waterhouse et al., 2018) and I-TASSER (Roy et al., 2010) were used for structural modeling when experimental structures were unavailable. We chose SWISS-MODEL when the sequence similarity to the searched templates was >30%; otherwise, we used I-TASSER, which predicts protein structure by iterative threading assembly. Based on the model structures, Fpocket (Le Guilloux et al., 2009) was used to detect druggable pockets and calculate "druggability" scores, which are based on several physicochemical descriptors on a genomic scale. The highest pocket score in the entire PDB was used as the reference druggability score. The score of each pocket was classified as: 0.0-0.5, non-druggable; 0.5-0.7, druggable; and 0.7-1.0, highly druggable.

Molecular Docking and GNM Modeling

To study the interactions and binding modes of small molecules with the potential drug targets, molecular docking was performed using AutoDock 4.2 (Khodade et al., 2007). The target, drug, and related disease information were collected from the DrugBank database (Version 5.0) (Wishart et al., 2018) and the Therapeutic Target Database 2020 (Wang et al., 2020). A normal mode analysis based on the Gaussian network model (GNM) was performed to investigate collective dynamics via the DynOmics online tool (Danne et al., 2017). The default cutoff distance of 7.3 Å between GNM nodes was used.

Identification of Disease Genes and Drug Targets in PDAC

From the three datasets GSE28735, GSE71989, and GSE15471, we identified 3,079, 1,225, and 2,257 DEGs between PDAC and adjacent tissues, respectively. The top 10 genes with the smallest p-values are marked in Figure 2. In GSE28735, 1,724 genes showed increased expression in PDAC tissues, while 1,355 genes showed decreased expression (Figure 2A). In GSE71989, 766 genes were upregulated and 459 genes were downregulated in PDAC tissues compared with normal tissues (Figure 2B). In GSE15471, 1,713 genes were overexpressed, while 544 genes showed decreased expression in tumor tissues (Figure 2C). Together, there were 313 common DEGs between PDAC and adjacent tissues in all three datasets (Figure 2D). Additionally, we evaluated gene expression as an input feature for ML and selected the most relevant genes for PDAC using SVM-RFE (Almeida et al., 2020), which provided a ranking for the genes. Each DEG was then assigned an Rs value (see Materials and Methods), which was used to further rank all genes. As an illustration, the top 100 Rs values of the DEGs in each dataset are listed in Table S1. There is little overlap of results between the different datasets, which means that calculating Rs based on SVM-RFE alone provides information for classification, but not enough for ranking. The DEGs were next mapped to the STRING database, which yielded a PPI network with 144 genes and 440 links (Figure 3). Then, the degree and shortest path length of each gene in the network were calculated.
Finally, we ranked the genes according to our designed score RNs, which integrated these two topological parameters and was based on gene expression profile. The top 20 genes predicted based on at least two datasets were considered potential drug targets. As shown in Table 2 and Table S2, eight genes (ADAM10, TIMP1, MATN3, PKM, APLP2, ACTN1, CALU, and VCAN) were identified in all three datasets, and nine genes (LGALS1, ITGA2, BST2, MFGE8, ITGAV, EGF, APOL1, ALB, and MSLN) were identified in two of three datasets. We propose that genes predicted by at least two datasets could serve as disease genes and/or drug targets. Taken together, 17 genes predicted by RNs score are listed in Table 3, and most have been previously reported to be PDAC-associated genes. There are only four that have not been previously associated with PDAC. This suggests that our metric RNs is useful for identifying novel disease genes and drug targets. It is also useful to compare our results predicted by RNs with other common network parameters. The genes predicted by calculating betweenness and closeness centrality are also listed in Table S2. Among our 20 predicted potential drug targets, six and nine were also found by betweenness and closeness centrality, respectively. Notably, ADAM10, ACTN1, and TIMP1 were in all three lists, which suggested they had important roles in PDAC. Moreover, two other genes (ITGAV and ITGA2) were in the top 20 of two datasets, which suggested they should be investigated. Overall, compared with the top 20 genes predicted by these two common network parameters, our RNs parameter identified more extracellular matrix (ECM) proteins, including integrins and collagens. The other interesting finding was that four common genes (ALB, EGF, ITGA2, and VCAN) were identified by isolating the nodes with large degrees (hubs) in PPI network construction based on other PDAC GSE datasets (Lu et al., 2018). Survival analysis was also performed to evaluate whether the expression of our 17 identified candidates was related to the prognosis of PDAC. Using Kaplan-Meier analysis with the logrank test for 176 pancreatic cancer patients from the human protein atlas (Uhlen et al., 2017), we found that higher expression levels of 11 genes were significantly correlated with decreased overall survival (p < 0.01, Figure 4). For the eight genes identified in all three datasets, five (ADAM10, PKM, APLP2, CALU, and VCAN) were associated with poor prognosis when highly expressed. The other six highly expressed genes (LGALS1, ITGA2, BST2, ITGAV, APOL1, and MSLN) associated with poor prognosis that were identified in two of three datasets are shown in Table 2. Accordingly, the survival analysis showed significant prognostic values for most of the predicted genes. Table 3 shows the genes predicted above shortlisted based on our RNs criteria. After searching the drug bank, these 17 predicted genes were classified into two types: 11 genes were drug targets, while six were non-drug targets. We also annotated drug targets in the drug bank by their related drugs and diseases. It should be noted that MSLN was the only proven drug target for PDAC, and there are many drugs that inhibit ALB. Thus, we concluded that these two genes had been studied widely and would not give us more insight regarding discovering new targets. Considering the potential of other predicted genes as drug targets for PDAC, we performed functional and "druggability" annotations for all. 
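Returning to the survival analysis above, the following is a minimal sketch of the kind of Kaplan-Meier comparison with a log-rank test described there, using the lifelines package. The column names, the median split into "high" and "low" expression, and the tiny data frame are all illustrative assumptions standing in for the Human Protein Atlas clinical table.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder clinical table: survival time (months), event flag, gene expression.
df = pd.DataFrame({
    'time':       [10, 14, 22, 5, 30, 18, 7, 25],
    'event':      [1, 1, 0, 1, 0, 1, 1, 0],      # 1 = death observed, 0 = censored
    'expression': [8.2, 3.1, 2.5, 9.0, 1.8, 7.4, 8.8, 2.2],
})

high = df['expression'] >= df['expression'].median()   # assumed median split

# Fit one Kaplan-Meier curve per expression group (plotting omitted here).
kmf = KaplanMeierFitter()
for label, grp in [('high', df[high]), ('low', df[~high])]:
    kmf.fit(grp['time'], event_observed=grp['event'], label=label)

# Compare the two groups with a log-rank test.
result = logrank_test(df[high]['time'], df[~high]['time'],
                      event_observed_A=df[high]['event'],
                      event_observed_B=df[~high]['event'])
print('log-rank p-value: %.3f' % result.p_value)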
Among the 15 genes, 11 (ADAM10, TIMP1, EGF, APLP2, ITGAV, VCAN, ITGA2, PKM, APOL1, ACTN1, and BST2) have been reported to be contributing factors in PDAC invasion, growth, or metastasis, which indicated that our pipeline had good performance for finding potential drug targets for PDAC. Characterization of Predicted Drug Targets for PDAC The protease ADAM10 was predicted as the highest ranked gene, and it has been reported that ADAM10 influences the FIGURE 3 | Potential drug targets in the PPI network. The genes that were predicted by our pipeline are marked with red labels. The node size denotes the average RNs of the gene in two or three datasets. FIGURE 4 | Kaplan-Meier survival curves of overall survival from the human protein atlas datasets for potential drug targets divided by high (red) or low (green) expression level. *"YES" means drug target, and "NO" means non-drug target; # "NA" means no drug and disease information, or no druggable pockets. progression and metastasis of cancer cells, as it promotes PDAC cell migration and invasion (Gaida et al., 2010). Inhibiting ADAM10 could be a novel approach for natural killer (NK) cell-based immunotherapy (Pham et al., 2017). Tissue inhibitor of metalloproteinases-1 (TIMP-1) correlated with tumor progression, and elevated levels of TIMP-1 in tumor tissue and peripheral blood were associated with poor clinical outcomes in numerous malignancies, including PDAC (Prokopchuk et al., 2018). The third gene was epidermal growth factor (EGF), which was a common disease gene for many cancers, and EGF mutations were associated with PDAC (Grapa et al., 2019). Amyloid precursor-like protein 2 (APLP2) affects the actin cytoskeleton and also increases PDAC growth and metastasis (Pandey et al., 2015). ITGAV (Villani et al., 2019), VCAN (Skandalis et al., 2006), and ITGA2 (Nones et al., 2014) are matrix proteins that have been shown to contribute to pancreatic cancer cell migration, invasion, and metastasis. PKM2 is one of the isoforms of pyruvate kinase muscle isozyme (PKM) and promotes the invasion and metastasis of PDAC through the phosphorylation and stabilization of PAK2 (Cheng et al., 2018). The final three genes, APOL1 (Liu et al., 2017), ACTN1 (Rajamani and Bhasin, 2016), and BST2 (Grutzmann et al., 2005) have previously been reported to be effective biomarkers for PDAC. Although 11 genes were already known drug targets, "druggability" annotations based on protein structures can improve our knowledge and understanding of the mechanisms of proteins as drug targets. The "druggability" of proteins is a measure of their ability to bind drug-like molecules based on molecular shapes. For the "druggability" of all 17 genes, we first obtained their structural modes by retrieved data from the PDB database or homology modeling. The PDB codes of proteins or their templates are listed in Table 3. Then, Fpocket was used to compute all possible pockets and their corresponding "druggability score" (DS). The "druggability" of the protein was defined as the DS of the highest scoring pocket. As expected, most of the predicted proteins were druggable (DS ≥ 0.5), except VCAN, IGALS1, and MFGE8. ALB had the largest DS (1.00), which can partially explain why so many ALB inhibitors exist. Among the six non-drug targets, TIMP1, ITGA2, and BST2 were predicted as highly druggable (DS ≥ 0.5), which meant that these three genes had the structural abilities to be drug targets. 
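A minimal sketch of the druggability call applied here, assuming the per-pocket Fpocket scores are already available: the protein-level DS is taken as the highest-scoring pocket and mapped onto the bands defined earlier (0.0-0.5 non-druggable, 0.5-0.7 druggable, 0.7-1.0 highly druggable). The example scores are invented.

```python
# Map a protein's best Fpocket pocket score onto the published druggability bands.
def druggability_class(pocket_scores):
    ds = max(pocket_scores)  # protein-level DS = highest-scoring pocket
    if ds < 0.5:
        label = "non-druggable"
    elif ds < 0.7:
        label = "druggable"
    else:
        label = "highly druggable"
    return ds, label

# invented pocket scores for illustration
for protein, scores in {"ALB": [1.00, 0.42], "VCAN": [0.31, 0.18]}.items():
    ds, label = druggability_class(scores)
    print(f"{protein}: DS={ds:.2f} ({label})")
```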
In particular, the non-drug target ITGA2 had a larger DS than ITGAV, suggesting that a more detailed structural comparison between these two integrin proteins is needed. Identification of Functional Modules and Pathways Within PPI networks, cancer targets interact with different modules to perform biological functions. A module within a network is defined a set of nodes that are densely connected within subsets of the network but may not all directly interact with each other. To get further insight into the topological and biological functions of potential targets, we performed module detection in the PPI network using a GN algorithm and functional semantic similarity. As shown in Figure 5, we FIGURE 5 | Four modules were discovered within PPI networks. Genes that were predicted in at least two datasets are marked red, while genes that were predicted in only one dataset are marked blue. identified four modules (the pink, yellow, green, and blue nodes) and labeled the genes that were predicted in at least two datasets (red) or in only one dataset (blue). Except PKM and ACTN1, 15 of the 17 predicted genes were detected by the modular analysis and are included in these four modules. The top module (pink) was formed of 19 genes, including the most of our predicted genes (12/17, ADAM10, CALU, ALB, APLP2, MSLN, LGALS1, TIMP1, MATN3, VCAN, EGF, MFGE8, and APOL1). Most of these genes have been previously reported as disease genes in PDAC or drug targets in other cancers. Another three predicted genes were included in two other modules, while ITGAV and ITGA2 were detected in the second largest module (yellow). Although there were only two predicted genes, this module deserves more attention, as it primarily contains two types of gene targets: integrins (ITGA5, ITGA3, ITGB5, ITGA2, and ITGAV) and collagens (COL6A3, COL11A1, COL1A1, COL10A1, COL5A1, COL1A2, and COL3A1). Research into integrins and collagens and their interactions may provide more insights into the molecular mechanisms of PDAC. We next performed an enrichment analysis on genes in the PPI network ( Figure 6 and Table 4). The genes were enriched for the GO terms related to extracellular structure and matrix, such as extracellular structure and matrix organization in BP, ECM in CC, and ECM structural constituent and binding in MF. Table 4 shows the top 10 most significantly enriched KEGG pathways. Most of the pathways are associated with cancer, such as ECM-receptor interaction, focal adhesion, and proteoglycans in cancer. Moreover, integrins were enriched in most of the carcinogenesisassociated pathways, such as focal adhesion, which play essential roles in important BPs, including cell motility, proliferation, and differentiation. Interestingly, several altered molecular pathways were identified, which suggests that genes in the secondary module were involved in these pathways. These modules and pathways not only contained integrins, but also another group of collagens. In particular, two predicted integrins (ITGAV and ITGA2) were involved in nine out of the top 10 pathways, while the top four pathways (ECM-receptor interaction, focal adhesion, proteoglycans in cancer, and human papillomavirus infection) also contained collagens, especially COL1A1 and COL1A2. Except for these pathways, the list of integrins and collagens was used to define the traditional cancer-related PI3K/ AKT pathway. 
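A minimal sketch of the module-detection step above using networkx's Girvan-Newman implementation; the published analysis additionally used functional semantic similarity, which is not reproduced here, and the stand-in graph should be replaced by the STRING-derived PPI network.

```python
# Girvan-Newman community detection as a stand-in for the module detection described above.
from itertools import islice
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()  # placeholder graph; replace with the PPI network built earlier

communities = girvan_newman(G)
# take the first partition that yields at least four modules, as in Figure 5
for partition in islice(communities, 10):
    if len(partition) >= 4:
        break

for i, module in enumerate(partition, start=1):
    print(f"module {i}: {sorted(module)[:5]}... ({len(module)} nodes)")
```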
It was previously known that collagen is a major component of the tumor microenvironment that participates in cancer fibrosis, which can influence tumor cell behavior through integrins (Xu et al., 2019). Our results indicated that ITGAV, ITGA2, and their interactions with COL1A1 and COL1A2 may play important roles in PDAC, suggesting they could serve as potential drug targets. For example, the predicted genes and their interactions were highlighted in the ECM-receptor interaction pathway (Figure S1). This systems biology evidence of gene cluster- and pathway-based distributions suggests that targeting several key genes together could be a more promising approach. ITGAV and ITGA2 as Potential Drug Targets for PDAC By combining SVM-RFE, PPI network, and survival analysis, 11 out of 17 candidate genes were predicted as biomarkers in pancreatic cancer patients. Among them, the two integrins ITGAV and ITGA2 were further screened as potential drug targets according to the following evidence: 1) both ITGAV and ITGA2 are involved in all of the PDAC-related pathways, including the ECM-receptor interaction and focal adhesion pathways, suggesting that ITGAV and ITGA2 may play an important role in PDAC progression; 2) based on the druggability criteria, ITGAV and ITGA2 have relatively high DS values; in addition, ITGAV is already a drug target for other cancers, and owing to the structural similarity, ITGA2 can also be considered a potential drug target; 3) current experimental data suggest that several other integrins are overexpressed in various cancer types and are involved in tumor progression through tumor cell invasion and metastasis; for example, the therapeutic potential of targeting ITGA5 in the PDAC stroma has been demonstrated (Kuninty et al., 2019). Collectively, our data together with known results point towards ITGAV and ITGA2 as two potential drug targets for PDAC. Thus, the emerging understanding of their structural properties will guide the development of new strategies for anticancer therapy. Integrins are transmembrane receptors that are central to the biology of many human pathologies. Classically, integrins are known for mediating cell-ECM and cell-cell interactions, and they have been shown to have an emerging role as local activators of TGF-β, influencing cancer, fibrosis, thrombosis, and inflammation (Raab-Westphal et al., 2017). Integrins are composed of α and β subunits that together form a complete signaling molecule. Their ligand-binding and some regulatory sites are extracellular and sensitive to pharmacological intervention, as proven by the clinical success of seven drugs that target integrins (Hamidi et al., 2016). Although peptides and small molecules are generally designed to target integrin αβ dimers, the individual integrin α subunits may also be therapeutic targets. ITGAV can pair with five β subunits to form receptors for vitronectin, cytotactin, fibronectin, fibrinogen, and laminin. ITGAV has mostly been investigated for its role in malignant tumor cells and tumor vasculature (Xiong et al., 2001; Xiong et al., 2009). ITGAV recognizes the Arg-Gly-Asp (RGD) sequence in a wide array of ligands at the interface between the α and β subunits (Xiong et al., 2002). ITGA2 pairs with β1 and belongs to the collagen receptor subfamily of integrins (Emsley et al., 2000). The structure of ITGAV was taken from chain A of the X-ray structure of the complete integrin αVβ3 (PDB code: 3IJE).
It contains a β-propeller domain of seven 60-amino-acid repeats and three other domains, the Thigh, Calf-1, and Calf-2 domains (Figure 7A). The PDB repository contains no crystal structure for full-length ITGA2. The highest sequence similarity between ITGA2 and the searched templates (PDB code: 5ES4) was 28%, so we employed I-TASSER to generate a composite model of ITGA2 based on several templates. A subsequent analysis of the ITGA2 structure revealed a domain organization similar to that of ITGAV, but with the addition of an I domain (Emsley et al., 1997) and a WKpGfFkR helix tail, which may suggest more drug-targeting possibilities for ITGA2. Based on the structures of ITGAV and ITGA2, Fpocket was used to detect their druggable pockets. For ITGAV, there were two highly druggable pockets, both located within the β-propeller domain. The largest druggable pocket was located on the outer side of the β-propeller barrel, consisted of Val192, Lys104, Ala189, Asp132, Val188, Ala189, Asp167, Leu130, Gln187, Glu190, Lys135, Val137, and Gln131, and had a DS of 0.663 (Figure 7A). The second largest druggable pocket was located at the central hole of the β-propeller barrel, consisted of Trp93, Leu111, Gln156, Phe159, Pro110, Ala96, Phe21, Tyr406, Tyr224, and Phe278, and had a DS of 0.599 (Figure S2A). For ITGA2, only one highly druggable pocket was found, in the β-propeller domain, with a DS of 0.92. This pocket consisted of His416, Phe162, His414, Ser159, Phe156, Leu417, Ser161, Val409, Leu396, Lys411, Leu158, Gln157, Leu394, Ala160, Leu417, Asp155, Asp392, Val381, Gly415, and Ser413 (Figure 7B). Despite progress in the development of drugs that target different integrins, there are only two clinically approved drugs in the drug bank for ITGAV (Levothyroxine and Antithymocyte immunoglobulin) (Table 3). Thymoglobulin is a polyclonal antibody, while Levothyroxine is currently the only approved small molecule that targets ITGAV. The small ligand Levothyroxine was docked to the two druggable pockets in ITGAV to study the stability of the complex and the protein-drug interactions. When docked to the largest druggable pocket, Levothyroxine formed hydrogen bonds with Asp167, Thr134, Lys135, and Val192 and a hydrophobic interaction with Ala189, and the binding free energy was −8.3 kcal/mol (Figure 7C). For the other pocket, hydrogen bonds were formed between Levothyroxine and Phe21, Trp93, Ala96, and Pro110, with a binding free energy of −10.08 kcal/mol (Figure S2B). We further docked Levothyroxine to ITGA2 at its druggable pocket. The binding free energy of −9.09 kcal/mol suggested a good interaction between ITGA2 and Levothyroxine, with potential binding sites at Phe162, Lys411, Asp392, and Leu158 (Figure 7D). To determine the residues that play a key role in the global dynamics of ITGAV and ITGA2, we performed a GNM analysis. GNM analysis provides information on the mechanisms of collective movements intrinsically accessible to the structure, which usually enable structural changes relevant to function (Bahar et al., 2010). The most discriminative feature of the dynamic analysis is hinge prediction, as hinge sites are expected to be sites for drug development (Sumbul et al., 2015). We predicted hinge sites from the minima of the corresponding GNM slow modes. Applying GNM to ITGAV (Figure 7E), GNM mode 1 highlights a hinge region located in the Thigh domain, especially at Asn455, Ser471, Arg553, and Gly594, which lie at the interface between the Thigh and Calf-1 domains.
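The hinge assignment above was obtained with the DynOmics server; as an offline sketch of the same idea, the snippet below builds a GNM with ProDy using the 7.3 Å cutoff and flags residues at the minima of the slowest mode's fluctuation profile. Residue numbering follows the PDB file, so the exact residues reported above are not guaranteed to be reproduced.

```python
# Offline GNM sketch with ProDy (the paper used the DynOmics web server).
# Hinge candidates = residues with the smallest squared fluctuations in the slowest mode.
import numpy as np
from prody import parsePDB, GNM, calcSqFlucts

calphas = parsePDB("3IJE", chain="A", subset="ca")  # ITGAV chain, Cα atoms only

gnm = GNM("ITGAV")
gnm.buildKirchhoff(calphas, cutoff=7.3)  # same cutoff as the DynOmics default
gnm.calcModes(n_modes=2)

sqflucts_mode1 = calcSqFlucts(gnm[0])
resnums = calphas.getResnums()

# ten residues with the lowest mode-1 fluctuations ~ candidate hinge sites
hinge_idx = np.argsort(sqflucts_mode1)[:10]
print("candidate hinge residues (mode 1):", sorted(resnums[hinge_idx].tolist()))
```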
We also note that the β-propeller domain becomes the major hinge region in GNM mode 2, with Ile286, Asn287, Asp352, Phe377, Ser389, Thr413, Asp414, Pro421, and Tyr436 showing minimal fluctuations. Hinge sites located in the β-propeller domain in GNM mode 2 may correspond to pocket sites, as the first and second largest druggable pockets lie within the β-propeller domain. For ITGA2 (Figure 7F), the hinge sites identified by the slow modes lie in the region from Phe681 to Ser737. Accordingly, our GNM modeling suggested that both the β-propeller domain and the Thigh domain play important roles in modulating the collective movements of ITGAV and ITGA2. The β-propeller domain was indicated to be a druggable domain by pocket detection. In addition, some hinge sites located within the Thigh domain offer other reasonable starting points for inhibitor design. CONCLUSIONS In this study, we developed a computational framework that integrates ML (SVM-RFE), biomolecular networks (PPI network analysis), and structural modeling (homology modeling, molecular docking, and GNM modeling) to help identify future drug targets for PDAC. The core of the new method is a new score, termed RNs, based on cancer-related information from gene expression data and topological information obtained from PPI network analysis. Analysis of three GEO datasets (GSE28735, GSE71989, and GSE15471) yielded 17 genes (ADAM10, TIMP1, MATN3, PKM, APLP2, ACTN1, CALU, VCAN, LGALS1, ITGA2, BST2, MFGE8, ITGAV, EGF, APOL1, ALB, and MSLN) that were predicted to be potential drug targets. The survival and "druggability" analyses of these genes showed that most of the identified genes were associated with poor survival and had good DS values, further supporting their use as therapeutic targets in PDAC. The important roles of integrins, as well as their interactions with collagens, were highlighted by combining network modules and KEGG pathway analysis, in terms of four pathways: ECM-receptor interaction, focal adhesion, proteoglycans in cancer, and human papillomavirus infection. By focusing on ITGAV and ITGA2, we identified druggable pockets, drug binding sites, and hinge sites that are potential sites for designing small molecules. In summary, this new methodology should provide new avenues for discovering drug targets in PDAC and other cancers. Of course, the method presented in this work has some limitations. Firstly, it only applies SVM-RFE to the gene expression data to rank the DEGs. With the growth of other omics data, the method should be extended to include additional data types, such as RNA-Seq data for PDAC (Raphael et al., 2017), which will make it more practical. Secondly, the method combines systems-level analysis (PPI network construction and analysis) with molecular-level "druggability" prediction, so drug target prediction still requires some structural expertise. To address this, a genuine integration of structural knowledge into PPI networks is still needed. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS WY, XYL, and GH analyzed the data and wrote the manuscript. XYL and WY conducted the SVM calculation and network analysis. FW assisted in network analysis. FX and SH conducted the structural modeling and docking. XL assisted in molecular docking. WY, FX, and GH conceived and designed all experiments, and interpreted all results. GH revised the manuscript.
All authors contributed to the work.
8,271.2
2020-04-30T00:00:00.000
[ "Medicine", "Computer Science" ]
Towards Intelligent Data Analytics: A Case Study in Driver Cognitive Load Classification One debatable issue in traffic safety research is that the cognitive load by secondary tasks reduces primary task performance, i.e., driving. In this paper, the study adopted a version of the n-back task as a cognitively loading secondary task on the primary task, i.e., driving; where drivers drove in three different simulated driving scenarios. This paper has taken a multimodal approach to perform ‘intelligent multivariate data analytics’ based on machine learning (ML). Here, the k-nearest neighbour (k-NN), support vector machine (SVM), and random forest (RF) are used for driver cognitive load classification. Moreover, physiological measures have proven to be sophisticated in cognitive load identification, yet it suffers from confounding factors and noise. Therefore, this work uses multi-component signals, i.e., physiological measures and vehicular features to overcome that problem. Both multiclass and binary classifications have been performed to distinguish normal driving from cognitive load tasks. To identify the optimal feature set, two feature selection algorithms, i.e., sequential forward floating selection (SFFS) and random forest have been applied where out of 323 features, a subset of 42 features has been selected as the best feature subset. For the classification, RF has shown better performance with F1-score of 0.75 and 0.80 than two other algorithms. Moreover, the result shows that using multicomponent features classifiers could classify better than using features from a single source. Introduction Driving a vehicle requires dynamic adjustment of cognitive control, here, both visual and physical tasks are crucial to keep the driving performance to an acceptable level within a comfortable effort [1]. While driving a vehicle, drivers are often occupied with many other activities such as using a mobile phone, listening to the radio, or having a conversation with a passenger, etc. Moreover, new advanced in-vehicle information systems embedded in the modern vehicles could create distracted driving scenarios and may affect the driving performance [2][3][4]. Thus, these secondary activities, i.e., activities not related to driving require extra cognitive processes in ways that the driver can still keep their eyes on the road and hands on the steering wheel while being involved in other activities at the same time, and this refers to the 'cognitive load activities'. It is reported that more than 90% of traffic crashes are assigned to the driver's error, whereas 41% of them are due to inattention, distraction, and cognitive load activities [5]. Further, the risk concerning traffic safety and driving performance anticipating cognitive load activities have been addressed in [6,7]. Many studies have been pursued to understand the consequences of secondary task or dual-task demands while driving and different types of data such as physiological, driving behavioural, and subjective measures have been used to evaluate the driver's mental effort [1,[8][9][10]. In this paper, the attention selection model (ASM) [11], based on the n-back task has been employed to impose Study Design and Data Set The experimental study took place at the Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden, using a high-fidelity moving-base driving simulator (VTI Driving Simulator III (https://www.vti.se/en/research-areas/vtis-driving-simulators/)), see Figure 1. 
The study was approved by the regional ethics committee at Linköping University (Dnr 2014/309-31), and each participant signed an informed consent form. The simulator was a car cabin consisting of the front seats of a SAAB 9-3 with automatic transmission. It could simulate movements and forces by moving, rotating, or tilting parts of the simulator together with the projector screens. A vibration table enabled the simulation of road surface contact. It had three liquid-crystal displays for the rear mirrors and six projectors for visualization of the frontal view with a horizontal field of view of 120 degrees. The study that collected the cognitive load dataset consisted of two test series that contained recordings from 66 participants (33 in test series 1 and 33 in test series 2). All the participants were male with no known diseases or medications, aged between 35 and 50 (42.47 ± 4.39 years), and had held a valid driver's license for more than ten years. To obtain homogeneity, only males meeting the aforementioned criteria were chosen. Further, participants were not professional drivers (e.g., taxi or heavy vehicle drivers), had no extremes in terms of self-reported personality (extrovert or introvert), and reported normal sensitivity to stressful situations. To assess stress tolerance, each participant filled in a questionnaire after the end of each driving session. The self-reported questionnaire used a scale of 0-6, where '0' means low stress tolerance and '6' means high stress tolerance; for anxiety, '0' indicates low and '6' indicates high anxiety. However, personality and stress sensitivity have not been taken into consideration in this paper. The driving environment in the simulator consisted of three recurring scenarios in which the simulated road was a rural road with one lane in each direction, some curves and slopes, and a speed limit of 80 km/h. The three scenarios were (1) a four-way crossing with an incoming bus and a car approaching the crossing from the right (CR), (2) a hidden exit on the right side of the road with a warning sign (HE), and (3) a strong side wind in open terrain (SW). Figure 2 shows examples of these study scenarios: the image on the left shows the CR scenario, the middle one shows the HE scenario, and the rightmost image shows the SW scenario. Thus, these scenarios implied threats in off-path locations without requiring the drivers to change their responses.
As a within-measure study, each scenario was repeated four times during the approximately 40 min driving session, where the participants were either involved in a cognitive load task, i.e., a 1-back or 2-back task, or were simply driving through a scenario (baseline or no-task). In the first test series, participants performed normal driving and the 1-back task while driving. In the second test series, the participants performed all three task conditions in the hidden exit and four-way crossing scenarios; the no-task and 2-back conditions were only performed in the side wind in open terrain scenario. The 1-back and 2-back tasks served as secondary auditory tasks, in which a number was presented orally through the simulator's speakers at an interval of 2 s. The participants had to respond whenever the last presented number was the same as the previous one (1-back) or the one two steps earlier (2-back). The physiological signals were acquired using a multi-channel amplifier with active electrodes (g.HIamp, g.tec Medical Engineering GmbH, Austria). The electroencephalography (EEG) electrodes were positioned based on the 10-20 system, providing a 30-channel recording. The EEG signals were band-pass filtered between 0.5 and 60 Hz using an 8th order Butterworth filter, and frequencies between 48 and 52 Hz were removed using a 4th order Butterworth notch filter. In addition, electrooculography (EOG) (horizontal with electrodes at the outer canthi and vertical with electrodes above/below the left eye) was also acquired. ECG was measured using disposable ECG electrodes with a snap connection to the wiring. The respiration rate (RR) was measured using a SleepSense chest strap connected to the upper body.
The skin conductance was measured using reusable gold-plated cup electrodes with conductive cream; the electrodes were connected to a GSR sensor (g.tec g.GSRsensor). Vehicular parameters such as lateral position (LatPos), lateral speed (LatSpeed), steering wheel angle (SWA), lane departure (LanDep), and yaw rate were recorded in the simulator control computer. Classification Approach The aim of the classification task was to differentiate driving events with a cognitive load task from normal driving. The influence of the scenarios on classification was evaluated by classifying cognitive load tasks for each individual situation. Each scenario had a duration of 60 s, where the first 10 s of data were discarded to allow the driver to settle into the cognitive load task. Hence, a 50 s recording of each scenario was used for feature extraction. Figure 3 shows the overall schematic diagram of the classification task. The steps include data gathering, data pre-processing, feature extraction, feature selection, dataset creation, training classifiers, and finally evaluation of each classifier using the test dataset. Here, the data were gathered through a study with 66 participants (33 in test series 1 and 33 in test series 2), presented in Section 2.1.
Data Pre-Processing The driving task involves activities such as looking at the side and rear-view mirrors, shifting gear, and changing body position, which naturally cause muscle and ocular artifacts in the EEG signals. The EEG signal therefore has to be cleaned before frequency-component features are extracted. Artifacts in the EEG signals were handled using an in-house developed tool called ARTE (Automated aRTifacts handling in EEG) [41]. A median filter was used to handle noise in the vehicular data, respiration, and GSR signals. The median filter is particularly useful for removing spiky noise and can separate peaks from a slowly changing signal disturbed by an unknown noise distribution [42]. A QRS detection algorithm proposed by [43,44] was used to extract inter-beat interval (IBI) data from the ECG signal. The obtained IBI data were filtered using the ARTiiFACT tool [45]. The collected raw dataset was in the European data format, which was converted into MATLAB data format (MATLAB 2017b, https://se.mathworks.com/products/new_products/release2017b.html), and all analyses were performed in MATLAB 2017b. Feature Extraction Various features were extracted from both the physiological signals and the vehicular parameters, as presented in Table 1. The feature vector consists of 323 extracted features with a total of 721 observations, of which 306 are baseline or no-task, 237 are 1-back task, and 178 are 2-back task observations. EEG Features: From each of the 30 EEG channels, the power spectral density (PSD) of the δ (<4 Hz), θ (4-7 Hz), α (8-12 Hz), β (12-30 Hz), and γ (31-50 Hz) frequency bands was extracted as features. Welch's method [46] was used with 50% overlap and a Blackman window function. In addition, four ratios of the PSDs, (θ + α)/β, α/β, (θ + α)/(α + β), and θ/β [47], were also estimated as features. These four ratios indicate the change from slow-wave to fast-wave EEG activity over time. According to [47], an increase in these ratios is a better indicator of EEG activity than α and θ alone. Moreover, the authors found that α and θ, combined with the ratios, could better assess the fatigue condition of drivers. Hence, nine features from each EEG channel resulted in 270 EEG features for each 50 s driving event segment.
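A minimal sketch of the per-channel EEG band-power features just described (Welch PSD with a Blackman window and 50% overlap, five band powers plus the four ratios); the sampling rate, segment length, and input signal are assumptions rather than study values.

```python
# Sketch of per-channel EEG band powers and ratio features via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                      # assumed sampling rate (Hz)
x = np.random.randn(50 * fs)  # placeholder for one artifact-handled 50 s EEG channel

nperseg = 2 * fs
freqs, psd = welch(x, fs=fs, window="blackman", nperseg=nperseg, noverlap=nperseg // 2)

bands = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (12, 30), "gamma": (31, 50)}
power = {}
for name, (lo, hi) in bands.items():
    sel = (freqs >= lo) & (freqs < hi)
    power[name] = np.trapz(psd[sel], freqs[sel])  # integrate the PSD over the band

ratios = {
    "(theta+alpha)/beta": (power["theta"] + power["alpha"]) / power["beta"],
    "alpha/beta": power["alpha"] / power["beta"],
    "(theta+alpha)/(alpha+beta)": (power["theta"] + power["alpha"]) / (power["alpha"] + power["beta"]),
    "theta/beta": power["theta"] / power["beta"],
}
features = {**power, **ratios}  # the nine features for this channel
print(features)
```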
The motivation of using EEG is that as the cognitive load increases, changes in alpha and theta powers in EEG have been observed in various studies [48][49][50]. It is reported that alpha and theta powers increase as the cognitive load increases [48][49][50]. Another common approach in the Brain-Computer interface is to apply the independent component analysis (ICA) to extract features from the PSDs of ICA components [51,52]. EEG classifications for different mental workload activities have been performed in [53,54]. However, depending on the study design and the type of cognitive load under scrutiny, the results are often ambiguous [48,50]. EOG Features: The EOG features derived from the vertical EOG using an automatic blink detection algorithm based on derivatives and thresholding was developed by Jammes and Sharabty [55]. The average spontaneous eye blink rate of a person is 15-20 per min [56]. The eye blink frequency increases as the cognitive load increases [48,57,58], whereas the decrease in blink duration is observed by [59]. ECG Features: Heart rate (HR) and heart rate variability (HRV), i.e., measure of the variations in time between each heartbeat, are two measures that can vary with the increasing cognitive load. HRV measures beat-to-beat (R-R interval) variations in terms of consecutive heartbeats articulated in the normal sinus rhythm from electrocardiogram (ECG) recordings [60,61]. HR and HRV features are obtained from the pre-processed interbeat interval (IBI) data. In time domain, statistical methods are applied to extract the time domain features. To obtain frequency domain features, the IBI data are transformed via FFT transformation. The PSDs of low frequency (LF) (0.04 to 0.15 Hz) and high frequency (HF) (0.15 to 0.40 Hz), LF/HR ratio, and total power are estimated. The time and frequency domain measures quantify the variability of the heart rate fluctuation characteristic in time scales. On the other hand, the non-linear measures quantify the structure or complexity of the R-R intervals, i.e., IBI data. Non-linear measures such as detrended fluctuation analysis, sample entropy, approximate entropy, and permutation entropy methods were applied to extract complexity from the IBI data [62]. An increased HR with respect to the increasing cognitive load has been reported in several studies; in contrast, the time domain measures of HRV such as mean RR, SDNN, RMSDD, pNN50, and HF power band (0.15-0.50 Hz) of HRV in the frequency domain decrease [14,63,64]. An increase in the LF power (0.04-0.15 Hz) and the LF/FH ratio of HRV have been associated with higher mental workloads [64][65][66]. Frequency bands: δ (<4 Hz), θ (4-7 Hz), α (8-12 Hz), β (12-30 Hz), γ (31-50 Hz), and the ratio (θ + α)/β, α/β, (θ + α)/(α + β), and θ/β Start position of blink, blink duration calculated from the start position of blink to the end value of blink, lid closure speed, PCV (peak closing velocity), delay of eye lid reopening, duration at 80%, PERCLOS, blink rate, blink count. GSR Features: GSR measures the electrical conductivity of the skin and can provide changes in the human sympathetic nervous system [67]. GSR is significantly correlated with the cognitive load task demand and usually used for the level of cognitive load classification [40,67,68]. 
In time domain several estimations, i.e., number of peaks, the amplitude of the peaks (maxima-minima), duration of the rise time of each peak, index of the detected peaks in the GSR signal, mean value, standard deviation, first quartile value, third quartile value, slope value between peak and valley are extracted as features [67]. One feature which is the average power of the signal under 1 Hz is extracted in frequency domain. A comprehensive review of GSR signal interpretation can be found in [69]. Further, relations between cognitive load and GSR features have been discussed in several studies [68,70,71]. [72,73]. The cognitive load has a distinct effect on the respiratory behaviour that can differ in sensitivity in the parameters obtained from respiratory signals [74]. According to Hidalgo-Muñoz et al. [72] significant increases in the respiration rate are observed while driving in comparison to the base line condition. Moreover, the RR showed variations with a different level task difficulty and RR accelerated with an increasing cognitive workload. Vehicular Features: The standard deviation from five time series data namely, lateral speed, steering wheel angle (SWA), yaw and yaw rate [75], and lateral position [15], are extracted as features. The steering wheel reversal rate (SWRR) [15], is defined as the absolute difference between maximum and minimum of the SWA signal. The SWRR is the number of reversals in a time period. Firstly, the raw SWA is smoothed using the Lowess method where the linear model is used for local fitting [76]. In this case, 110 points have been used for the moving average in the linear model. Steering wheel entropy [77,78], high frequency component (0.3 Hz), and number of zero crossings are the other features that are obtained from the SWA signal. Lanex or the fraction of lane exit feature is extracted from the lane departure signal which indicates the driver's tendency to exit the driving lane. Lanex is defined as the fraction of a given time interval spent outside driving [75]. In several studies, drivers' behavioural data in relation to vehicular signals such as speed, lateral position, steering wheel angle, etc. have been used to detect and classify drivers' cognitive load [15,16]. For example, driving performance relies on a right speed [79]. A reduced speed as a compensatory action due to the increased cognitive load is more often used as an indication of behaviour adaption rather than a change in driving performance [80,81]. Östlund and Nilsson [82] presented a few other parameters such as lateral position and steering wheel reversal rate that contribute to the driver's cognitive load. Wilschut [82] used the steering wheel angle and lane positioning to measure the driving performance. A lane change task can be used to investigate the effects of cognitive load on driving performance [83]. Feature Selection Feature selection is conducted only on the EEG signals since 270 EEG features are extracted from the 30 channels and many of them were neighbouring electrodes. Some overlapping and redundant features might exist. Hence, sequential forward floating selection (SFFS) [84][85][86] was used to also investigate the intra-feature relationships. SFFS is a successor of the sequential forward selection (SFS) method, which does not suffer from the 'nesting effect', and is computationally more efficient than other branch and bound methods [86]. SFFS was wrapped with an SVM classifier to obtain an optimal feature subset. 
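A sketch of the SFFS-SVM wrapper described above, using mlxtend's floating forward selection around a scikit-learn SVM; the feature matrix, labels, and the number of features to retain are placeholders rather than study values.

```python
# SFFS wrapped around an SVM, as a stand-in for the EEG feature selection step.
import numpy as np
from sklearn.svm import SVC
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

rng = np.random.default_rng(0)
X_train = rng.standard_normal((400, 270))   # placeholder 270-dimensional EEG feature matrix
y_train = rng.integers(0, 2, size=400)      # placeholder task/baseline labels

sffs = SFS(
    SVC(kernel="rbf"),
    k_features=(5, 30),   # search a range of subset sizes rather than a fixed size
    forward=True,
    floating=True,        # the floating step is what distinguishes SFFS from plain SFS
    scoring="accuracy",
    cv=5,                 # 5-fold cross-validation, as in the text
)
sffs = sffs.fit(X_train, y_train)
print("selected EEG feature indices:", sffs.k_feature_idx_)
print("cross-validated score:", sffs.k_score_)
```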
Further, the SVM classification was evaluated using 5-fold cross-validation. For other features, random forest with the mean decrease accuracy (MDA) [87] approach was used in the feature selection process. The idea of using MDA is to find the direct impact of each feature on the performance of the random forest model. Here, a permutation of each feature measures the decreasing accuracy of the model and for the unimportant features the permutation has little effect on the model accuracy. On the other hand, removing important features should drastically decrease the accuracy. Cognitive Load Classification For cognitive load classification, data from both the test series are combined, and both multi-class (MSet data) and binary class (BSet data) classification are defined based on the n-back task and normal driving events. The binary class is defined as the task group and baseline group. For the binary classification, two data sets are created such that the first set (BSet-1) baseline consists of normal driving and 1-back task, and the task group contains data of 2-back task. In the second set (BSet-2), the baseline includes data from normal driving only, and the task group consists of data from both 1-back and 2-back tasks. These two binary datasets preparation was motivated by the assumption that the 1-back task did not have much influence on the driver (e.g., on working memory) compared to the 2-back task [88]. The MSet, BSet-1, and BSet-2 datasets are split into training and test datasets, where the training set contains 70% and the test dataset contains 30% of the data sets. Three separate classifiers k-NN, SVM, and RF are developed and trained using 5-fold cross-validation with the training datasets and later evaluated with the test datasets. In addition, the binary classification was performed for both scenario-wise and task-wise to discriminate the effect of scenarios on the cognitive load task. Here, only the training dataset was used in the feature selection step. The training set is further divided into two sets, where 80% of the training data is used for SFFS and MDA, and 20% of the training data is used as a validation set. k-NN is a simple memory-based algorithm that uses the observations in the training set to find the most similar properties of the test dataset [89]. In this work, the Euclidean distance function is used with a 'squared inverse' distance weight and K = 5 was considered. SVM finds the hyperplane that not only minimizes the empirical classification error but also maximizes the geometric margin in the classification [90]. SVM can map the original data points from the input space to a high dimensional feature space such that the classification problem becomes simple in this feature space. In this study, an SVM with a Gaussian kernel was used for the classification task. A popular ensemble algorithm in machine learning is RF, that consists of a series of randomizing decision-trees, where the output is the majority vote of all these decision-trees [91]. One important aspect of RF is that it does not assume independence of features. In the driving context, data is often noisy and rarely linearly separable into a different mental state [92]. RF is implemented using bagging, which is the process of bootstrapping the data plus using the aggregate to make a decision. During classification, MATLAB's fitcknn function is used for k-NN, the fitcecoc function with an SVM template is used for SVM, and the fitcensemble function with 4357 tree splits is used for RF. 
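The study used MATLAB's fitcknn, fitcecoc, and fitcensemble; the following is a rough scikit-learn equivalent of the same workflow (70/30 split, 5-fold cross-validation on the training part), with placeholder data and with sklearn's inverse-distance weighting standing in for the 'squared inverse' weight used in MATLAB.

```python
# Approximate scikit-learn equivalent of the MATLAB classification workflow.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((721, 43))   # placeholder: 42 selected features + scenario indicator
y = rng.integers(0, 3, size=721)     # 0 = baseline, 1 = 1-back, 2 = 2-back

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1
)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5, weights="distance"),  # 1/d, not squared inverse
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=500),
}
for name, model in models.items():
    cv_acc = cross_val_score(model, X_train, y_train, cv=5).mean()
    test_acc = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: 5-fold CV accuracy={cv_acc:.2f}, test accuracy={test_acc:.2f}")
```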
The three classifiers were evaluated considering confusion matrices, accuracy, balanced accuracy (BACC), Matthews correlation coefficient (MCC), F1-score, sensitivity, and specificity. Classification Evaluation The scenario-wise binary classification was performed to see whether the scenarios had any effect on classification performance. Figure 5 shows the performance of binary classification for each scenario using the test datasets of both BSet-1 and BSet-2. On the BSet-1 data with RF, the balanced accuracies (BAcc) of the HE and CR scenarios are lower, and only the SW scenario yields a higher BAcc. It is important to mention that the ratio of the baseline and task groups in BSet-1 is much more imbalanced than in BSet-2. For both BSet-1 and BSet-2, the BAcc is higher for the side wind scenario. Using the test dataset of BSet-1, in the HE scenario, the BAcc values are 47%, 58%, and 57% for k-NN, SVM, and RF, respectively; in the CR scenario, 51% for k-NN, 50% for SVM, and 57% for RF; and in the SW scenario, 66% for k-NN, 71% for SVM, and 79% for RF. Using the test dataset of BSet-2, in the HE scenario, the BAcc values are 73%, 65%, and 64% for k-NN, SVM, and RF, respectively; in the CR scenario, 64% for k-NN, 68% for SVM, and 63% for RF; and in the SW scenario, 72% for k-NN, 64% for SVM, and 72% for RF. As observed in Figure 5, the driving scenario may have some influence on the classification; hence, a categorical scenario feature was added to the existing features presented in Table 2. Afterwards, multiclass classification was performed using the MSet dataset and binary classification was performed using both the BSet-1 and BSet-2 datasets. Multiclass classifications with k-NN, SVM, and RF were performed to investigate how each class contributed to the classification performance. On the training dataset of MSet, using 5-fold cross-validation, k-NN achieved 53% classification accuracy, whereas both SVM and RF achieved 59%. Table 3 shows the confusion matrices for the test dataset. RF shows better performance than k-NN and SVM considering the number of correct classifications of each target group. Table 3. Confusion matrix of k-nearest neighbour (k-NN), support vector machine (SVM), and random forest (RF) multiclass classification on the test dataset. The grey cells represent the true positive (TP) value; TP represents the number of observations that were correctly classified, together with the precision value in percentage. Table 4 represents the classification summary on the test dataset of MSet considering true positive (TP), true negative (TN), false positive (FP), false negative (FN), precision, sensitivity, specificity, and balanced accuracy (BACC).
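The derived metrics reported in Tables 4 and 5 can be computed directly from the confusion-matrix counts; the sketch below does this for one class treated as positive in a one-vs.-rest setting, with invented counts.

```python
# Per-class (one-vs.-rest) metrics from confusion-matrix counts.
import math

def summarize(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)             # recall / true positive rate
    specificity = tn / (tn + fp)
    bacc = (sensitivity + specificity) / 2   # balanced accuracy
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(precision=precision, sensitivity=sensitivity, specificity=specificity,
                bacc=bacc, f1=f1, mcc=mcc)

# example: one target class treated as positive, the other two pooled as negative
print(summarize(tp=60, tn=110, fp=25, fn=22))
```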
Here, one-vs.-rest was used to determine the target groups in the positive (P) and negative (N) classes. The positive class is the target group that corresponds to either baseline, 1-back, or 2-back task in each column. The negative (N) class consists of the other two target groups, i.e., 1-back + 2-back, baseline + 2-back, and baseline + 1-back. Overall, RF shows better performance considering the balanced accuracy. Binary classifications were performed using BSet-1 and BSet-2. The observed classification accuracies for k-NN, SVM, and RF with 5-fold cross-validation on the training dataset of BSet-1 are 79%, 81%, and 82%, respectively. On the training dataset of BSet-2, the achieved classification accuracies are 67% for k-NN, 72% for SVM, and 75% for RF. The prediction performance of k-NN, SVM, and RF, on the test dataset of each of BSet-1 and BSet-2 is presented in Table 5. Discussion Cognitive loading activities on traffic safety and its relation to driving performance has drawn an increasing attention to the traffic safety research issue. Here, the cognitive load dataset was acquired and analysed to understand the effect of cognitive load on traffic safety. Driving a vehicle is an anticipatory task where a driver needs adaptation concerning the road users' behaviours and their actions which are dynamic in nature. Driving is often considered as a process that is nearly automated, partially self-paced, and a satisficing task [93]. A driver can somewhat distribute the load of the driving task by deciding when, where, and what they do. This holds true not only for driving-related tasks but also for secondary tasks such as talking on a mobile phone or conversing with a passenger while driving. Most of the time this works well, but sometimes it does not [6,7,[94][95][96]. In the cognitive load theory, working memory is considered as an executive function that holds information and mentally processes that information [97]. Hence, in this paper the cognitive load is considered as the amount of cognitive resources (i.e., mechanisms necessary for cognitive control) used at a certain time [11]. The effect of cognitive load on traffic safety is considered utilizing the attention selection model (ASM) [11]. According to the ASM model, the cognitive load does not affect the automatic performance but impairs subtasks that rely on cognitive control. Among the physiological signals, the EEG is one accessible technique to measure cognitive load and the EEG signal analysis can detect changes in an instantaneous load and the effects of cognitively loading secondary tasks. The EEG feature selection in cognitive load classification showed the best feature subset selected by the SFFS algorithm, containing θ/β, α/β, (θ + α)/β, θ, β, and α features from only the frontal electrode. Features from the frontal region might suggest only motor function, and attention affected the cognitive loading activities. HRV from ECG, GSR, and RR features might be better indicators for cognitive load classification, a finding also supported by other studies. HRV features can be an important indicator for classifying cognitive load because cognitive load modulates the sympathetic and parasympathetic nervous systems inversely to driver sleepiness [98]. The time domain GSR, i.e., the peak amplitude, the duration of the rise time of each peak, and the mean GSR value were found to be useful indicators for cognitive load detection when a person is under the influence of different stress levels [99]. 
Furthermore, the states depend on the experimental design, driving environment, confounding factors, etc., and hence, multi-variate data and data fusion considering the driving context are needed to accurately assess the cognitive load. It should be noted that subjective measures, for example, the NASA-TLX [100] or the DALI (driving activity load index) [101], require understanding the importance of physiological features and vehicular features. In this paper, the cognitive load classification was performed based on the baseline (just driving) and n-back task (1-back and 2-back). This approach could have affected the classification performance because the influence of a cognitive loading task (e.g., on working memory) might not be the same for everyone, especially for the 1-back task. It is noteworthy to mention that the cognitive load classification distinguishes among different levels of cognitive-level tasks and does not imply how cognitively loaded participants are performed during the n-back task. In terms of classification, the problem lies in the class noise in the dataset. Apart from the analysis presented in this paper, several other classification experiments [102] have been conducted considering features according to (1) cerebral activities recorded via EEG, (2) cerebral activities recorded via EEG and eye blink waveform via EOG, (3) non-cerebral physiological signals recorded via HRV, GSR, and respiration, and (4) driving behavioural data based on vehicular parameters obtained from the control computer. The results showed poor performance than combining all features as the results presented in this paper. By using only the EEG features from BSet-1 (i.e., baseline = normal + 1-back task, and the task group = 2-back task) dataset, the height accuracy of 74%, 46% sensitivity, and 78% specificity was obtained by the RF algorithm. The performance was decreased using only the EEG features from the BSet-2 (i.e., baseline = normal, and the task group = 1-back task + 2-back task) dataset. Again, RF showed the best performance with 57% accuracy, 61% sensitivity, and 49% specificity. When features from both the EEG and EOG signals were combined, a slight improvement was observed in the classification performance using both the BSet-1 and BSet-2 datasets. Here, k-NN showed the best performance for both BSet-1 and BSet-2 datasets and the accuracy, sensitivity, and specificity were around 75%, 59%, and 81%, respectively. It has been observed that using only the vehicular features classification perform similarly as using the EEG features only. However, a combination of features from non-cerebral physiological signals, i.e., HRV, GSR, and respiration, was found to perform better for the classification compared to using only EEG, vehicular, and a combination of EEG and EOG features. The RF algorithm obtained the best performance using the BSet-1 dataset considering 78% accuracy, 70% sensitivity, and 82% specificity. Similarly, RF showed the best performance using the BSet-2 dataset considering 73% accuracy, 75% sensitivity, and 69% specificity. In all the cases, i.e., using only the EEG feature, a combination of features from EEG and EOG, features from vehicular signal, and combination of features from ECG, GSR, and RR signals the obtained classification accuracy was not more than 50%, and the sensitivity and specificity were around 55% and 60%, respectively. 
Overall, a 10% improvement in the classification performance was observed by using a combination of all multivariate features compared to the performance observed when using only the features from the EEG signals. A 20% improvement in the classification performance for multiclass classification was observed by using a combination of all multivariate features compared to that observed using only the feature based on the vehicular data. The current classification approach implies that it is not individualised; that is, the response pattern is assumed to be the same for all drivers. The scenario-wise classification shows that there is an effect of driving condition on the cognitive load. Thus, integrating contextual information as features can be beneficial to the classification. However, in this work, it was not fully comprehending the consequence of adding contextual features. The limitation of this approach can be overcome by incorporating subjective measures into the study design and adding a wide range of contextual information. Conclusions The objective of this paper was to provide analytics on multivariate data for driver cognitive load classification. The multiclass classification results portray the difficulty to correctly classify when there are imbalance classes in the dataset, which leads to performing the binary classification. These analytics emphasize the study design with a wide range of contextual information and subjective measure to predict or identify the level of cognitive load during driving. It is also found that multicomponent features could improve the overall classification performance. Another important issue of this study was the imbalanced class in the dataset. Hence, in this study, BACC, MCC, and F 1 -score were considered along with accuracy, sensitivity, and specificity. It should be noted that though for some occasions F 1 -score, sensitivity, and specificity showed reasonable measures but looking at MCC it is evident that the models tend to bias towards the class with higher observations. Though the inclusion of contextual feature is inconclusive, yet it is believed that contextual information not only can improve the classification performance but also can provide insights when it requires interpretation of the ML model. It is argued that the n-back task is an efficient task to measure the individual working memory capacity [103]. The scenario wise classification with a BSet-2 could better discriminate between normal driving and n-back task compared to the binary classification with BSet-1. It can be concluded from the result of the scenario wise classification that the cognitive load impairs the driving subtask that depends on cognitive control which is also the suggestion by ASM. Although studies [103,104] found more discriminatory EEG activity patterns between the n-back tasks, those studies only considered the n-back task as the main discriminatory factor. However, in this study the n-back task is adapted for ASM that may influence the classification performance and supports the idea of ASM that the automatic performances of the driving task are unaffected by the cognitive load.
Algorithmic considerations when analysing capture Hi-C data Chromosome conformation capture methodologies have provided insight into the effect of 3D genomic architecture on gene regulation. Capture Hi-C (CHi-C) is a recent extension of Hi-C that improves the effective resolution of chromatin interactions by enriching for defined regions of biological relevance. The varying targeting efficiency between capture regions, however, introduces bias not present in conventional Hi-C, making analysis more complicated. Here we examine salient features of an algorithm that should be considered in evaluating the performance of a program used to analyse CHi-C data in order to infer meaningful interactions. We use the program CHiCAGO to analyse promoter capture Hi-C data generated on 28 different cell lines as a case study. Introduction Chromosome conformation capture (3C) methodologies [1][2][3] have provided insight into the effect of 3D genomic architecture on gene regulation [4][5][6] . They preserve chromatin interactions by cross-linking followed by fragmentation, ligation and sequencing of interacting genomic regions. Hi-C exploits high-throughput paired-end sequencing to retrieve a short sequence from each end of each ligated fragment, allowing all pairwise interactions between fragments to be tested 7 (Figure 1). Chromatin interactions can result from biological functions, such as promoter-enhancer interactions, or from random polymer looping, whereby undirected physical motion of chromatin causes loci to collide. To identify 'true' interactions, it is necessary to account for the contribution from the null hypothesis, largely attributed to constrained Brownian motion and noise 8 . While not completely eliminating background noise, the development of in situ Hi-C, which preserves the integrity of the nucleus during Hi-C library generation, has gone some way to reducing it 3 . Analysis of Hi-C libraries involves filtering of invalid di-tags such as self-ligated pairs or adjacent fragment di-tags 9 before determining statistically significant and biologically important di-tag interactions. The expected frequency of interactions between two fragments decreases with their genomic distance, especially if the fragments lie in different chromosomes 8 . Hence, reliable estimates of the dependence on distance are a prerequisite to any analysis. While Hi-C allows for genome-wide detection of chromatin contacts, its effective resolution is determined by both the restriction fragmentation and the sensitivity of the experiment. Herein we consider data based on restriction enzyme digestion using the 6 bp cutter HindIII, sequencing of the CHi-C library, alignment and filtering of valid di-tags using HiCUP 9 , and identification of significant di-tag interactions using CHiCAGO 8 . Amendments from Version 1 This new version addresses the comments of the reviewers, making a few minor edits to presentation (e.g., changing promotor to promoter, more carefully labelling some figures, including explanations in the main text as well as in methods), and adding further to the discussion. These additions to the discussion focused on two areas: approaches to handling bait-bait pairs, and the Suggested Score Threshold (SST). 
We provide a paragraph to discuss approaches one could take to handling bait-bait pairs when only one direction is significant, highlighting possible areas of future research, and on the SST we mention how potential issues surrounding undersampling written about in the initial CHiCAGO paper may apply to our measure of the false discovery rate. No changes were made to the methods section, and the only changes to the results were that Table 2 and Table 3 were extended. Capture Hi-C (CHi-C) is a recent extension of the Hi-C methodology that improves resolution by enriching defined regions of biological significance 10 (Figure 1). Analysis of CHi-C data is, however, more complicated than conventional Hi-C because: (1) varying targeting efficiency between capture regions introduces a bias not present in Hi-C 8 ; (2) contact maps in CHi-C arise from two distinct sources that have innately different visibility profiles - between the two captured fragments and between captured and non-captured fragments; (3) null hypotheses for each di-tag pair are not independent as these are tested simultaneously, requiring an alternative statistic instead of reliance on raw p-values from hypothesis tests. CHi-C, especially in the guise of promoter capture Hi-C (PCHi-C), is increasingly being used to decipher the genetic basis of aberrant gene expression in cancer. Since cancers rarely have diploid genomes, the analysis of PCHi-C from tumours is further complicated by copy number variation (CNV) 11 and the presence of inter-chromosomal translocations 12 . Here we examine a number of features of an algorithm that should be considered in evaluating the performance of a program used to analyse CHi-C data. Specifically, (i) the appropriateness of the distance-correction used in the model; (ii) the relative importance of weights assigned to model parameters; (iii) whether the null is accurately reproducing the distribution of the large majority of contacts and how thresholds for declaring significant interactions are obtained; (iv) whether the underlying model leads to asymmetry in test statistics of bait-bait pairs; and (v) how an algorithm behaves when processing CHi-C cancer cell line data. As an illustration we consider CHiCAGO 8 as a case study since it is a widely used program for analysing PCHi-C data 13,14 . The algorithm features a novel background correction procedure using a two-component convolution model designed to account for real but expected interactions as well as experimental and sequence-based artefacts. Additionally, CHiCAGO implements a p-value weighting procedure, based on parameters that can be estimated from the data. Raw sequencing data were processed using HiCUP v0.6.1 9 to obtain only valid interaction di-tags aligned to build 38 of the human genome. Summary statistics for each PCHi-C dataset are provided in Supplementary Data 1 (see Extended data 15 ). The significance of interaction frequencies for di-tags with both ends baited (bait-bait) and with only one end baited (bait-other end) was estimated using CHiCAGO v1.1.8 8 . Model considerations We initially considered PCHi-C libraries from the 18 non-tumour cell lines. 
In CHiCAGO, background interactions are modelled by the following components of a Delaporte distribution, which are assumed to be independent: (1) Brownian collisions - modelled by a negative binomial random variable, with its expected level a function of genomic distance, adjustments for biases associated with individual fragments, and a size parameter independent of the interacting pair; (2) assay artefacts/technical noise (i.e. sequencing errors) - modelled by a Poisson random variable, whereby the mean of the Poisson random variable depends on the properties of the interacting fragments but is independent of the genomic distance between them. We examined the validity of the model and the estimation of central parameters. Assuming that in 'small' distance bins technical noise is low, as per CHiCAGO specifications, test statistics and corresponding p-values for the Kolmogorov-Smirnov (KS) test (testing the probability that the observed data are generated by the specified model, aggregated over distance bins) were generated for ACD4 (Table 1) and the other 17 cell lines (Table 2, Extended data 15 ). In most cell lines the p-values associated with the small distance bins were effectively zero but rapidly increased to near-unity in the larger distance bins. This is to be expected, as the asymptotics of Rosa et al. 16 only hold when genomic separation is much larger than the distance between adjacent regions. The notable outlier was GM12878, which was typified by near zero p-values across all distance bins. The exact estimates of bin-wise p-values are not necessarily important, since they will be impacted by true interactions, bait-specific biases permitted by the model, the effect of which is expected to be small, and the distribution of distances within each bin. Nevertheless, the fact that there was no rejection of the null hypothesis at large distance bins implies that the negative binomial model fits the data well for broad-scale behaviour. As technical noise is proportionally greater at large distances, the discrepancy in how well the negative binomial fits over distance cannot be attributed to the KS test ignoring the Poisson component. The 'distance function', a key component of CHiCAGO's implementation of a genomic distance dependence into the mean of the negative binomial, was generated for each cell line. Coefficients of the distance fit curves and plots of the estimated distance function for each cell line are provided in the Extended data 15 . In all 18 cases the cubic spline fitted by CHiCAGO provides a good fit to the data. With the exception of GM12878, there was strong concordance between the theoretical, linear and cubic fits, and the curvature of the cubic spline K further shows GM12878 as an outlier, which may well reflect GM12878 Hi-C libraries being prepared by dilution rather than in situ ligation. In view of GM12878 being an outlier, a linear model of the linear-fit intercept against the linear-fit gradient (Linear Intercept ~ Linear Gradient) was fitted across cell lines with and without GM12878 (y = -0.776 - 14.681x and y = 1.067 - 12.919x, respectively), with the second having a lower residual sum of squares (RSS) and so providing a better fit. This linear fit is consistent with the interactions detected being primarily cis-chromosomal. The assumption that assay artefacts have minimal effect on expected reads in small distance bins was confirmed by calculating the mean technical noise parameter λ and the mean number of trans pairs observed per bait for each cell line. 
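To illustrate why a small technical-noise rate has little effect on counts in small distance bins, the following toy simulation (not CHiCAGO code; the values of μ, r and λ are arbitrary choices) compares a Delaporte-like count (negative binomial plus Poisson) with its negative binomial component alone.

set.seed(3)
mu <- 8; r <- 2; lambda <- 0.2                  # arbitrary illustrative values
n  <- 1e5
delaporte_like <- rnbinom(n, size = r, mu = mu) + rpois(n, lambda)
nb_only        <- rnbinom(n, size = r, mu = mu)
rbind(delaporte_like = c(mean = mean(delaporte_like), var = var(delaporte_like)),
      nb_only        = c(mean = mean(nb_only),        var = var(nb_only)))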
Box plots of the parameter estimate per bait or other-end pool, shown in Supplementary Figure 2 (see Extended data 15 ), adhere largely to the patterns expected as laid out in the CHiCAGO vignette 8 , where it is stated that to interpret the noise box plots one needs to check that the "distributions' median and variance should trend upwards as we move from left to right". Score statistic CHiCAGO implements a novel score statistic as a proxy for the strength of evidence supporting an interaction 8 . We investigated the suitability of this statistic and the threshold advocated for declaring significance. Initially, we compared CHiCAGO interaction scores between bait-bait pairs. Intuitively it might be assumed that the score for baits AB will be identical to BA. However, this is not the case, as evidenced by plots of score_ij against score_ji statistics for ACD4 (Figure 2) and the other 17 cell lines (Supplementary Figure 3). The asymmetry arises because when CHiCAGO constructs bait-end biases and other-end biases, the former are assumed to be fixed for each bait, whereas the other-end bias is assumed to be drawn from a random distribution, resulting in a different number of expected reads for the pair (mean correlation 0.4854, interquartile range (IQR) 0.0970). To further understand this asymmetry, we define an interaction as 'reversible' if, for a given threshold, the significance of the interaction does not depend on the direction in which the score was calculated. The mean percentage of reversible interactions was only 23.06% (IQR = 4.06%); the presence of non-reversible interactions represents a failure of the algorithm to consistently assign biological relevance to a bait-bait interaction. Significance threshold The threshold advocated by the developers of CHiCAGO for declaring a significant interaction is a score_ij > 5 8 , referred to as the normal score threshold (NST). We investigated power and false-discovery rate (FDR) at this threshold. To evaluate power, or equivalently the false-negative rate (FNR), we calculated the proportion of interactions with log(p) < -10 (considering the null hypothesis of the Delaporte distribution) and score > 5. The threshold log(p) < -10 (i.e. 'robust' interactions) was used because, in the mathematical specification of CHiCAGO, it is suggested that reproducible interactions are those that pass this threshold in all replicates 8 . Because the principle underlying the CHiCAGO score statistic precluded simply using p-values to identify true interactions, except in extreme cases, we used the Jaccard index as a proxy for FDR (assuming interactions passing the score threshold in all replicates are true, and using the Jaccard index to measure the proportion of total interactions observed over all significant replicates; see methods section). To quantify the suitability of the score as a statistic at alternate thresholds, we calculated alternate score thresholds with the aim of improving the FNR and FDR, as well as the family-wise error rate (FWER) (Table 2). Across all 18 cell lines the threshold to fix the theoretical FWER was a score ≳15; however, this limited discovery to O(10^4) interactions per cell line, compared with O(10^5) interactions when imposing a score threshold of 5. Weighting procedure CHiCAGO scores are computed from raw p-values corrected for the prior probability of true interaction, fitted by a four-parameter logistic regression model 8 . 
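As a sketch of what a four-parameter logistic weighting curve looks like, the snippet below fits a generic 4PL of prior probability against log distance to simulated points; the parameterisation, the simulated data and the use of nls are illustrative assumptions and need not match CHiCAGO's internal implementation.

fourPL <- function(x, lo, hi, mid, slope)       # generic 4PL, decreasing in x
  lo + (hi - lo) / (1 + exp(slope * (x - mid)))

set.seed(6)
log_dist <- seq(8, 16, by = 0.5)                # log genomic distance (bin midpoints)
prior    <- fourPL(log_dist, 0.02, 0.35, 12, 1.2) + rnorm(length(log_dist), sd = 0.01)

fit <- nls(prior ~ fourPL(log_dist, lo, hi, mid, slope),
           start = list(lo = 0.01, hi = 0.3, mid = 12, slope = 1))
coef(fit)
sum(resid(fit)^2)                               # RSS, the measure used to compare fits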
The parameters can be calculated from the reproducibility of interaction frequencies at different genomic distances for the cell line; otherwise, by default, the program uses estimates from macrophage data. To investigate the appropriateness of the estimation method and the extent to which the choice of weighting affects the identification of significant interactions, we used CHiCAGO's pre-built method to calculate parameters for each cell line. CHiCAGO uses the observed interactions to fit a curve of true-interaction prior probability that decreases monotonically with distance. The monotonicity of the model is intuitive because, in general, baits that have a greater separation distance will have a lower prior probability of interacting. The lack of monotonicity of the observed data (a measure with range 0 to 1, with 0 corresponding to perfect monotonicity) had mean 0.3222, IQR 0.0917 (4 d.p.). While it is not possible to quantify how much of the lack of monotonicity is due to expected variance without making additional hypotheses about the model, visual inspection of the data to which the weighting curves produced by CHiCAGO are fitted, as shown in Supplementary Figure 4 (see Extended data 15 ), shows a local peak around a log distance of 12. This behaviour is not detectable by the logistic model, which is monotonically decreasing. This suggests that the non-zero lack of monotonicity is caused by underlying biological features. The RSS for the logistic regression had mean 26.08, IQR 6.50, with the GM12878 cell line an outlier (RSS 73.11). Visually this is identifiable as the fit is near horizontal for the GM12878 cell line. GM12878 in fact has a lower RSS than one would expect, as there were only seven distance bins contributing, all others having zero observed interactions. This is a more general phenomenon whereby fits in which there are zero-observed-interaction bins will have an artificially lower RSS. As the threshold of log(p) < -10 for defining 'true' interactions is somewhat arbitrary yet affects the weight parameters, we sought an alternate p-value threshold and recalculated the RSS. With the weight parameters calculated by CHiCAGO that minimised the RSS, key statistics for the data were recalculated, giving a mean change of 0.0153 for the score correlation, 1.36% for the reversibility, 0.0355 for the FDR at the NST, and -0.0708 for the FNR. This suggests that, on balance, using the custom calculated weight parameters improves the quality of the resulting calculations. Moreover, the Jaccard index for concordance between significant interactions with and without suggested weights had mean and IQR values of 0.8732 and 0.0737, respectively, demonstrating that the choice of weights affects which interactions are reported as significant. Applying the new weights had no significant effect on the number of interactions observed at the NST. Application to cancer cell lines We next analysed PCHi-C data generated on the 10 cancer cell lines. Plots of the score symmetry for bait-bait pairs are shown in Supplementary Figure 6 (see Extended data 15 ). Cancer cell lines tended to have a higher non-zero score correlation for bait-bait pairs (mean 0.5136, IQR 0.1803) but a significantly lower percentage of reversible interactions (mean 14.54%, IQR 6.54%). The BLN2 and BLN3 cell lines showed substantive aberrant behaviour in their plots, in which proximal bait-bait pairs in a similar region extended in long 'arms' away from the theoretical fit. 
This was not seen in any of the other cell lines and is likely to be a consequence of a vastly different underlying genomic architecture. Summary statistics assessing the quality of the score threshold were again calculated. The FNR was higher for cancer cell lines (mean 0.197, IQR 0.069). Suggested score thresholds to improve the FNR or theoretical FWER (Table 3) were similar to those seen for non-cancer cell lines. The data again showed a lack of monotonicity (mean 0.5999, IQR 0.0865 (4 d.p.); mean RSS 55.4, IQR 56.9), which was significantly higher than that observed in non-cancer cell lines. This distortion was most pronounced with the BLN2, BLN3 and HT29 cell lines, which all showed very low concordance between the fit and data points. Mean changes in summary statistics with CHiCAGO-calculated weight parameters were 0.0292 for score correlation, 2.27% for reversibility, 0.0683 for FDR at the NST and -0.0870 for the FNR. The corresponding mean Jaccard index was 0.6878 (IQR 0.2010), highlighting the importance of using derived weights in analyses. Finally, we examined heatmaps of Hi-C interaction frequencies to detect potential cancer-related chromosomal abnormalities, finding that BLN2 and BLN3 exhibit large-scale inter-chromosomal translocations (Supplementary Figure 7, Extended data 15 ). However, such features are unlikely to be sufficient to solely account for the increased score asymmetry observed in cancer cell lines. Discussion When utilising any statistical test, it is necessary to verify that the required properties of the input data are satisfied, and that under these assumptions sensible conclusions are drawn. In this study we have sought to evaluate CHiCAGO as a methodology for identifying statistically significant genomic interactions in PCHi-C data. This evaluation included examination of: (i) the suitability of the distance-correction model employed; (ii) evidence of discordance in association statistics at bait-bait pairs; (iii) significance thresholds of called interactions; (iv) importance of weight parameter estimates; (v) specific considerations for its application to analysis of cancer cell-line data. Our findings indicate that the Delaporte null fitted the data well in large distance bins, with the assumption that the Poisson contribution is small being verified. The cubic spline distance function fitted the data well, with the linear fit being sufficient for most non-cancerous cell lines. The symmetry in the score parameter was very low for bait-bait pairs. The default CHiCAGO score threshold of > 5 was typically too low to ensure reliability in the data, evaluated either from the FNR or the FWER, but correspondingly the sensitivity to detect interactions was greater than at higher score thresholds, with a resulting higher false discovery rate. In handling bait-bait pairs for which there is asymmetry in the assigned score, either the maximum or the minimum of the score and its reverse could be reported, depending on whether false negatives or false positives are the greater concern. Alternative approaches worthy of consideration might be combining the scores weighted by the expected score variance or imposing a framework with symmetry. We leave studies of the suitability of such approaches to future work. 
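A minimal sketch of the collapsing strategies mentioned above, assuming only that a score and its reverse are available for each bait-bait pair (the variance-weighted option additionally assumes per-direction variance estimates exist, which is hypothetical here):

combine_scores <- function(s_ij, s_ji, var_ij = 1, var_ji = 1,
                           method = c("max", "min", "weighted")) {
  method <- match.arg(method)
  switch(method,
         max      = pmax(s_ij, s_ji),     # favours sensitivity (fewer false negatives)
         min      = pmin(s_ij, s_ji),     # favours specificity (fewer false positives)
         weighted = (s_ij / var_ij + s_ji / var_ji) / (1 / var_ij + 1 / var_ji))
}
combine_scores(c(6.2, 4.1), c(3.8, 5.6), method = "max")   # returns 6.2 5.6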
Estimating biases for other-ends is inherently more difficult, as noted in the original mathematical specification for CHiCAGO, due to the comparatively low number of nearby bait fragments, and so for discovery we would not expect bait-bait pairs to be a large source of biological information. Using custom cell-line-specific weight parameters marginally improved summary statistics of the data compared to reliance on default parameters. The overlap of significant interactions with and without suggested parameters was around 90%, demonstrating the presence of either false positives or false negatives when using standard weight parameters. These features were also seen, albeit more pronounced, in cancer cell lines. Using custom weights improved the metrics applied to the output, as expected since it provides the theory-mandated adjustment of the p-values. In a recent best-practice guide for CHiCAGO published by the authors, further aspects of parameter tuning have been proposed 17 . As the framework we provide only considers how CHiCAGO processes input data, our methodology is largely resistant to limitations due to the underlying CHi-C inputs. There are small differences in the two versions of the designed oligonucleotide baits used to capture promoter fragments between cancer and non-cancer cell line data, but as CHiCAGO produces bait-specific biases as part of its model, we should not expect this to have a major influence on our conclusions. For calculations involving the FDR, FNR, and FWER, due to the lack of bona fide reference interactions, we were reliant on theoretically equivalent proxies. As a result, point estimates will be inherently imprecise and only allow us to make comparisons, or to reference confidence intervals, between different score thresholds. As demonstrated in Cairns et al. 8 (Figure S4), pairs with a low number of counts that are nevertheless significant (because they occur at large genomic distance) within a dataset are unlikely to be identified as significant when viewing a subset of interactions. This was presented in the context of undersampling, but suggests that the SST value obtained by optimising the FDR will be skewed towards a threshold which removes more low-read interactions. As the score is intended to be a measure of significance that does not depend on distance, we do not expect this skew to be strong. Furthermore, although we provide a large range of statistics as an example of how to assess Hi-C algorithms, there are some visual features that are not necessarily amenable to numerical description. Criteria for selecting the best summary statistics to efficiently assess algorithms are desirable, something that may potentially be tractable by applying approximate Bayesian computation 18 . From a numerical and computational perspective our study highlights a few key points. The fact that the cubic distance function fit implemented by CHiCAGO correctly matched the data for every cell line is unsurprising, given the large number of parameters it was able to utilise. We should similarly expect the same of the logistic regression, and so large failures to fit the data are indicative of the unsuitability of the underlying form of the curve for the data it is approximating. Moreover, the exact implementation of methodologies is demonstrated to be important. 
Numerical optimisation improved the FDR on average, but Nelder-Mead (NM) often produced thresholds too large to be useful in discovery, serving to demonstrate the importance of understanding the underlying processes behind standard R functions, as Broyden-Fletcher-Goldfarb-Shanno (BFGS) provided more practical thresholds. This difference in behaviour stems both from the fact that the Jaccard index is not a continuous function of the score threshold, and from the fact that NM is a heuristic algorithm. Using BFGS highlighted certain cell lines for which the SST was considerably different to the NST, but we did not see any meaningful biological reason for this occurrence. It is possible that this reflects the sparsity skew discussed previously. Inevitably, a challenge in evaluating the performance of Hi-C and CHi-C algorithms is not having a large "gold standard" reference set of bona fide true interactions in a given cell line against which to check whether previously identified interactions are recovered. One way of generating a "null model" for comparison of di-tag interaction frequencies is to sequence a "random ligation" library prepared by reversal of cross-links prior to ligation 10 . This has, however, not generally been standard practice in the preparation of large numbers of libraries. Analysis of CHi-C data generated from cancer cells clearly presents challenges beyond those of diploid cells. Translocations affect distance estimates, leading to highly significant interaction p-values between translocation breakpoints. As a prelude to any analysis of CHi-C, examining pre-capture Hi-C data can be used to identify translocations and inform downstream analyses. Other molecular abnormalities in cancer cell lines, such as focal amplifications/deletions, regions of kataegis and chromothripsis, are more intractable sources of bias. Comprehensively accounting for such aberrations ideally requires de novo assembly of the cancer genome being investigated. In conclusion, our analysis highlights a number of features that should be considered when evaluating CHi-C algorithms. In application to CHiCAGO, while we saw that the underlying null hypothesis was entirely sensible, assigning significance to a given interaction is not entirely straightforward. It is clear that many issues associated with processing of CHi-C data are exacerbated when studying cancer-derived data because of the complex nature of their genomes. Datasets analysed The 28 cell lines and PCHi-C datasets analysed are detailed in the Supplementary Data (see Extended data 15 ). Genome-wide heatmaps of Hi-C contacts were generated using HiCExplorer v2.1.1 21 to identify large-scale chromosomal translocations. Evaluation of CHiCAGO Cairns et al. 8 provide a mathematical specification of the algorithm used by CHiCAGO, and we utilise the same notation. For pairs less than 1.5 Megabasepairs (Mbp) apart, the CHiCAGO algorithm assumes that the contribution to the total number of counts from the 'technical noise' component of the null model employed is sufficiently lower than that from the 'Brownian' component, so it is reasonable to approximate the model by its negative binomial component, $N \sim \mathrm{NB}(\mu, r)$, where $\mathrm{NB}(\mu, r)$ is a negative binomial distribution with mean $\mu$, size parameter $r$ and probability mass function $P(N = k) = \frac{\Gamma(k + r)}{k!\,\Gamma(r)} \left(\frac{r}{r + \mu}\right)^{r} \left(\frac{\mu}{r + \mu}\right)^{k}$. Discrete Kolmogorov-Smirnov test statistics for goodness of fit were calculated in each distance bin B_b. These were calculated under the null model specified above, where w is the width of the distance bin, and s_1, s_2 are drawn from the bait and other-end bias distributions. 
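A self-contained sketch of a bin-wise discrete goodness-of-fit check in this spirit is given below; it uses a plain negative binomial null and a Monte Carlo p-value, and omits the bait and other-end scaling factors s_1, s_2 for brevity, so it is an illustration rather than a reproduction of the published procedure.

set.seed(4)
mu <- 6; r <- 1.5                               # arbitrary null parameters
obs <- rnbinom(5000, size = r, mu = mu)         # stand-in for counts in one distance bin

ks_discrete <- function(x, mu, r) {
  grid <- 0:max(x)
  max(abs(ecdf(x)(grid) - pnbinom(grid, size = r, mu = mu)))
}
d_obs <- ks_discrete(obs, mu, r)
d_sim <- replicate(1000, ks_discrete(rnbinom(5000, size = r, mu = mu), mu, r))
(1 + sum(d_sim >= d_obs)) / (1 + length(d_sim)) # Monte Carlo p-value estimator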
We implemented Monte Carlo hypothesis testing to obtain p-values, using 5,000 simulations of 5,000 pairs to measure the D-statistic, using the standard p-value estimator of Davison and Hinkley (1997) 22 . Deriving a statistic per distance bin allowed us to examine how appropriate the null hypothesis is for separate distance bins. A cell-wise Bonferroni correction was applied to the significance threshold. We plotted estimates of the distance function f(d) against the data, which is the geometric mean of the non-zero reads between bait-other end pairs in each distance bin, alongside a linear fit and a 'theoretical' fit. Specifically, we fit $\log f(d) = a_0 + a_1 \log d$ (linear) and $\log f(d) = a_0 + a_1 \log d + a_2 (\log d)^2 + a_3 (\log d)^3$ (cubic). The 'theoretical' fit is of the form $f(d) \propto d^{-1}$, as suggested to be the large-distance limit by Rosa et al. 16 . We further calculated the integral of the curvature of f over the distances considered, [10^4, 1.5 × 10^6] base pairs, as a measure of the deviation from a power law, which would be represented by a straight line on a log-log scale. Specifically, writing $y(x) = \log f(d)$ as a function of $x = \log d$, we calculated K, given by $K = \int_{\log 10^4}^{\log(1.5 \times 10^6)} \frac{|y''(x)|}{\left(1 + y'(x)^2\right)^{3/2}} \, \mathrm{d}x$. The limits of integration can be chosen because, outside of this range, the distance function is extrapolated linearly on a log-log scale, where the curvature k will be zero. Careful treatment of the second derivatives at these limits is not necessary. To validate the assumption that the technical noise will have minimal effect at small distance, the mean λ parameter for the pairs was calculated as per CHiCAGO. Moreover, boxplots were produced demonstrating the distribution of the parameter in each pool used in the estimation procedure. To adjust for multiple testing in CHiCAGO, p-values are weighted according to the prior probability of a given null hypothesis being true or false 8 . By default, weights are those estimated from human macrophage data, which define a reproducible interaction as one for which log p < -10 in all replicates. To evaluate this definition, we considered the function g(ρ), the proportion of interactions with log p < ρ that nevertheless fail the score threshold, $g(\rho) = \#\{\text{pairs}: \log p < \rho,\ \text{score} \le 5\} \,/\, \#\{\text{pairs}: \log p < \rho\}$. For ρ = -10, g gives an FNR for reproducible interactions. Furthermore, we calculated the value of ρ that gives g(ρ) = 0.05 (assuming that such a value exists) to determine a p-value threshold for reproducible interactions coherent with the score statistic. CHiCAGO's algorithm is not symmetric in its treatment of bait-bait pairs, and we found the correlation between non-zero values of score_ij and score_ji for bait-bait pairs. By excluding pairs where both scores were zero, we avoided correlations being artificially inflated because there are many more non-interacting pairs than interacting pairs. We further computed the proportion of the bait-bait pairs that passed the advocated score threshold of > 5 in both directions, relative to those passing the threshold in at least one direction, that is $\#\{(i,j): \text{score}_{ij} > 5 \text{ and } \text{score}_{ji} > 5\} \,/\, \#\{(i,j): \text{score}_{ij} > 5 \text{ or } \text{score}_{ji} > 5\}$. To examine the reproducibility of interactions called as significant by CHiCAGO, we produced alternate score thresholds based on: (i) a Bonferroni correction to control the FWER in the smallest distance bin; (ii) controlling the FNR; and (iii) minimising the FDR based on the Jaccard index between replicates. Specifically, for the Bonferroni correction, as thresholding at a score α requires that the evidence for an interaction exceeds that of a proximal pair with p-value e^-α, we imposed the threshold $\alpha = -\log(0.05 / N)$, where N is the number of pairs tested in the smallest distance bin. The measure of reproducibility used was the Jaccard index of the sets of significant interactions in each replicate, which are reported as FDRs. 
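To make the reproducibility summaries concrete, the sketch below computes, on simulated scores, the proportion of reversible bait-bait calls and the Jaccard-based FDR proxy across a few thresholds; all values are simulated stand-ins rather than real CHi-C output.

set.seed(7)
n <- 1e4
score_ij <- pmax(rnorm(n, 3, 3), 0)               # direction i -> j
score_ji <- pmax(score_ij + rnorm(n, sd = 2), 0)  # imperfectly correlated reverse score
rep1 <- pmax(rnorm(n, 3, 3), 0)                   # same pairs in two replicate libraries
rep2 <- pmax(rep1 + rnorm(n, sd = 1.5), 0)

reversible_prop <- function(a, b, thr)
  sum(a > thr & b > thr) / sum(a > thr | b > thr)
jaccard_fdr <- function(a, b, thr) {
  s1 <- which(a > thr); s2 <- which(b > thr)
  1 - length(intersect(s1, s2)) / length(union(s1, s2))
}

thr <- c(3, 5, 10)
data.frame(threshold  = thr,
           reversible = sapply(thr, reversible_prop, a = score_ij, b = score_ji),
           fdr_proxy  = sapply(thr, jaccard_fdr, a = rep1, b = rep2),
           n_called   = sapply(thr, function(t) sum(rep1 > t)))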
Under the assumption that true interactions will be significant in all replicates and false interactions will not, we have the equivalence $\text{False Discovery Rate} = 1 - \text{Jaccard Index}$. At each score threshold, the number of interactions was reported to balance reliability and sensitivity. To demonstrate the importance of the choice of optimisation methodology in minimising the FDR for a given cell line, two optimisation methods were used: Bound Limited-memory BFGS (L-BFGS-B), and the default NM utilised by optim in R. We evaluated CHiCAGO's weighting procedure. In the estimation of the weight parameters, CHiCAGO's algorithm fits a monotonic decreasing curve to the observed prior probability of interaction through bounded logistic regression. To examine the extent to which monotonicity is observed in the data, a lack-of-monotonicity statistic was generated for the data; the formula used is natural in the sense that if the sequence (v_i) is monotonic, the lack of monotonicity is 0. To evaluate the fit we calculated an RSS; distance bins in which no interactions were observed were neglected, since for these bins CHiCAGO estimates the prior probability of interaction to be 0, hence log(p) is infinite. Thus, these points provide no information for the RSS, but nevertheless indicate a failure of the fitted model to represent the data, and so the presence of bins in which no interactions were observed is also reported. We estimated the weight parameters at the thresholds of -10 and ρ for each cell line. The parameters that provided the 'better' fit (i.e. had a lower RSS) were used and CHiCAGO was re-run, after which a Jaccard index was calculated for the sets of significant interactions called by CHiCAGO using either the default or the updated weight parameters. All methods described were implemented in R version 3.6.3. A copy of the program, modified for readability above utility, is released as Extended data (Trim_of_CHiCAGO_evaluation.R) 15 . The paper highlights the importance of training p-value weights on one's own data and the challenges of choosing an optimal score threshold in the absence of a gold standard. As a developer of CHiCAGO, I thank the authors for an independent evaluation of our pipeline and am happy to see that it has withstood most tests. I only have a very small number of comments: 1. "In most cell lines the p-values associated with the small distance bins were effectively zero but rapidly increased to near-unity in the larger distance bins". In the authors' opinion, why are the p-values so low in the small distance bins? It would be good to state this more explicitly. 2. It would be good to unpick a bit more the phenomena that underlie "non-reversible" bait-to-bait contacts, which are generally expected due to the asymmetrical nature of CHi-C data and CHiCAGO's analytical approach. Theoretically, this may be to do with challenges in estimating s_i's (other-end scaling factors) based on much less information than is available for s_j's (bait scaling factors). Secondly, this could be due to differences in the coverage of respective baits, potentially leading to differential sensitivity in signal detection depending on the viewpoint. Finally, from Fig 2 it seems that there are a few cases where the difference in score is quite small and so their position on either side of the threshold is incidental. It might be worth discussing these possible situations in a bit more detail. 
Also, what is the authors' suggested strategy for dealing with these situations? In our lab, we pick the pair with the higher score, as we believe that false negatives are generally a bigger issue in CHi-C and CHiCAGO than false positives. Alternatively, bait-to-bait contacts could be analysed separately as a symmetrical matrix using appropriate tools. 3. I wonder to what extent the score threshold selection methods based on reproducibility across replicates are inflated due to sparsity issues (as we showed in Cairns et al., 2016, Fig S4) 1 . This is the reason why in Javierre 2 we used a different approach (based on Blangiardo et al.) 3 to show consistency between datasets, while Freire-Pritchett et al. (eLife 2017; Nat Protoc 2021) 4,5 use a threshold-tuning approach based on balancing enrichment and recall of PIRs containing enhancer-associated histone marks. It would be good to discuss these points. Author response: ... those highlighted by the reviewer. We mention the possible skew effect of sparsity, which we expect to be small, and comment on the possible link to comment 9 of Reviewer 1. These changes you will see in an updated version of the paper. Having made changes to the manuscript, we hope it is now suitable for indexing. Borbala Mifsud (College of Health and Life Sciences, Hamad Bin Khalifa University, Doha, Qatar): Disney-Hogg et al. have considered what the important features of algorithms used for identifying biologically meaningful interactions in capture Hi-C analysis are. They used CHiCAGO, a widely applied algorithm, as their case study and assessed the appropriateness of the distance function used for correction and the effect of altering the weights in the model, looked at how thresholding influences the resulting significant interactions, and analysed the reciprocity of bait-bait interactions. Given the lack of a set of gold standard interactions, this evaluation is based on the assumption that true interactions are reproducible. They performed the analyses both in normal hematopoietic cells and in cancer cell lines. They found that the cubic spline fits the data well and that the model used by CHiCAGO creates highly asymmetric results for bait-bait interactions. Changing the weights to custom calculated ones slightly improves the quality of the identified interactions. The suggested threshold for significance is appropriate for most cell lines to optimize FNR and FDR; however, there are notable exceptions to this. There are a number of methods developed for capture Hi-C data analysis, and this approach can be applied to evaluate and compare their performance; therefore it is a valuable contribution to the field. Minor comments: 1. Promoter is the correct term for the genomic region instead of promotor. 2. Supplementary data 1 is missing information on the captured read count. 3. In Figure 1, the fragmentation step is missing after the ligation ("Fragmentation, biotin pulldown, adapter ligation and PCR"). 4. Table 1 would probably be easier to see in a plot where the test statistics and the p-value are plotted against the distance. 5. Please add labels to the scales in Supp. Figure 2. 6. In figures where the distance is one of the variables, could you indicate the unit? 7. The definition of FDR based on Jaccard distance should be in the main text. 8. In Table 2, it would add valuable information if the number of significant interactions were included for each threshold. 
9. Could you comment on the observation that the optimal threshold for FDR using BFGS was mostly around the recommended threshold, but in some cases it was very different? Do those data sets share any characteristics that could explain it? Are sufficient details provided to allow replication of the method development and its use by others? Yes. If any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes. Are the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes. Author response: We apologise for the delay in responding; however, this unfortunate delay was the consequence of me starting a PhD on an unrelated subject, compounded by Covid-related issues. All your comments were extremely insightful and pertinent, and we have largely implemented all of them.
Improvement of Uveal and Capsular Biocompatibility of Hydrophobic Acrylic Intraocular Lens by Surface Grafting with 2-Methacryloyloxyethyl Phosphorylcholine-Methacrylic Acid Copolymer Biocompatibility of intraocular lens (IOL) is critical to vision reconstruction after cataract surgery. Foldable hydrophobic acrylic IOL is vulnerable to the adhesion of extracellular matrix proteins and cells, leading to increased incidence of postoperative inflammation and capsule opacification. To increase IOL biocompatibility, we synthesized a hydrophilic copolymer P(MPC-MAA) and grafted the copolymer onto the surface of IOL through air plasma treatment. X-ray photoelectron spectroscopy, atomic force microscopy and static water contact angle were used to characterize chemical changes, topography and hydrophilicity of the IOL surface, respectively. Quartz crystal microbalance with dissipation (QCM-D) showed that P(MPC-MAA) modified IOLs were resistant to protein adsorption. Moreover, P(MPC-MAA) modification inhibited adhesion and proliferation of lens epithelial cells (LECs) in vitro. To analyze uveal and capsular biocompatibility in vivo, we implanted the P(MPC-MAA) modified IOLs into rabbits after phacoemulsification. P(MPC-MAA) modification significantly reduced postoperative inflammation and anterior capsule opacification (ACO), and did not affect posterior capsule opacification (PCO). Collectively, our study suggests that surface modification by P(MPC-MAA) can significantly improve uveal and capsular biocompatibility of hydrophobic acrylic IOL, which could potentially benefit patients with blood-aqueous barrier damage. Results P(MPC-MAA) was synthesized and grafted onto the IOL surface. P(MPC-MAA) copolymer was synthesized via free radical polymerization. Fourier transform infrared (FT-IR) spectroscopy and proton nuclear magnetic resonance (1H NMR) spectroscopy of P(MPC-MAA) are shown in Fig. 1. A transmission absorption peak was observed at 1,720 cm−1 for all of the samples (Fig. 1a), which corresponded to the carbonyl group (C=O) in the PMAA and P(MPC-MAA). However, an absorption peak at 1,080 cm−1 was observed only in the spectra for P(MPC-MAA), which corresponded to the phosphate group (P-O) in the MPC unit 30,31 . The proton signals at 3.2 ppm were observed in the 1H NMR spectrum of P(MPC-MAA) (Fig. 1b) and were attributed to -N+(CH3)3 of the MPC units 30,32,33 . Collectively, these results demonstrated that the P(MPC-MAA) copolymer was successfully synthesized. The molar ratio of MPC to MAA was 5.7:4.3, calculated from the 1H NMR spectrum. The molecular weight (Mw) and polydispersity index (PDI, Mw/Mn) of the P(MPC-MAA) copolymer were 2.3 × 10^5 and 3.02, respectively (Supplementary Fig. S2). To construct a protein-resistant IOL surface, the P(MPC-MAA) copolymer was grafted onto the IOL surface via plasma technology. Untreated hydrophobic IOL, IOL treated by plasma alone, and P(MPC-MAA) modified IOL are abbreviated as IOL, IOL-Plasma and IOL-P(MPC-MAA), respectively. X-ray photoelectron spectroscopy (XPS) spectra of the binding energy regions of the nitrogen (N) and phosphorus (P) electrons of IOL, IOL-Plasma and IOL-P(MPC-MAA) are shown in Fig. 1c, d. Relative intensities of the nitrogen element are listed in Supplementary Table S1. Compared to IOL, a strong and broad N1s peak at approximately 400 eV appeared on IOL-Plasma, indicating that plasma treatment was achieved successfully [32][33][34] . After P(MPC-MAA) grafting, peaks at 401.96 and 134 eV appeared on IOL-P(MPC-MAA). 
These peaks corresponded to the -N+(CH3)3 and phosphate groups attributed to the MPC unit. Meanwhile, the peak at 400.04 eV corresponded to -NH-C(=O). These data indicate that P(MPC-MAA) was successfully grafted onto the IOL surface via the amidation reaction. Surface characterization of the IOLs. Surface topography affects protein adsorption and subsequent cell behaviors 35 . We first characterized the surface morphology of the samples by atomic force microscopy (AFM) (Fig. 2). IOL had a surface roughness of 0.787 nm, exhibiting a relatively even morphology with a few particles and shallow grooves (Fig. 2a,b). IOL-Plasma had many deep grooves on the surface, and the roughness was increased to 4.818 nm (Fig. 2c,d). IOL-P(MPC-MAA) exhibited many wave-like clusters of polymer chains, and the surface roughness was 3.469 nm (Fig. 2e,f). Next, we measured the water contact angles (WCAs) to characterize the hydrophilicity of the IOL surface (Fig. 3a,b). The WCA of IOL was 78.9 ± 2.2°, suggesting a hydrophobic surface property. Plasma treatment introduced amino groups onto the IOL surface, and the WCA of IOL-Plasma decreased to 21.8 ± 5.0°, indicating increased surface hydrophilicity. The WCA of IOL-P(MPC-MAA) also decreased, to 24.5 ± 3.1°. To investigate the electrokinetic properties of the samples, we measured the zeta potential of the samples (Fig. 3c). At pH 7.2, the zeta potential of IOL was −13.6 mV. The average zeta potential of IOL-Plasma increased to −11.5 mV due to the introduction of positively charged amino groups, while the average zeta potential of IOL-P(MPC-MAA) decreased to −16.4 mV due to the introduction of the negatively charged carboxylic acid groups. The optical characteristics of the samples, such as diopter, resolution and transmission properties, demonstrated no significant differences between IOL-P(MPC-MAA) and IOL. The haptics of all groups could endure bending and stretching 2.5 million times with a compression amplitude of ±0.25 mm. These optical and physical properties meet the standards of the State Food and Drug Administration (SFDA) in China. IOL-P(MPC-MAA) inhibits protein adsorption. Protein adsorption is the first phenomenon observed after IOL implantation, and will affect subsequent cell interactions at the material-tissue interface in the following minutes or hours 8 . We used bovine serum albumin (BSA) to monitor protein adsorption on the IOL surface by quartz crystal microbalance with dissipation (QCM-D) analysis (Fig. 3d). BSA adsorption on IOL was 130.8 ± 9.9 ng/cm2, which was similar to that on other hydrophobic IOL surfaces we previously reported 36 . Compared to IOL, BSA adsorption on IOL-Plasma decreased to 43.1 ± 8.2 ng/cm2, and BSA adsorption on IOL-P(MPC-MAA) further decreased to 14.5 ± 3.1 ng/cm2. These data were consistent with previous reports that increased surface hydrophilicity and introduction of negative charges onto the material surface can significantly decrease protein adsorption 16,17 . IOL-P(MPC-MAA) inhibits the adhesion and proliferation of lens epithelial cells in vitro. Cell interactions at the material-tissue interface include an initial phase of cell adhesion followed by subsequent cell proliferation and migration 12 . We used the human lens epithelial cell (LEC) line SRA01/04 to evaluate cell behaviors on modified IOLs in vitro. 
Adhesion of LECs on IOL-P(MPC-MAA) (107.1 ± 5.1/mm2) was significantly decreased compared to that on IOL (201.7 ± 8.1/mm2) and IOL-Plasma (176.7 ± 8.9/mm2) (Fig. 4a,b). However, there was no significant difference between cell adhesion on IOL and IOL-Plasma (P = 0.174). In order to characterize cell proliferation on the IOL surfaces, we incubated LECs on the IOLs for 24 and 48 hours and performed a cell viability assay. Compared to IOL and IOL-Plasma, IOL-P(MPC-MAA) significantly decreased cell proliferation after 24 and 48 hours of incubation (Fig. 4c). Collectively, these results demonstrate that P(MPC-MAA) modification significantly increases the cell repellency of the IOL surface. IOL-P(MPC-MAA) reduces postoperative inflammation. Uveal biocompatibility of the IOL can be assessed by the severity of postoperative inflammation 8 . Breakdown of the blood-aqueous barrier and the foreign body reaction to the IOL implant result in release of protein and cells into the anterior chamber, which can be manifested as anterior chamber flare (ACF) and anterior chamber cell (ACC), respectively 37 . Therefore, we first evaluated ACF and ACC scores as indicators of inflammation (Fig. 5a,b). Similar to the inflammatory responses in human patients after IOL implantation 4,38 , both ACF and ACC scores peaked 1 day postoperatively, and then decreased to the baseline after 4 weeks. Rabbit eyes with implantation of IOL-P(MPC-MAA) had significantly lower ACF and ACC scores 1 day, 4 days, and 1 week after surgery. Persistent inflammation may cause iris posterior synechiae (IPS), which refers to the adhesion of the iris to the anterior surface of the IOL or lens capsule. Eight weeks after surgery, slit lamp examination showed that the IOL-P(MPC-MAA) implantation group had a significantly lower IPS score than the IOL implantation group (Fig. 5c, Supplementary Fig. S3a). We also noticed other postoperative complications in the IOL and IOL-Plasma groups, including pupil capture (1 eye in the IOL group and 1 eye in the IOL-Plasma group), IOL displacement (1 eye in the IOL group and 1 eye in the IOL-Plasma group) and severe cortical proliferation (1 eye in the IOL group) (Supplementary Fig. S3b). However, no obvious postoperative complication due to inflammation was found in the IOL-P(MPC-MAA) implantation group. Intraocular pressure (IOP) values were within the normal range in all the groups (Supplementary Fig. S4). To further characterize the cellular response to the IOL implants, we extracted the IOLs 8 weeks after surgery and performed scanning electron microscopy (SEM). A large amount of amorphous debris and many polygonal cells were found adhering to the surfaces of IOL (Fig. 5d,e). However, only a little debris and a few small round cells were found on the surfaces of IOL-P(MPC-MAA) (Fig. 5f,g). Collectively, these results indicate that P(MPC-MAA) modification greatly improves the uveal biocompatibility of hydrophobic acrylic IOLs in vivo. IOL-P(MPC-MAA) inhibits anterior capsule opacification. ACO is caused by proliferation and epithelial-mesenchymal transition (EMT) of the remnant LECs between the inner surface of the anterior capsule and the IOL implant 39 . In our study, the IOL and IOL-Plasma implantation groups developed ACO 2 weeks after surgery (Fig. 6a). After 4 weeks, severe fibrosis occurred on the anterior capsule covering the IOL and IOL-Plasma optics, leading to anterior capsule shrinkage (black arrows). However, in the IOL-P(MPC-MAA) implantation group, ACO developed slowly, and the anterior capsule was relatively transparent 6 weeks after surgery. 
The ACO score of the IOL-P(MPC-MAA) group was significantly lower than that of the IOL and IOL-Plasma groups 6 weeks postoperatively (Fig. 6b). Also, histopathological examination showed that multilayered LECs were present underneath the anterior capsule in the IOL implantation group, while LECs were arranged regularly in a single layer in the IOL-P(MPC-MAA) group (Fig. 6c, black arrowheads). The expression levels of the EMT markers fibronectin (Fn) and α-smooth muscle actin (α-SMA) were lower in the IOL-P(MPC-MAA) group compared to those in the IOL group (Supplementary Fig. S5). TEM showed that in the IOL group, LECs underneath the anterior capsule presented an elongated fibroblast-like appearance with massive ECM deposition (Supplementary Fig. S6a). However, in the IOL-P(MPC-MAA) group, LECs maintained an epithelial morphology with little ECM deposition (Supplementary Fig. S6b). These results indicate that IOL-P(MPC-MAA) significantly suppressed LEC proliferation and EMT under the anterior capsule and thus inhibited ACO formation. IOL-P(MPC-MAA) does not affect posterior capsule opacification. PCO is caused by migration of remnant LECs to the posterior capsule 39 . In our study, slit lamp examination showed that all three groups developed moderate PCO 8 weeks after surgery (Fig. 7a). Fundus examination showed that the optic disk, retinal vessels and choroid vessels could not be clearly seen 8 weeks postoperatively (Supplementary Fig. S7). EPCO 2000 analysis showed no significant difference among the groups, and Miyake-Apple view analysis showed no difference in CPCO, PPCO and Soemmering's area among the groups (Fig. 7b). In both the IOL and IOL-P(MPC-MAA) groups, LECs exhibited a fibroblast-like morphology and were arranged irregularly underneath the posterior capsule (Fig. 7c, black arrowheads) with massive surrounding ECM (Supplementary Fig. S6c,d). These results suggest that P(MPC-MAA) surface grafting does not affect PCO formation. Discussion Optimization of IOL biocompatibility is critical for vision reconstruction after cataract surgery. Uveal biocompatibility is particularly important for hydrophobic acrylic IOLs because many studies have shown that hydrophobic IOLs cause more inflammatory responses than hydrophilic IOLs after implantation 3,40,41 . Both the cataract surgery and the IOL implant trigger the release and adhesion of inflammatory cells, including macrophages and giant cells, onto the IOL surface 8 , leading to a high incidence of IPS and ACO, especially in patients with blood-aqueous barrier damage. MPC has excellent biocompatibility since the phosphorylcholine group on MPC mimics the neutral phospholipids of the cell membrane 14 . Previous studies have shown that grafting MPC onto the silicone IOL surface can reduce adhesion of macrophages 20 . However, grafting MPC monomers does not reduce aqueous flare in vivo, possibly due to inadequate negative charges on the material surface 14 . In this study, we synthesized the copolymer P(MPC-MAA) and covalently grafted this copolymer onto the surface of hydrophobic acrylic IOLs. Compared to the MPC monomer, P(MPC-MAA) has two advantages. First, P(MPC-MAA) is heavily negatively charged (Fig. 3c). The introduction of negative charges by MAA resulted in a significant reduction of protein adsorption (Fig. 3d) and cell adhesion (Fig. 4). Second, the intermolecular repulsion between MAA units in the copolymer could make more MPC extend into the aqueous humor, so that the IOL surface is more inert to the surrounding biological system. 
Therefore, P(MPC-MAA) modification significantly reduced postoperative inflammation after IOL implantation (Fig. 5a-c) and showed excellent biocompatibility in vivo. The remaining anterior LECs (A cells) following cataract surgery have the potential to form fibrous tissue and cause capsular opacification around the capsulorhexis margin, resulting in ACO. Formation of ACO includes two stages: an early stage of LEC proliferation and a late stage involving EMT and ECM production. The LEC proliferation process is regulated by various cytokines and growth factors, such as IL-1, IL-6, transforming growth factor (TGF) and fibroblast growth factor (FGF), which are secreted by residual LECs and inflammatory cells [42][43][44] . Hydrophobic IOL has a higher incidence rate of ACO than hydrophilic IOL, because hydrophobic surfaces tend to attract more remnant LECs and inflammatory cells to adhere and proliferate 5,6,45 . Our results showed that surface modification by P(MPC-MAA) significantly suppressed ACO formation, which could be a direct consequence of decreased LEC adhesion and proliferation. Also, suppression of LEC and inflammatory cell adhesion may lead to less secretion of cytokines, contributing to a milder inflammatory response in the IOL-P(MPC-MAA) group than in the IOL and IOL-Plasma groups. This is consistent with other studies showing that hydrophilic surface modifications such as HSM coating 38 or PEG grafting 15,46 can significantly reduce LEC adhesion and the postoperative foreign-body reaction of hydrophobic IOLs. Posterior capsule opacification (PCO), also known as secondary cataract, results from proliferation, migration and EMT of residual LECs across the posterior capsule. In clinical application, hydrophobic IOL has a relatively low PCO rate compared to hydrophilic IOL, as the rapid adhesion of the IOL to the posterior capsule can effectively inhibit the migration of LECs 12,47 . In this study, we did not observe a difference in PCO severity between the IOL, IOL-Plasma and IOL-P(MPC-MAA) groups. Similarly, Xiaodan et al. 14 also did not find a change in PCO incidence after grafting MPC on silicone IOL. It is possible that the surface property of the IOL may not be as important as the optic configuration in the prevention of PCO. Many studies have shown that a sharp optic edge is the key factor for preventing LEC migration from the anterior to the posterior capsule [48][49][50] . Although all the IOLs we used in this study had sharp optic edges, we still observed PCO formation in all the groups 8 weeks after surgery, possibly because rabbit LECs have higher proliferation and migration capacity than human LECs. Interestingly, we noticed that introduction of amino groups by ammonia plasma treatment alone could also increase surface hydrophilicity and decrease protein adsorption and cell proliferation. However, our previous study showed that the increased hydrophilicity of IOL after plasma treatment can only last for 14 days 51 . In contrast, covalent immobilization of hydrophilic molecules onto the material surface can greatly weaken the hydrophobic recovery process 20,46 . Here, we showed that although the hydrophilicities of IOL-Plasma and IOL-P(MPC-MAA) were comparable after modification, IOL-P(MPC-MAA) exhibited greater protein resistance. Moreover, only IOL-P(MPC-MAA) showed decreased postoperative inflammation and ACO formation in vivo. Therefore, modification by plasma treatment alone is insufficient for the improvement of IOL biocompatibility. 
In conclusion, we synthesized a new copolymer P(MPC-MAA) and successfully grafted the copolymer onto the surface of hydrophobic acrylic IOL by plasma technology. IOL-P(MPC-MAA) showed increased surface hydrophilicity and reduced protein adsorption while maintaining the bulk optical and physical properties. IOL-P(MPC-MAA) significantly inhibited LEC adhesion and proliferation in vitro, and suppressed postoperative inflammation and ACO formation in vivo. Overall, these results suggest that P(MPC-MAA) modification improved the uveal and capsular biocompatibility of hydrophobic acrylic IOLs. More studies need to be carried out to assess the long-term biocompatibility of IOL-P(MPC-MAA). Methods Synthesis and purification of P(MPC-MAA). 0.01 mol MPC (Nanjing Institute of Natural Science and Technology Development, Nanjing, China) and 0.01 mol MAA (Kemiou Chemical Reagent Co., Ltd, Tianjing, China) were dissolved in 20 g of ultrapure water (monomer concentration: 1 mol/L). After argon was introduced for 30 minutes, an ammonium persulfate/sodium sulfite initiator system (ammonium persulfate : sodium sulfite = 1:1.5) was added at a concentration of 0.03 mol/L. The reaction was carried out at 37 °C for 24 hours and was stopped with liquid nitrogen. After introducing anhydrous ethanol, the precipitate was collected by suction filtration. The precipitate was again dissolved in water and dialyzed for 3 days. Then, the sample was freeze-dried for 2 days, and P(MPC-MAA) was obtained. FT-IR spectra were obtained using an FT-IR analyzer (VECTOR-22, Bruker, Germany) with the potassium bromide pressed-disk technique for 32 scans over the 500-4,000 cm−1 range at a resolution of 4.0 cm−1. The composition of the polymers was determined by 1H NMR (AVANCE 300, Bruker, Germany) spectral measurements at 400 MHz. Static water contact angle (WCA) measurement. WCA was characterized with a contact angle goniometer (OCA15, Dataphysics, Germany) at 25 °C using distilled water as a reference liquid. A total of 1.00 μL of reference liquid was pumped onto the surface through a stainless steel needle at a rate of 1.0 μL/s. The results are mean values calculated from five independent measurements on different points of the films. Zeta potential measurement. Zeta potential was obtained using an electrokinetic analyzer (SurPASS, Anton Paar Surpass, Austria). For the determination of zeta potential, streaming current measurements were performed using an Adjustable Gap Cell (SurPASS, Anton Paar Surpass, Austria). A 1.0 mM potassium chloride (KCl) solution was used as the background electrolyte, and 0.1 M potassium hydroxide as well as 0.1 M hydrochloric acid solutions were used to adjust the pH value to 7.2. Optical and physical characteristics. Diopter, resolution, transmission properties and anti-fatigue resistance of the IOL haptics were assessed according to the standards of the State Food and Drug Administration (SFDA) by the Medical Equipment Quality Supervision and Inspection Center of Zhejiang Province in China. BSA adsorption assay. BSA adsorption on the surface was measured by quartz crystal microbalance with dissipation (QCM-D, E4, Q-Sense, Sweden). Briefly, the BSA solution (dissolved in PBS buffer at a concentration of 50 μg/mL) was introduced onto the samples. After balancing, PBS was introduced again to wash off the non-adsorbed protein. Then, the BSA adsorption was obtained using Q-Tools. Cell adhesion assay. IOLs were placed into a 48-well plate. 
300 μL of SRA01/04 cell (human lens epithelial cell) suspension at a concentration of 1 × 10^4 /mL was loaded onto the IOL surface. After incubation for 12 hours, the IOLs were stained with hematoxylin and eosin (HE) and examined with an inverted phase contrast microscope (CKX41, Olympus, Japan). Five fields were selected at random, one in the central and four in the peripheral quadrants. Image-Pro Plus 6.0 was used to quantify the number of LECs in each field. At least 5 IOLs in each group were tested. Cell viability assay. The seeding procedure was the same as described above. After incubation for 24 or 48 hours, 200 μL Dulbecco's Modified Eagle Medium (DMEM) with 10% fetal bovine serum (FBS) and 20 μL Cell Counting Kit-8 (CCK-8) reagent were added to the IOLs. Wells without IOLs were used as controls. After incubation for 1 hour, the OD values at 450 nm were measured with a microplate reader. The assay was repeated 3 times. Phacoemulsification and IOL implantation. Twenty-four 1.5 kg male New Zealand albino rabbits were divided into three groups at random. Phacoemulsification was performed on the left eye using the Legacy 20000 System (Alcon Laboratories, Fort Worth, TX, USA). Briefly, a 3.2-mm corneal limbus tunnel incision was made at the 12 o'clock position, followed by a central continuous curvilinear capsulorhexis 5.5 mm in diameter. Then, the lens materials were extracted and the IOL was implanted into the capsule. The tunnel incision was closed with interrupted 10-0 nylon sutures. All surgeries were performed by one surgeon (M.X.W.), who was blind to the group assignment. All experiments were conducted in accordance with the ethical guidelines set forth by the Laboratory Animal Care and Use Committee of the Association for Research in Vision and Ophthalmology (ARVO). The study protocol was reviewed and approved by the Animal Ethics Committee of Zhongshan Ophthalmic Center, Sun Yat-sen University, China. Follow-up ophthalmic examinations. Digital slit lamp photos were taken with the SL-D7 anterior eye segment analysis system (Topcon Medical Systems, Inc., Tokyo, Japan) at the indicated times postoperatively. Intraocular pressure (IOP) was measured with a Tono-Pen tonometer (Reichert Inc., Seefeld, Germany). Fundus images were acquired with a fundus camera (Topcon Medical Systems, Inc., Tokyo, Japan). All examinations were conducted by two researchers who were blind to the group assignment. Serious PCO usually occurs at 8 weeks owing to the strong proliferative ability of rabbit LECs, so we defined 8 weeks postoperatively as the endpoint of the ophthalmic examinations. Inflammation evaluation. Anterior chamber flare (ACF), anterior chamber cells (ACC), and iris posterior synechiae (IPS) were scored to evaluate the uveal biocompatibility of the IOL as previously described 52. The grading is summarized in Supplementary Table S2. Postoperative complications, such as corneal edema, glaucoma, IOL displacement, pupil capture, and cortical proliferation, were also recorded. ACO scoring. Six weeks after surgery, ACO was scored from grade 0 to grade IV based on the severity of the anterior capsule opacity and the contraction of the anterior capsulorhexis opening: Grade 0: clear (transparent) anterior capsule; Grade I: opacification localized at the edge of the capsulorhexis; Grade II: moderate and diffuse opacification, in some cases with areas of capsular folding; Grade III: intense opacification, with areas of capsular folding; Grade IV: constriction (phimosis) of the capsulorhexis opening 45. PCO scoring.
PCO was quantified with the Evaluation of Posterior Capsule Opacification (EPCO) 2000 software or by Miyake-Apple view analysis. Standard retroillumination pictures were taken 6 and 8 weeks postoperatively, imported into EPCO 2000 and processed as previously described 53. Eight weeks after surgery, the rabbits were euthanized and the eyeballs were enucleated for Miyake-Apple view analysis. Eyeballs were sectioned at the equator and gross examinations were performed from the posterior aspect. Miyake-Apple view analysis of PCO was conducted as previously described 54.
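As a rough illustration of how two of the quantitative readouts above might be post-processed, the hedged Python sketch below converts a QCM-D frequency shift into an adsorbed BSA mass via the Sauerbrey relation and computes relative cell viability from the CCK-8 absorbances. The text only states that adsorption was obtained from Q-Tools, so the Sauerbrey constant, the overtone number, and the blank-subtraction scheme are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical post-processing helpers for the assays described above.

def sauerbrey_mass(delta_f_hz, overtone_n=3, c_ng_per_cm2_hz=17.7):
    """Adsorbed areal mass (ng/cm^2) from a QCM-D frequency shift.

    Uses the Sauerbrey relation delta_m = -C * delta_f / n, with
    C = 17.7 ng cm^-2 Hz^-1 for a 5 MHz crystal. The overtone number n
    and the use of the Sauerbrey model at all are assumptions here.
    """
    return -c_ng_per_cm2_hz * delta_f_hz / overtone_n


def cck8_viability_percent(od_sample, od_control, od_blank=0.0):
    """Relative viability (%) from CCK-8 absorbances at 450 nm.

    od_control is the well without an IOL, as in the assay above;
    blank subtraction is an illustrative choice, not stated in the text.
    """
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)


# Example: a -25 Hz shift on the 3rd overtone and an OD pair of 0.62 vs 0.85
print(sauerbrey_mass(-25.0))               # ~147.5 ng/cm^2 of adsorbed BSA
print(cck8_viability_percent(0.62, 0.85))  # ~72.9 % relative viability
```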
5,233.2
2017-01-13T00:00:00.000
[ "Materials Science", "Medicine" ]
Plasmonic Modes and Optical Properties of Gold and Silver Ellipsoidal Nanoparticles by the Discrete Dipole Approximation The discrete dipole approximation (DDA) is used to model the absorption efficiency of isolated gold (Au) and silver (Ag) ellipsoidal nanoparticles. The characteristics of the plasmonic bands of those nanostructures depend strongly on the size and orientation of the particles in both the lab and target frames. At specific rotation and incident angles, the desired plasmonic mode can be excited. The result of the simulation shows the possibility of excitation of three plasmonic modes—one longitudinal mode (LM) and two transverse modes (TM)—corresponding to the redistribution of the polarization charges along each principal axis. At oblique incidence of the incoming light, both the Au LM and a hybrid Au TM are observed whereas three more distinct plasmonic modes can be found in the case of the Ag particle. The effect of length distribution on the characteristics of the plasmonic bands is also examined for the three principal axes. The band position of the plasmonic bands associated with the electronic oscillation along each principal axis is found to vary linearly with the axis length. The linear variation of the band position of the LM is steeper as compared with the one found for the other modes. Introduction The localized surface plasmon resonance (LSPR) is the electronic oscillatory motion in the conduction band of the metallic nanostructure [1,2].The unique characteristics of such fluctuations originate from the confined spatial distribution of the polarization charges over the surface of the nanostructure [3].Controlling the size and the shape of the nanoparticles will result in changing the negative/positive charge separation and hence tailoring the frequency and the intensity of the LSPR in the visible and the infrared region [4][5][6].Technical applications of metal nanoparticles normally require incorporating an assembly of those nanostructures of different size distributions, and thus understanding the optical behavior of an isolated nanoparticle and the effect of the coupling between the LSPR of nearby particles is required [7][8][9].Enhanced capabilities of recent nanofabrication techniques to fabricate and arrange the metallic nanoparticles of different sizes and shapes indeed make them highly attractive in many technical applications [10][11][12][13][14][15][16][17][18][19][20][21][22].Due to the high-order symmetry of spherical nanoparticles, their optical response only exhibits a single PM [23,24].In the case of particles with different symmetry axis, more than one PM is observed [25][26][27][28] and among the most interesting nanostructures is the ellipsoidal nanoparticle.Due to its 3-fold symmetry, it exhibits both longitudinal and transverse plasmon modes.The oscillatory shift of the negative electron cloud relative to the positive core along each principal axe results in three plasmonic modes (PM), one longitudinal mode (LM), and two transverse modes (TM).The characteristics of each band depend on the orientation of the particle in both the lab and the target frames as well as the length distribution of each axis.The effect of this latter parameter on the extinction coefficient of gold oblate and prolate spheroidal nanoparticles arranged in two dimensional arrays has been studied experimentally [29], and the linear dependency of the peak position of the PM with respect to the length of the corresponding principal axis has been noted.The experimental 
results were then compared to the ones calculated by the quasistatic approximation (QSA) of the first order (dipole mode).Simulations by a finite-difference time domain (FDTD) method were also done but the effect of the target orientation on the possibility of excitation of all the PMs has not been investigated.Another theoretical treatment has been proposed to study the PMs of the ellipsoidal particle equally in the framework of the QSA complemented by the inclusion of higher orders of multipolar oscillations in order to find an analytical expression for the plasmon frequency [30].On the other hand, Kalkbrenner et al. [31] succeeded in rotating a single gold ellipsoidal nanoparticle attached to the tip of a glass fiber mounted on a stage of a scanning near-field optical microscope (SNOM).Due to the threefold symmetry of the ellipsoidal particle, three PMs were observed individually at a distinct combination of the polarization angle and the rotation angle of the tip in the incident light. To better appreciate the behavior of particles of this specific and quite interesting shape, the current work aims at studying in a comprehensive way the optical properties of an oriented isolated nanoellipsoidal particle both for the gold (Au) and silver (Ag) cases since these two materials in nanoparticle form exhibit most interesting selective absorption in the visible and near-infrared range.It would thus be of great interest to find whether there exist combinations of the rotation and orientation angles that can allow the simultaneous excitation of all PMs for Au and Ag nanoellipsoids.To this end the effect of different parameters on the optical response of the Au and Ag nanoparticles will be discussed.These parameters include the orientation of the target in both the lab and the target frames and the sensitivity of the band position of the PM to the length distribution of each principal axis.To achieve this goal, the DDA [32][33][34][35][36][37] is employed to calculate the absorption coefficient for both Au and Ag ellipsoidal nanoparticles.The result found will be useful to optimize the particle properties for applications such as in the plasmonic photovoltaic field [13,14] and in biosensing [21].As an example, due to the high absorbance of the incident light at different wavelengths in the UV-Vis region, incorporating those nanostructures in the plasmonic solar cell will enhance effectively photoelectrons generation and hence the energy conversion efficiency. This paper is organized as follows: in Section 2, we will discuss briefly the basic idea of the computational tool DDA, followed by a presentation of the target geometry, the corresponding structural parameters, and the relative orientation of the nanoparticle with respect to the incident electromagnetic field.Section 3 will include the results and discussion, which is divided into two subsections.The first subsection concerns the discussion on the modeled absorption spectra of an isolated nanoellipsoid and the effect of the relative orientation on the excitation of different LSPR modes.Comparison will be made between an Au particle and an Ag particle of the same size.The second subsection will examine the effect of the axis length distribution on the particle optical properties. 
Discrete Dipole Approximation (DDA) The DDA is one of the well-known computational tools to mimic the optical response of a nanostructure due to the interaction of the target under investigation with the incident electromagnetic waves. To model the morphology of the target, it is required to represent the nanoparticle with an assembly of 3D induced dipoles. The number of dipoles should be large, on the order of 10^4, to model the precise shape of the target, and the interdipole separation is required to be smaller than the incident wavelength and any structural parameter. The idea of the DDA was first introduced to study the optical response of molecular aggregates [34,35]. The retardation effect was not included in the first application of the approximation method. Later, this effect was introduced into the DDA to study interstellar dust grains [38]. The DDA code was also developed to calculate the scattering and absorption properties of the particles [38]. The formalism of the method was improved by incorporating a correction for the radiative reaction and an anisotropic dielectric function [32]. An algorithm called the complex-conjugate gradient (CCG) method was then introduced to evaluate the polarization iteratively and to use the fast Fourier transform (FFT) to solve the matrix-vector multiplications involved in the iteration method (CCG) [33]. The description of the mathematical formulation of the DDA is outside the scope of this paper, but more details can be found in the references cited. Based on the interaction between the induced dipoles, the optical response of the metallic nanostructures can be calculated. The outputs of the DDA are the extinction, absorption and scattering cross-sections of the nanostructure normalized to its geometrical cross-section. In this study, the output of the DDA is the absorption cross-section of the nanostructure normalized to its geometrical cross-section, which yields the corresponding efficiency Q_abs. The open-source Fortran-90 software package (DDSCAT 7.1) was used to calculate the absorption cross-section. Target Geometry and the Target Orientation The geometry of the target under investigation is a quadric surface whose morphology is characterized by three semiprincipal axes. The structural parameters of the ellipsoidal nanoparticles are represented by two minor axes (of lengths 2b and 2c) oriented along the y and z axes, respectively, and a major axis (of length 2a) perpendicular to the yz plane. According to the relative lengths of the three principal axes, ellipsoidal particles are classified into oblate spheroids (a = b > c), prolate spheroids (a = b < c), and scalene ellipsoids (a > b > c). The latter case is considered in this study, as illustrated in Figure 1(a). The effective radius of the equivolume sphere for the ellipsoidal nanoparticles is given by r_eff = (a·b·c)^(1/3). The corresponding aspect ratio (A.R.) is defined as the ratio of the longest axis to the shortest axis (a/c).
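The DDA itself is numerically involved, but the first-order quasistatic approximation referred to in the Introduction already captures the three dipolar modes of a scalene ellipsoid and can serve as a rough sanity check on the DDSCAT output. The sketch below is not the authors' code and not the DDA: it computes the depolarization factors of the three semi-axes and the corresponding quasistatic absorption efficiencies, and the Drude parameters passed to drude_epsilon are illustrative placeholders rather than fitted material data.

```python
import numpy as np
from scipy.integrate import quad

def depolarization_factors(a, b, c):
    """Geometrical (depolarization) factors L_a, L_b, L_c of an ellipsoid.

    a, b, c are the semi-axes (same length unit); the three factors sum to 1.
    """
    def L(s1, s2, s3):
        integrand = lambda q: 1.0 / ((q + s1**2)**1.5 * np.sqrt((q + s2**2) * (q + s3**2)))
        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return 0.5 * s1 * s2 * s3 * val
    return L(a, b, c), L(b, a, c), L(c, a, b)

def drude_epsilon(wavelength_nm, wp_ev, gamma_ev, eps_inf):
    """Simple Drude dielectric function; the parameters are illustrative placeholders."""
    energy_ev = 1239.84 / wavelength_nm          # photon energy from the wavelength
    return eps_inf - wp_ev**2 / (energy_ev**2 + 1j * gamma_ev * energy_ev)

def qabs_quasistatic(wavelength_nm, a, b, c, eps_metal, eps_medium=1.0):
    """Quasistatic absorption efficiencies of the three dipolar modes.

    Returns [Q_abs along a (LM), along b (b-TM), along c (c-TM)], each normalized
    to the geometrical cross-section pi * r_eff^2 with r_eff = (a*b*c)**(1/3),
    mirroring the normalization of the DDA output described above.
    """
    k = 2.0 * np.pi * np.sqrt(eps_medium) / wavelength_nm   # wavenumber in the host (1/nm)
    r_eff = (a * b * c) ** (1.0 / 3.0)
    geom_cs = np.pi * r_eff**2
    q_abs = []
    for L in depolarization_factors(a, b, c):
        alpha = 4.0 * np.pi * a * b * c * (eps_metal - eps_medium) / \
                (3.0 * eps_medium + 3.0 * L * (eps_metal - eps_medium))
        q_abs.append(k * np.imag(alpha) / geom_cs)           # C_abs = k * Im(alpha)
    return q_abs

# Example for the 2a = 40 nm, 2b = 20 nm, 2c = 10 nm particle (semi-axes 20, 10, 5 nm)
wavelengths = np.linspace(400.0, 1100.0, 351)
spectra = np.array([qabs_quasistatic(wl, 20.0, 10.0, 5.0,
                                     drude_epsilon(wl, wp_ev=9.0, gamma_ev=0.07, eps_inf=9.5))
                    for wl in wavelengths])   # columns: LM, b-TM, c-TM
```

In this picture the longest axis has the smallest depolarization factor and therefore the reddest band, which is the qualitative ordering of the LM and the two TMs discussed in the following subsections.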
It is assumed that the incident radiation is linearly polarized in the y-direction (p-polarized) and propagates along the x-direction. The orientation of the target in the lab frame is achieved by rotating the major axis with respect to the propagation direction (k) by an angle θ, as shown in Figure 1(a). At oblique angles, the incident electric field has two components: one parallel to the a-axis and the other oriented in the yz plane. The electric field of s-polarized light has one component perpendicular to the major axis of the ellipsoid at any angle of incidence, and no information, therefore, is reported on the excitation of all the plasmonic modes. In the case of unpolarized light, the absorption spectrum is calculated as an average over the two polarization directions, and the spectrum exhibits all LSPR modes. The absorption spectrum exhibits different dipolar plasmonic modes when the electric field has a component along each principal axis. When the ellipsoidal particle is rotated in the target frame around the main axis by an angle β, as shown in Figure 1(b), two TMs are observed, corresponding to the oscillations of the induced polarization charges along the b-axis and the c-axis, respectively. Results and Discussion All the DDA calculations presented here refer to air as the surrounding material in which the ellipsoidal nanoparticle is embedded. The orientation of the nanoparticle under investigation relative to the direction of propagation of the incident p-polarized light determines the type of the excited PM. The angle θ governs the probability of excitation of the LM. The β angle plays an important role for the observation of both TMs. On the other hand, all the possible PMs can be found at a distinct combination of the two angles. Those parameters will be addressed in detail in Section 3.1. Section 3.2 will concern the dependency of the band position of the PMs on the length distribution and will be presented separately for the excited plasmonic band along each axis. Effect of Orientation on the Optical Response of the Ellipsoidal Nanoparticles. The excitation of the desired plasmonic band is achieved at a distinct combination of the incident angle and the rotation angle. In particles of multifold symmetry, the oscillations of the polarization charges along a certain direction determine the type of the LSPR bands. The redistribution of the charges along axes of different lengths changes the separation of the driven electron cloud relative to the positive cores and results in tailoring the intensity and the band position of the LSPR modes. The effect of the rotation and the orientation of both metallic ellipsoidal particles is investigated with the electric field oriented along each of the three axes. Three plasmonic bands are reported due to the oscillation of the charges along each one of them. When the incident angle is chosen to be 90°, the major axis is aligned parallel to the incident electric field, resulting in the excitation of the LM. Whatever the rotation angle is, no other plasmonic band is observed due to the absence of induced charges along the other semiaxes. At normal incidence, the nanoellipsoidal particle can be rotated in such a way that the electric field has one component along either the b-axis or the c-axis, or along both of them. When θ = 0° and β = 0° or 90°, the TM along the b-axis (b-TM) or the c-axis (c-TM) is excited, respectively, as shown in Figure 2.
The excitation of the three distinct PMs of a single ellipsoidal nanoparticle is consistent with experimental data reported earlier [31]. The most intense plasmonic band corresponds to the LM, while the least intense one is the c-TM (for clarity, data in Figure 2 are shown after normalization). The difference in the absorption amplitude can be attributed to the difference in the charge separation (the axis length). As well, the observed red shift of the LM compared to the corresponding TMs can be related to the decrease in the restoring force (the Coulombic interaction) due to the increased charge separation along the longer axis. Although the plasma frequencies of Au and Ag are comparable, the corresponding LSPR modes of ellipsoidal nanoparticles of the same size occur at different wavelengths. The deviation originates from the additional contribution of the interband electronic transitions to the dielectric function. The resonance frequency of the LSPR, ω_LSPR, is given by [39]

ω_LSPR = ω_P / √(1 + 2ε_m + χ₁),

where ω_P, ε_m, and χ₁ are the plasma frequency of the bulk metal, the dielectric function of the host material where the nanoparticles are embedded, and the (real part of the) interband susceptibility, respectively. In the case of the Au nanoparticle, the resonance occurs at longer wavelengths compared to Ag because χ-Au > χ-Ag. The full width at half maximum (FWHM) can be compared between the different LSPR bands for both metallic particles. It can be seen that the LM has the largest value among the modes; the Ag-TMs have a comparable width, while the Au b-axis TM is broadened compared to the corresponding c-axis TM. The FWHM for the noble metals is given in [39] by an expression involving γ, χ₁, and χ₂, respectively the damping constant and the real and imaginary parts of the interband susceptibility. For Ag, at the resonance frequency, because of the small value of χ₂ (χ₂ approaches zero), the square-root factor in that expression is about unity; therefore the bandwidth is equal to γ, as described by the free-electron model. In the case of Au, the imaginary part of the interband transition contributes more to the bulk dielectric function than in Ag. Therefore, the square-root factor is larger than unity, and hence the corresponding bandwidth is larger than γ. This would explain why the plasmonic modes of the Au nanoparticle are broadened compared to those of Ag. In the case of the Au ellipsoidal particle, when the excitation of the LM is not possible and the rotation angle is between 0° and 90°, apparently only a single TM is observed.
In reality the band positions of the TMs are very close to each other, making them indistinguishable, resulting in the excitation of a single broadened band that we would label as a hybrid TM, as illustrated in Figure 3(a). The plasmonic bands that correspond to the multifold symmetry are well separated in the case of the Ag particle, showing the excitation of more than a single band (Figure 3(b)). When 15° < β < 75°, the intensity of the Au-TM has contributions from both the b-TM and the c-TM. The amplitude of the hybrid TM decreases with β. Decreasing the value of β results in an increase in the amplitude of the excited electric field along the b-axis. This enhances the absorption amplitude and shifts the TM to lower energy. The energy difference of the Ag-TMs is larger than the one calculated for Au, which results in well-separated modes. No change in the band position of the Ag modes is observed upon changing the rotation angle. The previous observations on the dependency of the absorption amplitude on β remain valid for the Ag-TMs. The LM can be excited in the presence of the TMs. To demonstrate this, β has been chosen to be 60° while the incident angle is changed uniformly in steps of 30° between the two extreme values of 0° and 90°, as shown in Figure 4. The absorption spectrum of the Au ellipsoidal particle is characterized by the presence of the most intense plasmonic LM band and the hybrid TM band. The intensity of the latter band decreases dramatically with the incident angle, while the LM intensity is directly proportional to the incident angle. At θ = 90°, the incident electric field is perfectly aligned with the major axis, which results in a maximum absorption for the LM. At the other extreme of the incident angle, the LM is not observed, and the hybrid TM is predominant. Since the Ag-TMs are well separated, the three plasmonic bands can be observed simultaneously when 15° ≤ β ≤ 75° and 15° ≤ θ ≤ 75°. The previous observations regarding the dependency of the plasmonic band intensity on the incident angle remain valid for the Ag ellipsoid. Effect of the Length Distribution on the Optical Response of the Ellipsoidal Nanoparticles. To investigate the effect of the length distribution on the absorption coefficient, a series of simulations was performed for different lengths of each semiaxis. The dependency of the band position of the dipolar plasmonic modes on the axis length is examined for both Au and Ag scalene ellipsoidal nanoparticles. First, at fixed b-axis and c-axis lengths (giving various A.R. values), the characteristics of the LM are investigated as a function of the a-axis length. Secondly, the lengths of the a-axis and c-axis are kept constant at 50 nm and 10 nm, respectively, and the b-axis length is varied with b ∈ {15, 20, 25, 30, 35, 40} nm, with a corresponding A.R. = 5. Different c-axis lengths with fixed a- and b-axis values are the final case. In all cases, the wavelength of either the LM or the TMs is plotted versus the axis length. As well, the energy of the LSPR band is presented in terms of the effective size of the selected ellipsoidal particles. The incident angle is set to 90° with the a-axis directed along the y-axis. In such a situation, the rotation of the ellipsoid in the target frame occurs in the xz-plane, such that β does not have any effect on the optical response of the particle. The length of the transverse axes is kept constant at 10 nm (b-axis) and 5 nm (c-axis), respectively, while the a-axis is varied over a ∈ {15, 20, 25, 30, 35, 40} nm.
To study the influence of the length distribution on the position of the LM, the simulated absorption is plotted versus the incident wavelength for different lengths, as shown in Figures 5(a) and 5(b). The position of the LM is found to be red-shifted, and its intensity increases with the length (data are shown after normalization). The band position can therefore be tuned in both the visible and near-infrared regions. The Au-LM is broader than the corresponding Ag-LM, and the FWHM increases with the length in both metallic nanoellipsoids. The change in the longitudinal band position with the length shows a linear variation with a comparable slope, as shown in Figure 5(c). The excitation of the Ag-LM occurs at shorter wavelengths compared to the one calculated for Au. Figure 5(d) shows the modification of the band position of the LM when the effective radius of the ellipsoidal nanoparticle is changed. The difference in the excited wavelength appears to increase with the length, and it is larger in the case of Ag than in the case of Au. We will now consider the effect of the length distribution of the b-axis on the characteristics of the induced charge oscillation along that axis. The target is oriented in the incident electromagnetic field in such a way that the major axis is parallel to the direction of propagation and the b-axis is aligned along the incident electric field (β = 0°). The absorption spectra for the Au ellipsoidal nanoparticles at different lengths of the b-axis are shown in Figure 6(a). It can be noted that the band position of the b-TM is red-shifted with the length. The change in the excitation energy of the plasmonic band is accompanied by a dramatic increase in the absorption amplitude. On the other hand, the optical response of the corresponding Ag nanoellipsoid is quite different regarding the change in the intensity and the red shift of the band position of the TM. In the latter case, the absorption cross-section contributes mainly to the extinction in a certain range of lengths; otherwise, the scattering cross-section is dominant. The slope of the Ag linear trend is twice the one calculated for Au. The band position of the TM excited along the c-axis for Au (Figure 7(a)) changes insignificantly, with an increase in the absorption amplitude, while the calculations for Ag show more pronounced changes in both the excited wavelength and the absorption amplitude, as shown in Figure 7(b). The band positions of the PMs that correspond to the plasmon oscillations along the b- and c-axes show less dependency on the length distribution in both metals compared with the one found for the length distribution of the a-axis.
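When band positions and widths are compared across axis lengths, as in Figures 5-7, the peak wavelength and FWHM have to be extracted from each simulated spectrum and then fitted against the axis length. A minimal sketch of such post-processing is given below; it assumes the band of interest is isolated within the supplied wavelength window and uses a simple first-order polynomial fit, which is sufficient for the linear trends reported here.

```python
import numpy as np

def peak_and_fwhm(wavelength_nm, q_abs):
    """Peak wavelength and FWHM of one plasmonic band from a sampled spectrum.

    Assumes the band of interest is isolated within the supplied window;
    a denser wavelength grid (or interpolation) improves the FWHM estimate.
    """
    q_abs = np.asarray(q_abs)
    i_max = int(np.argmax(q_abs))
    above = np.where(q_abs >= q_abs[i_max] / 2.0)[0]
    return wavelength_nm[i_max], wavelength_nm[above[-1]] - wavelength_nm[above[0]]

def band_position_trend(axis_lengths_nm, peak_wavelengths_nm):
    """Slope and intercept of the (assumed linear) band position vs. axis length."""
    slope, intercept = np.polyfit(axis_lengths_nm, peak_wavelengths_nm, 1)
    return slope, intercept
```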
Conclusion The optical response of the metallic ellipsoidal nanoparticle is quite specific when compared to other nanoparticle morphologies due to the possibility of three different LSPR modes. These modes are associated with the electronic fluctuations along each principal symmetry axis. The possibility of observing the distinct plasmonic bands depends on the orientation of the ellipsoid in both the lab and the target frames. The excitation of the Ag-LSPR modes always occurs at higher energy compared with the ones calculated for the Au particle. The Ag-PMs are well separated, so that the three modes could be found simultaneously at a given combination of the orientation and rotation angles, while the band positions of the Au-TMs are very close to each other, making them indistinguishable. The band position of the PMs depends linearly on the axis length in both metals. An increase in the axis length results in a red shift of the PM band position. The Au-PMs are relatively broader than the ones found for Ag nanoparticles. For both metals, the LM shows a stronger dependency on the length distribution of the axis compared to the TMs. The plasmon spectroscopy of a single Au nanoparticle performed by Kalkbrenner et al. [31] has shown three distinct plasmon resonances in an ellipsoidal particle. The resonance wavelengths were located, respectively, at 614 nm for the longest axis (a-axis), 571 nm for the b-axis, and 528 nm for the c-axis. This systematic blue shift as the axis becomes shorter is consistent with our calculations, and the range of the shift is also comparable. As mentioned by these authors, the locations of these resonance wavelengths could however have been influenced by the glass tip to which the nanoparticle was attached. Such influence could result in a slight shift in the resonance wavelengths, and in fact, our calculations can be extended to take the tip influence into account. As far as we know, no similar plasmon spectroscopy for a single Ag ellipsoidal particle can be found. Considering the fact that the three plasmon resonances in the Ag case are well separated and more easily observable, experiments performed with a single Ag ellipsoidal nanoparticle would be of great interest. Interactions with nearby molecules are expected to modify substantially the resonance wavelengths as well as the bandwidths of the plasmon resonances, and the DDA approach can be effectively used to study these interactions. Figure 1: (a) The geometrical parameters of the ellipsoidal nanoparticle and the orientation of the particle in the lab frame around the x-axis with angle θ; (b) the rotation of the nanoellipsoid in the target frame around the a-axis with angle β. Figure 2: The normalized absorption spectra as a function of the incident wavelength for (a) Au and (b) Ag ellipsoidal nanoparticles. In both cases, the principal axis of the particle is aligned parallel to the incident electric field. The corresponding structural parameters are 2a = 40 nm, 2b = 20 nm, and 2c = 10 nm.
Figure 3: The dependency of the absorption efficiency on the rotation angle for (a) Au and (b) Ag ellipsoidal nanoparticles. The corresponding structural parameters are 2a = 40 nm, 2b = 20 nm and 2c = 10 nm. Figure 4: The dependency of the absorption efficiency on the incident angle at a constant rotation angle (β = 60°) for (a) Au and (b) Ag ellipsoidal nanoparticles. The insets represent the spectra in selected wavelength ranges. Figure 5: Normalized absorption spectra of (a) an Au and (b) an Ag ellipsoid of different a-axis lengths, (c) the band position of the LM versus the length of the a-axis, and (d) the band position of the LM as a function of the nanoellipsoid effective radius. The b-axis and the c-axis are respectively 10 and 5 nm. Figure 6: Absorption efficiency spectra of (a) Au and (b) Ag ellipsoids at different b-axis lengths, and (c) the band position of the b-TM versus the length of the b-axis. The respective values for the a-axis and c-axis are 50 and 10 nm. Figure 7: Absorption coefficient of (a) Au and (b) Ag ellipsoids at different c-axis lengths. The respective lengths of the a-axis and the b-axis are 50 and 40 nm.
6,062
2012-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
The Role of Structural Defects in the Growth of Two-Dimensional Diamond from Graphene The presented work is devoted to the study of the formation of the thinnest diamond film (diamane). We investigate the initial stages of diamond nucleation in imperfect bilayer graphene exposed to the deposition of H atoms (a chemically induced phase transition). We show that defects serve as nucleation centers; their hydrogenation is energetically favorable, and the effect depends on the defect type. Hydrogenation of vacancies facilitates the binding of graphene layers, but the impact wanes already at the second coordination sphere. The influence of 5|7 defects is weaker but still promotes diamondization. The role of the grain boundary is similar but can lead to the final formation of a diamond film consisting of chemically connected grains with different surfaces. Interestingly, even hexagonal and cubic two-dimensional diamonds can coexist in the same film, which suggests the possibility of obtaining a new, previously unexplored two-dimensional polycrystal. Introduction Diamond is probably the best-known crystalline compound of carbon. Diamond nanostructures of different dimensions have also attracted much attention, along with the two-dimensional diamond, or diamane [1], which is currently of great interest. Numerous theoretical studies have outlined the prospects for the application of this nanostructure in nanooptics and nanoelectronics as an ultra-hard coating with broad-range optical transparency, a host material for single-photon emitters, a defect center for quantum computing, etc. [2]. However, the synthesis of a 2D diamond is the most challenging field since, unlike graphene and many other two-dimensional materials, diamane cannot be cleaved from the bulk. Moreover, a thermodynamic analysis shows that a few-layered diamond film without a coverage layer is simply unstable and decomposes into multilayered graphene [3,4] because the diamond surface energy is higher than that of graphite. This conclusion is well supported by experiment [5], where direct pressure in diamond anvil cells was used to induce conversion of the whole graphene flake: the diamondization pressure was much higher than in the bulk case, and instability of the formed diamondized film was apparent after pressure release. The most promising way to obtain a two-dimensional diamond seems to be the use of graphene as a precursor, by deposition of reference atoms (e.g., hydrogen) on its surface. In this case the thermodynamic stability of the material is reversed, the previously unstable diamond film becomes energetically favorable, and graphene layers tend to bond to each other [4]. Despite a number of encouraging experimental results [6][7][8][9][10] confirming such predictions, the question of diamane synthesis is far from being resolved. Indeed, the nucleation of the diamane in graphene is hindered by the high stability of the graphene π-system resisting attachment of reference atoms. As a result, only two layers of graphene can be connected relatively easily, and only in the case of using hydrogen plasma as a hydrogen source. Results and Discussion The growth of the diamond phase in multilayer graphene has a nucleating character [11]. This means that the final structure is determined by the initial stages of diamond core formation and can be affected by the imperfections involved in the nucleation. To study this problem it is necessary to consider the step-by-step growth of the diamond in graphene by successive attachment of H atoms.
We considered small groups of H on graphene starting from 1, 2, and 3 atoms and gradually increasing up to large clusters. For a cluster of n chemisorbed H atoms the average formation energy ε_b(n) is

ε_b(n) = (E_g + n·ε_H − E_nH@g) / n,

where E_g is the energy of either the monolayer or bilayer graphene substrate, ε_H is the energy of a single H atom, and E_nH@g is the total energy of the hydrogenated structure. Simulation of the diamond phase growth in the graphene monolayer and bilayer revealed a fundamental difference despite the similar trend of the formation energy with the number of attached hydrogen atoms [11]. The attachment of H atoms to bilayer graphene leads to the formation of interlayer C-C bonds due to the pyramidalization of adjacent hydrogenated C atoms. However, the absolute values of the hydrogen binding energy at the initial stages of nucleation are substantially (by ~1 eV) lower than the same value for a monolayer (see Figure S1). This indicates much lower stability of the formed diamond core, which should not be a significant issue in the case of hydrogen plasma treatment of graphene because the nucleation proceeds barrier-free in any case. However, if the hydrogen source is taken in its molecular form (which is more accessible for the experiment) the situation changes dramatically. If we compare the energy of the formed C-H bonds on the bigraphene with the bonding energy in the H2 molecule (Figure 1, horizontal dashed line) it becomes obvious that hydrogen adsorption from the molecular form is energetically unfavorable up to a large size of the diamond core, particularly more than 70 atoms in the case of perfect bilayer graphene. Indeed, the binding energy for H atoms is weaker than the H2 bond for all considered hydrogenation steps, with a very slow tendency toward the fully hydrogenated case (Figure 1, horizontal solid line). This unfavorably distinguishes the hydrogenation process of bigraphene from the case of the graphene monolayer, where the diamond core becomes stable after 16 hydrogen atoms [14]. Despite the lower energy of C-H bonds in bilayer graphene, the formation of the diamond phase occurs almost immediately after the adsorption of 6 hydrogen atoms, i.e., 3 in each layer (Figure S2). This is critically important to change the hybridization of the carbon atoms in the first coordination sphere (see the inset in Figure 1a). The geometry of the first coordination sphere determines the way a diamane forms. Therefore, it is important to accurately determine, or adjust, the structure of the nucleus at the initial stages of diamane formation. Moreover, even for the same bigraphene stacking it is possible to form diamond films with various surfaces [18]. If we consider that the stacking energy profile in bilayer graphene is smooth [19], this can provide control over the final structure of the diamond film. The diamondization of defectless bigraphene is hindered by the stable π-system of sp2-hybridized carbon. However, the presence of structural imperfections can potentially promote both the functionalization of carbon and the connection between graphene layers. Here we studied commonly considered graphene point defects, vacancy and Stone-Wales, as well as a linear defect, the grain boundary.
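For concreteness, the average formation energy defined at the beginning of this section is a one-line computation once the total energies are available. The hedged helper below implements exactly that definition; the comparison against the H2 reference per atom mirrors the horizontal dashed line in Figure 1, with the reference value left as an input since it depends on the computational setup.

```python
def average_formation_energy(e_hydrogenated, e_substrate, eps_h_atom, n):
    """Average binding energy per H atom, eps_b(n) = (E_g + n*eps_H - E_nH@g) / n.

    e_substrate    : total energy of the pristine (mono- or bilayer) graphene, E_g
    eps_h_atom     : energy of an isolated H atom, eps_H
    e_hydrogenated : total energy of the structure with n chemisorbed H atoms, E_nH@g
    All energies in eV.
    """
    return (e_substrate + n * eps_h_atom - e_hydrogenated) / n

def favorable_from_molecular_h2(eps_b_n, eps_h2_per_atom):
    """True once adsorption from molecular H2 becomes energetically favorable.

    eps_h2_per_atom is the H2 reference per atom (the dashed line in Figure 1);
    its numerical value depends on the DFT setup and is therefore left as input.
    """
    return eps_b_n > eps_h2_per_atom
```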
The monovacancy defect is attractive for hydrogenation, which yields initially strong C-H bonds. The high activity of carbon atoms near the vacancy allows rapid formation of a diamond core, with the bonding of the layers occurring already after the adsorption of 3-4 hydrogen atoms. The first three hydrogen atoms in the case of AB stacking passivate the dangling bonds of the vacancy atoms (Figure S3). In the case of AA' packing, only two of the three vacancy atoms are passivated, after which the third hydrogen atom attaches to the defect-free neighboring graphene layer, resulting in the formation of the first C-C interlayer bond. Next, we studied different patterns of diamond core formation in the case of AB and AA' bigraphene stacking, as seen in Figure 1a. While in the case of AB packing all neighboring vacancy atoms are passivated with hydrogen atoms, in the case of AA' one of the atoms binds to an atom of the nearby perfect bigraphene layer. This effect can be explained by the curvature of the defective graphene sheet, which changes the interlayer C-C distance. This leads to more rapid formation of the diamond core, which further results in the formation of the (10-10) lonsdaleite surface.
If we take into account that AA' and AB bigraphene stackings have very close energy and can occur within the same film, we can conclude that there is a probability of lonsdaleite formation via chemically induced phase transition instead of previously reported cubic diamond for the case of perfect bigraphene [11]. However, it should be noted that the energy of C-H bonds formed near the single vacancy quickly tends to a corresponding dependence obtained for perfect graphene already after deposition of 15 hydrogen atoms indicating rapidly decaying influence of the active center on the graphene structure. The attachment of such a number of H atoms forms a diamond core of sufficient size to spread beyond the defective region of the bigraphene repeating the corresponding hydrogen arrangement scheme for the perfect case and, consequently, preserving the geometry of the corresponding diamond film (cubic and hexagonal diamond for AB and AA' stacked bigraphene, respectively). The presence of several nearby vacancies can facilitate diamond formation. In contrast to the vacancy located only in one layer, we considered the agglomeration of cross layer vacancies in bilayer AB stacked graphene which can be obtained by its irradiation with low-energy ion beams of high density. In this structure, the defects affect each other. It leads to the formation of a reactive area between them which easily binds hydrogen atoms and forms interlayer bonds. We considered an agglomerate of vacancies separated from each other by about 5 Å, as seen in Figure 1b. After full passivation of the atoms in the first coordination sphere of the vacancy (binding energy 4.6-4.9 eV), passivation occurs in the region between the vacancies producing a fully hydrogenated area ( Figure S4). After this step, hydrogen adsorbs on the outer perimeter of the agglomerate forming a hydrogenation front that spreads uniformly in all directions. As can be seen from Figure 1b, already after the hydrogenation of the second coordination sphere, the C-H bond energy is equivalent to the corresponding value obtained for perfect graphene. This also confirms the local influence of the defects on the phase transition processes in the bilayer graphene. As the number of nearby vacancies increases, the reactive region enlarges. It results in the shift of ε b (n) intersection with ε H 2 from 15 atoms for 1 vacancy (blue line in Figure 1b) to 65 hydrogen atoms for the agglomerate of 4 vacancies (purple line in Figure 1b), respectively. For the latter case, the average C-H binding energy differs only slightly from ε H 2 . In the case of the Stone-Wales (SW) defect commonly observed in graphene [20], we found that the energy of the initially formed C-H bonds is significantly lower than that in the case of the vacancy defect. Nevertheless, it is higher compared with the perfect surface case, see Figure 2a. The atoms of this defect are displaced from the plane which favors hydrogen adsorption and bond formation between the graphene layers. The adsorption of just two hydrogens on both bigraphene surfaces leads to the formation of the interlayer bond ( Figure S5). The SW defect facilitates diamondization both in the case of AB and AA' stacked graphene with the formation of cubic and hexagonal diamond, respectively. In the latter case, the bonding of the first hydrogen atoms leads to the binding energy increase due to the favorable adsorption of hydrogen onto carbon atoms shared by the 5-and 7-member rings [21]. 
After the adsorption of hydrogen atoms onto the second coordination sphere, the impact of the defect on the adsorption energy almost vanishes and the character of the binding energy becomes almost the same as in the perfect case. Note that the Stone-Wales defect is a constituent part of the grain boundary (GB) in polycrystalline graphene connecting graphene domains with different orientations [22]. Since the stacking of bigraphene defines the surface orientation and even the symmetry of the produced diamond film [2], layer connection of polycrystalline bigraphene can lead to a polycrystalline two-dimensional diamond consisting of grains with different surfaces. Finally, since C-H bonding at the interface is more favorable than in the case of ideal bigraphene, we can assume that hydrogen deposition occurs first at the GB atoms and only then propagates in both directions, connecting diamond films of different orientations in the same structure. We considered such a case in the example of polycrystalline bilayer graphene with grains misoriented by an angle of 11.5° (Figure 2b). It was found that C-H bond formation in such a structure is the same as in the case of the Stone-Wales defect, as seen in Figure 2a (green line). As we expected, the initial hydrogenation occurs through the grain boundary (Figure S6) and the diamondization front then spreads parallel in both directions. We noted that the energy trend of the latter region is close to that of perfect diamane. Such a process finally leads to a fully diamondized film composed of grains. This film consists of the connection of cubic and hexagonal 2D diamonds with surfaces (111) and (10-10), respectively, as seen in Figure 2c.
Thus, hydrogenation appears to be a prospective way to obtain a specific two-dimensional diamond structure that combines different surfaces. The grain boundary energy of the studied junction is ~1.3 eV/Å, which is only slightly higher than for other considered two-dimensional carbon interfaces, namely graphene (<0.4 eV/Å [22]) and graphene/graphane (1.01 eV/Å [14]). Conclusions In summary, we found that the type and concentration of structural defects can substantially impact the initial stages of diamond nucleation. At the same time, they do not influence further diamond formation. The impact of defects on the C-H bonding strength already disappears at the second coordination sphere. We show that vacancy agglomerates (which can be produced by low-energy ion irradiation) can substantially expand the reactive region, which removes the nucleation barrier for the first stages of nucleation. The impact of Stone-Wales defects is weaker but still promotes the hydrogenation and bonding of the graphene layers. We show that a 1D defect (dislocation) not only facilitates diamondization but may also lead to the appearance of a 2D diamond consisting of chemically connected grains of different crystallographic orientations. Therefore, polycrystalline graphene, as usually observed in experiment, can produce specific 2D diamond polycrystals containing different surfaces. Even hexagonal and cubic 2D diamonds can coexist in the same film, with a grain boundary energy comparable to the corresponding values for other two-dimensional carbon structures. Our study can be further expanded by a more detailed investigation of the thermodynamic stability of the formed diamond clusters and explicit calculations of the dependence of the nucleation barrier on pressure. Other possible structural defects (divacancies, 5|8|5, 555|777, etc.) as well as multilayer graphene (more than two layers) should also be considered in further work. We believe that the present study will help in further research on the diamondization of multilayer graphene and the production of new carbon nanomaterials with tunable properties for various applications. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12223983/s1, Figure S1: Average binding energy ε_b(n) as a function of the H number on the surface of perfect monolayer and bilayer graphene; Figure S2: Structures of H atoms adsorbed on perfect bilayer graphene with AB stacking; Figure S3: Structures of H atoms adsorbed on bilayer graphene containing a vacancy defect in one layer with AB and AA' stacking; Figure S4: Structures of H atoms adsorbed on bilayer graphene containing 4 cross-layer vacancies; Figure S5: Structures of H atoms adsorbed on bilayer graphene containing an SW defect in one layer with AB and AA' stacking; Figure S6
4,559.8
2022-11-01T00:00:00.000
[ "Materials Science" ]
Width Confinement in 3D Dielectric Waveguides and Comparison to 2D Analytical Models Two-dimensional (2D) analytical models are only approximations for 3D structures where one cross-sectional dimension is much larger than the other. This paper uses the finite-difference time-domain (FDTD) method to perform numerical experiments on fully 3D dielectric waveguide structures to compute the wave impedance and the propagation constant for finite-width dielectric waveguides. These data are used to determine the width required to achieve good correlation against 2D analytical models. Results show that width ≥ 10× height is the limit for good approximation. I. INTRODUCTION Nano-scale dielectric waveguides are a critical component in modern integrated circuit design for both signal/power integrity and ultra-high-speed data transfer. Analytical formulation of dielectric waveguides has historically been limited to the spatially two-dimensional (2D) canonical form of the slab waveguide [1]-[7]. However, realistic dielectric waveguides are fully three-dimensional (3D) structures. Despite the dimensional discrepancy, 2D analytical models are often used as approximations, provided that there is only tight confinement in one of the two cross-sectional dimensions, and the length is functionally infinite along the longitudinal section in both directions. 3D dielectric waveguides with width much larger than height are spatially inefficient. Therefore, in this paper we conduct numerical experiments in FDTD to determine the limit for the width at which 2D analytical models may be considered a good approximation of 3D dielectric waveguides. The 2D analytical models for the wave impedance Z_w (Ohm) and the propagation constant β (rad/m) are used for comparison. II. FORMULATION The waveguides of interest are fully 3D and have step index contrast between core and cladding regions with refractive indices n_1 = 3.5 and n_2 = 1.5 (corresponding to Si/SiO_2), respectively, where the core is surrounded uniformly on all sides by a cladding which is assumed to be infinite in extent. The core region geometry is shown in Fig. 1, where we note that the top and bottom walls of the waveguide are separated by the height (δ), the left and right walls are separated by the width (w), and all walls are smooth. The 2D analytical Z_w model defined in (1) of [4] may be used as an approximation for comparison with the 3D simulation data, where μ_0 is the free-space magnetic permeability (H/m), ω = 2πf is the angular frequency (rad/s), f is the cyclic frequency (Hz), the field components E_y (V/m) and H_z (A/m) are time-harmonic frequency-domain phasors, and Z_w is purely imaginary for a smooth dielectric waveguide with no added loss mechanism (such as surface roughness or lossy material). The propagation constant can be evaluated from the relation of E-field components using the imaginary component of the complex logarithm in (2) of [8], where k_0 is the free-space wave number at the source wavelength λ_0, n_eff is the effective refractive index as calculated from the Effective Index Method [2, §D.C], the real and imaginary parts of log(E_1/E_2) correspond to α and β, α = 0 in the absence of any additional loss mechanism, and ℓ is the distance between E_2 and E_1. The arg function returns the complex angle including any and all additional 2πm turns with m ∈ Z.
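In practice, evaluating (2) amounts to unwrapping the phase of the ratio of two sampled field phasors. A minimal sketch of that post-processing step is shown below; the branch selection via an effective-index guess and the variable names are illustrative assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def propagation_constant(e1, e2, ell_um, wavelength_um, n_eff_guess):
    """Estimate beta (rad/um) from two complex E_y phasors sampled ell_um apart.

    The principal value of arg(E1/E2) is unwrapped by choosing the 2*pi*m branch
    closest to an effective-index guess (e.g., from the Effective Index Method).
    """
    k0 = 2.0 * np.pi / wavelength_um
    raw_phase = np.angle(e1 / e2)                      # principal value in (-pi, pi]
    m = np.round((k0 * n_eff_guess * ell_um - raw_phase) / (2.0 * np.pi))
    return (raw_phase + 2.0 * np.pi * m) / ell_um

# Example with made-up phasors, ell = 5 um, lambda0 = 1.54 um, n_eff guess of 2.8
beta = propagation_constant(1.0 + 0.2j, 0.4 - 0.9j, 5.0, 1.54, 2.8)
```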
III. RESULTS AND DISCUSSION For computing Z_w using (1), each numerical experiment uses λ_0 = 1.54 µm and δ = 200 nm, and reported values are evaluated at f = 194.8 THz. FDTD field data are collected at the midpoints along the length and the width w, with a three-cell offset from the bottom of the waveguide core region. These data are collected over widths varying from 200 nm to 5 µm. The results are shown in Fig. 2, where ℑ{Z} is the imaginary component of the complex number Z. We see therein that Z_w calculated with FDTD approaches the 2D analytical model and saturates at w ≈ 800 nm. There is a noticeable offset between the 3D FDTD data and the 2D analytical approximation below that point but little variation between FDTD and analytical calculations above that point. In this case, the 2D analytical model is a good approximation for w ≥ 4δ. Sample FDTD Z_w data are shown for all discrete cells in the cross-section spanned by the width and the length in Fig. 3. The boundary of the region directly below the waveguide core is shown as dotted lines. Within the below-core region there is minimal variation along the length. Z_w settles to a stable value within 2 µm from the source location at z = 0, and the variations along w appear only near the region boundary. Outside the boundary there are several null points appearing periodically along the length. The length interval between nulls seems to be inversely proportional to the waveguide width. The null points are likely the result of 3D multi-modal behavior, as more propagating modes exist in the waveguide with increasing w. FDTD field data are also collected at two w × δ planes along the length. The average of all E_y values in those planes is then related to β using (2), where ℓ = 5 µm. We see saturation-like behavior in Fig. 4 similar to Fig. 2. However, the saturation point for β appears to be at w ≈ 2 µm. This implies that the 2D analytical model is a good approximation for the 3D dielectric waveguide where w ≥ 10δ. Since Z_w is an implicit function of β through γ, this w limit should be used when utilizing the 2D analytical approximations. IV. CONCLUSION The experiments conducted in this paper showed that the fundamental parameters Z_w and β have a strong dependence on the w/δ ratio, where in this case good correlation to the 2D analytical model is achieved for w ≥ 10δ; the data suggest an order-of-magnitude difference would be sufficient. Further experiments should be conducted for additional parameters, e.g., α, in both smooth and rough waveguides. Fig. 4: β across widths. Fig. 5: The point-to-point β calculations (without field-averaging), where the boundary between core and cladding is shown as a red dotted line. The mode configuration changes as w increases, hence the use of field-averaging in Fig. 4.
1,367.8
2023-01-10T00:00:00.000
[ "Physics", "Engineering" ]
Pheomelanin Effect on UVB Radiation-Induced Oxidation/Nitration of l-Tyrosine

Pheomelanin is a natural yellow-reddish sulfur-containing pigment derived from tyrosinase-catalyzed oxidation of tyrosine in the presence of cysteine. Generally, the formation of melanin pigments is a protective response against the damaging effects of UV radiation in skin. However, pheomelanin, like other photosensitizing substances, can trigger, following exposure to UV radiation, photochemical reactions capable of modifying and damaging cellular components. The photoproperties of this natural pigment have been studied by analyzing the pheomelanin effect on oxidation/nitration of tyrosine induced by UVB radiation at different pH values and in the presence of iron ions. The photoproperties of pheomelanin can be modulated by various experimental conditions, ranging from photoprotection to the triggering of potentially damaging photochemical reactions. The study of the photomodification of l-Tyrosine in the presence of the natural pigment pheomelanin has a special relevance, since this tyrosine oxidation/nitration pathway can potentially occur in vivo in tissues exposed to sunlight and play a role in the mechanisms of tissue damage induced by UV radiation.

Introduction

Pheomelanin is one of the existing forms of the natural pigment melanin. Melanin is present in the skin in two forms: eumelanin and pheomelanin. Eumelanin is a heterogeneous polymer composed mainly of dihydroxyindole units derived from tyrosinase-catalyzed oxidation of tyrosine or 3,4-dihydroxyphenylalanine (DOPA) to dopaquinone. Compared to eumelanin, the pheomelanin structure differs due to non-enzymatic addition of cysteine to dopaquinone during the pathway of pigment biosynthesis. DOPA derivatives with cysteine, such as 5-S-cysteinyldopa and, in minor amount, 2-S-cysteinyldopa, are incorporated into the pigment in the form of 1,4-benzothiazine units (Figure 1) [1]. Before being incorporated into pheomelanin, a minor part of the 1,4-benzothiazine units may undergo further structural modifications with formation of a benzothiazole moiety, which copolymerizes with benzothiazine units [2][3][4]. Interestingly, slight variations in the monomer composition of the pigment polymer skeleton have been shown to determine significant differences in light absorption, antioxidant activity, redox behavior, and metal chelation [5].

It is commonly believed that melanin plays an important role in the modulation of the photochemical reactions that occur in the skin. Numerous experimental and clinical evidences have shown a protective role of eumelanin against the damage triggered by UV irradiation of the skin [6]. A lower incidence of UV-induced skin diseases is observed in individuals with darker skin pigmentation, where eumelanin is present. Conversely, a higher incidence of UV-induced skin diseases was found in red-haired individuals with pale skin and freckles.
Traditionally, this UV susceptibility trait has been associated with a high tendency to sunburn and an increased risk of skin tumors and melanoma [7][8][9]. The damage caused by UV rays would be determined either by the absence of pigmentation or by the photosensitizing properties of the pheomelanin present in the skin of these individuals. Notably, pheomelanin has the capacity to act as a photosensitizer by inducing the generation of reactive oxygen species (ROS) upon irradiation with UV light [8,[10][11][12][13][14]. Pheomelanin has been observed to increase lipid peroxidation following exposure of liposomes to UV irradiation, suggesting that pheomelanin may act as a pro-oxidant [15]. In particular, upon exposure to UV radiation, aromatic rings present in pheomelanin (Pheo) are excited to the singlet state (1Pheo*) and rapidly converted to the excited triplet state (3Pheo*) [19,20]. The triplet state of pheomelanin can act as a photosensitizer triggering photooxidative events by radical-mediated (type I) and singlet oxygen-mediated (type II) mechanisms. The type I mechanism involves free radical formation through hydrogen atom or electron transfer by interaction of the triplet excited state of the sensitizer with target molecules (S) or molecular oxygen:

3Pheo* + S → Pheo•− + S•+

Recently, pheomelanin has also been implicated in UV-independent pathways of oxidative stress [22]. In this study, the photoproperties of this natural pigment were studied by analyzing the effect of pheomelanin on the oxidation/nitration of tyrosine induced by UVB radiation at different pH values and in the presence of iron ions. In particular, the pheomelanin effect on UVB-induced oxidation/nitration of tyrosine has been studied at physiological pH and at a weakly acid pH. Under pathophysiological situations, such as inflammation, tissue pH close to 5.5-6 can be found. Moreover, recent studies have shown that acid melanosomal pH suppresses melanogenesis, especially eumelanin formation, in melanocytes [23]. Notably, it has been observed that, at pH 5.8, eumelanin biosynthesis is suppressed, while pheomelanin production is enhanced [24].

Following UVB radiation of L-Tyrosine, the tyrosyl radical that is generated dimerizes with the formation of 3,3′-dityrosine; in the presence of nitrite the photochemical reaction produces tyrosyl radical and reactive nitrogen species which combine to form 3-nitrotyrosine as a further product [25][26][27] (Figure 2). Both 3,3′-dityrosine and 3-nitrotyrosine are considered diagnostic markers of the in vivo production of reactive oxygen and nitrogen species [27][28][29].
Although reactive nitrogen species such as peroxynitrite have been the most widely studied nitrating agents, tyrosine nitration also occurs through several alternative routes. In this regard, free tyrosine and tyrosine protein residue nitration can be achieved through mechanisms involving peroxidase/H2O2-dependent oxidation of nitrite to the nitrogen dioxide radical (•NO2) [30][31][32]. In inflammation, myeloperoxidase from activated leukocytes catalyzes tyrosine nitration at high levels [33][34][35][36]. Photonitration of tyrosine to 3-nitrotyrosine has already been demonstrated with methylene blue dye and riboflavin as sensitizers [37,38]. Methylene blue-sensitized photomodification of tyrosine in the presence of nitrite occurs mainly through a process which involves singlet oxygen (type II mechanism). Conversely, singlet oxygen plays a minor role in the tyrosine photooxidation/photonitration mediated by riboflavin as sensitizer [38][39][40]. Interestingly, the oxidation and nitration of tyrosine residues in proteins are considered important post-translational modifications with consequences on the function of proteins and therefore on cellular homeostasis [41][42][43][44]. The study of the photomodification of tyrosine in the presence of the natural pigment pheomelanin has a special relevance, since this tyrosine oxidation/nitration pathway can potentially occur in vivo in tissues exposed to sunlight and play a role in the mechanisms of tissue damage induced by UV radiation.

UVB Radiation-Induced Photooxidation/Photonitration of L-Tyrosine

The exposure to ultraviolet light (UVB), at room temperature, of a solution containing 1 mM tyrosine leads to the formation of 0.25 ± 0.07 µM of 3,3′-dityrosine at pH 5.5 and 0.13 ± 0.02 µM at pH 7.4 after 30 min of exposure. Tyrosine dimerization was not observed in controls kept in the dark.

The exposure of a 1 mM tyrosine solution to UVB radiation in the presence of 10 mM nitrite, under the same experimental conditions reported above, leads to 3-nitrotyrosine as a further product in addition to 3,3′-dityrosine (Figure 3). When nitrite is present, 3,3′-dityrosine is 0.08 ± 0.02 µM and 3.60 ± 0.46 µM at pH 5.5 and pH 7.4, respectively. The amount of 3-nitrotyrosine formed is 2.37 ± 0.4 µM and 1.89 ± 0.15 µM at pH 5.5 and pH 7.4, respectively, after 30 min of exposure. At low pH values nitrite generates nitrating species which, in the presence of tyrosine, lead to the formation of 3-nitrotyrosine [45]. Control experiments, in which tyrosine and nitrite are incubated in the dark, indicate that, under our experimental conditions, this reaction pathway can contribute minimally to the production of 3-nitrotyrosine only at pH below 3.3.
Figure 3. UVB-induced photooxidation/photonitration of tyrosine. A reaction mix containing 1 mM tyrosine in 0.2 M K-phosphate buffer at pH 5.5 or pH 7.4, 0.1 mM DTPA, and 10 mM K-nitrite is exposed to UVB radiation. After 30 min of exposure, the reaction is stopped by placing the mixture in the dark and the solution is analyzed by HPLC to determine the formation of 3,3′-dityrosine (A) and 3-nitrotyrosine (B), as reported in Materials and Methods. Controls in the dark correspond to the unexposed solution.

Effect of Pheomelanin on UVB Radiation-Induced Photooxidation/Photonitration of L-Tyrosine

In order to evaluate the photoproperties of pheomelanin on the oxidative/nitrative modifications of tyrosine induced by UVB rays, 1 mM tyrosine and 10 mM nitrite were exposed to UVB radiation in the presence of 4.2 µg/mL synthetic pheomelanin at physiological pH 7.4 and at pH 5.5. Pheomelanin was enzymatically prepared from L-Dopa and cysteine as reported in the experimental section. After an exposure of 30 min, both the formation of 3,3′-dityrosine and the conversion of tyrosine to 3-nitrotyrosine were assayed. Overall, pheomelanin exerts a photoprotective (antioxidant) effect on the oxidation/nitration of tyrosine induced by UVB radiation (Figure 4). However, at pH 5.5 pheomelanin acts as a photosensitizer (pro-oxidant) in the nitrative modification of tyrosine. As shown in Figure 5B, pheomelanin does not inhibit the nitration of tyrosine; rather, there is a 60% increase in the formation of 3-nitrotyrosine compared to the control exposed to UVB radiation in the absence of pheomelanin.
In control experiments in which pheomelanin alone and nitrite were exposed to UVB radiation, neither nitrotyrosine nor dityrosine was detectable.

Figure 4. Photooxidation/photonitration of tyrosine by the nitrite/pheomelanin/UVB system. Pheomelanin 4.2 µg/mL is added to a reaction mixture containing 1 mM tyrosine, 10 mM K-nitrite, and 0.1 mM DTPA in 0.2 M K-phosphate buffer at pH 5.5 or pH 7.4. The solution is exposed to UVB rays for 30 min. The reaction is stopped by placing the mixture in the dark and the supernatant, obtained after centrifugation, is analyzed by HPLC to measure 3,3′-dityrosine (A) and 3-nitrotyrosine (B), as reported in Materials and Methods. Controls in the dark correspond to unexposed reaction mixtures (pheomelanin/nitrite/tyrosine system). *** p < 0.001, ** p < 0.01, * p < 0.05.

Figure 5. The solution is exposed to UVB rays for 30 min. The reaction is stopped by placing the mixture in the dark and the supernatant, obtained after centrifugation, is analyzed by HPLC to determine the formation of 3-nitrotyrosine (•) and 3,3′-dityrosine (■), as reported in Materials and Methods.
Figure 5 shows the formation of 3,3′-dityrosine and 3-nitrotyrosine at various concentrations of pheomelanin (0.1-4 µg/mL) at pH 7.4 and pH 5.5. At all concentrations used, pheomelanin has a dose-dependent photoprotective action on the formation of 3,3′-dityrosine at both pH 5.5 and pH 7.4. The photosensitizing action on the formation of 3-nitrotyrosine at pH 5.5 is observed in the range 0.4-4 µg/mL.

Photoproperties of Pheomelanin on UVB-Induced Oxidative/Nitrative Modifications of L-Tyrosine: Effect of Fe(III)

It is known that melanins have the ability to bind various metals, with the result of modifying their photoproperties [46][47][48][49]. In order to evaluate how the presence of metals can influence the oxidative/nitrative modifications of tyrosine, exposure to UVB rays was performed with the addition of Fe(III) to the reaction mixture. Experiments performed in the absence of the metal chelator DTPA showed analogous results (data not shown). At pH 5.5, it is observed that the presence of metals influences the photoproperties of pheomelanin by reducing its antioxidant activity against dityrosine formation (Figure 6). Regarding the formation of 3-nitrotyrosine, the photosensitizer effect of pheomelanin is not affected either by the absence of the chelator or by the addition of Fe(III).

Figure 6. The solution is exposed to UVB rays for 30 min. The reaction is stopped by placing the mixture in the dark and the supernatant, obtained after centrifugation, is analyzed by HPLC to determine the formation of 3-nitrotyrosine and 3,3′-dityrosine, as reported in Materials and Methods. ** p < 0.01, * p < 0.05.

Pheomelanin Effect on Oxidative/Nitrative Modifications of L-Tyrosine Induced by UVB Radiation: Role of Singlet Oxygen

The photooxidative reactions can be the result of radical-type processes (type I) or of processes mediated by singlet oxygen (type II). Both mechanisms can contribute to the photooxidative reactions at the same time. In order to evaluate the role of singlet oxygen (1O2) in the pheomelanin-sensitized nitration reaction of tyrosine at pH 5.5, the yields of 3-nitrotyrosine in H2O and D2O as solvent were compared. Replacement of H2O by D2O increases the lifetime of singlet oxygen by about 15 times [50] and, consequently, stimulates 1O2-dependent reactions. As shown in Figure 7, the production of 3-nitrotyrosine is approximately 8.4 times greater in D2O than in H2O. This effect is indicative of the participation of singlet oxygen in the reaction. The formation of 3,3′-dityrosine is not affected by D2O (Supplementary Data). It has also been observed that the formation of 3-nitrotyrosine is significantly reduced in the presence of sodium azide (NaN3), a known quencher of singlet oxygen (Figure 8). The inhibitory effect of azide confirms the intermediacy of the type II mechanism in the pheomelanin-sensitized formation of 3-nitrotyrosine.
Figure 7. Photooxidation of tyrosine by the nitrite/pheomelanin/UVB system: effect of D2O and NaN3. Pheomelanin 4.2 µg/mL is added to the solution containing 1 mM tyrosine and 10 mM K-nitrite in 0.2 M K-phosphate buffer at pH 5.5 with 0.1 mM DTPA. The solution is exposed to UVB rays for 30 min. The reaction is stopped by placing the mixture in the dark and the supernatant, obtained after centrifugation, is analyzed by HPLC to determine the formation of 3-nitrotyrosine, as reported in Materials and Methods. In D2O, the pD (5.5) was taken as the measured pH + 0.4. NaN3 is added to a final concentration of 1 mM. *** p < 0.001.

Pheomelanin Effect on Oxidative/Nitrative Modifications of L-Tyrosine Induced by Peroxynitrite

Peroxynitrite induces both tyrosine oxidation to 3,3′-dityrosine and tyrosine nitration to 3-nitrotyrosine. Under our experimental conditions, peroxynitrite (100 µM, final concentration) added to a solution containing 100 µM of tyrosine generates 0.53 ± 0.02 µM of 3,3′-dityrosine and 6.75 ± 0.27 µM of 3-nitrotyrosine, respectively. As shown in Figure 8, pheomelanin, at a concentration of 4.2 µg/mL, is able to inhibit both the formation of 3,3′-dityrosine (~42%) and that of 3-nitrotyrosine (~47%). As reported [51,52], peroxynitrite reacts, in vivo, mainly with carbon dioxide, forming a peroxynitrite-CO2 adduct which decomposes generating nitrogen dioxide radicals (•NO2) and the carbonate radical anion (CO3•−). In the presence of bicarbonate, tyrosine nitration mediated by peroxynitrite is generally increased due to the high oxidative/nitrative properties of the radicals generated by the decomposition of the peroxynitrite-CO2 adduct. The results shown in Figure 8 indicate that pheomelanin is equally effective in protecting tyrosine from the nitrative and oxidative action of peroxynitrite also in the presence of 25 mM bicarbonate.
Discussion

The results of this study show that the photoproperties of pheomelanin can be modulated by various experimental conditions, ranging from photoprotection to the triggering of potentially damaging photochemical reactions. These properties were studied by analyzing the effect of pheomelanin on UVB radiation-induced oxidation/nitration of tyrosine. UVB irradiation leads, through a tyrosyl radical intermediate, to the dimerization of tyrosine with the formation of 3,3′-dityrosine, and in the presence of nitrite the photochemical reaction forms 3-nitrotyrosine as an additional product. The mechanism underlying the formation of 3-nitrotyrosine likely involves the combination of the tyrosyl radical with the nitrogen dioxide radical (•NO2), which may be generated by photooxidation of nitrite [53,54]. In the presence of pheomelanin, tyrosine is dose-dependently protected from oxidation to 3,3′-dityrosine both at pH 5.5 and at physiological pH (pH 7.4). Furthermore, pheomelanin can perform a protective function against the conversion of tyrosine to 3-nitrotyrosine at pH 7.4. It is known that UVB radiation induces the formation of oxyradicals capable of triggering oxidative reactions [55]. Therefore, the protective action of pheomelanin against the photooxidation of tyrosine could be related to its ability to act as a free radical scavenger. The experiments conducted on the formation of 3,3′-dityrosine and 3-nitrotyrosine induced by peroxynitrite (ONOO−) confirm this hypothesis.
Peroxynitrite, which is generated in vivo from the reaction of nitric oxide ( • NO) with the superoxide anion (O 2 •− ), is a very reactive species capable of nitrating and oxidizing tyrosine. This reactivity is mediated by the hydroxyl radical ( • OH) and by the nitrogen dioxide radical ( • NO 2 ) which are generated by the homolytic cleavage of peroxynitrite. In the presence of carbon dioxide (CO 2 ), a peroxynitrite-CO 2 adduct is formed which generates a further radical, the carbonate radical anion (CO 3 •−) . Pheomelanin showed protective properties both on the formation of 3,3 -dityrosine and on the conversion of tyrosine to 3-nitrotyrosine induced both by peroxynitrite and peroxynitrite-CO 2 adduct. These results indicate that pheomelanin can act as free radical scavenger and the observed protective action of the pigment on UVB-induced tyrosine modifications can be attributed to this property. An interesting result that emerged from our investigations is that pheomelanin can have pro-oxidant properties under some experimental conditions. We observed that the nitration of tyrosine to 3-nitrotyrosine induced by UVB radiation in presence of nitrite at pH 5.5 is increased when carried out in the presence of pheomelanin. These results indicate that the properties of pheomelanin can be significantly influenced by the pH during UVB irradiation, switching from antioxidant (pH 7.4) to pro-oxidant (pH 5.5). The photochemical experiments conducted with the addition of iron ion are also of particular interest. Pheomelanin has a remarkable ability to bind metals and this property leads often to a modification of the photoprotective capabilities of the pigment. In our experimental conditions by adding Fe(III), we observed a reduced ability to inhibit the oxidative reaction. These results indicate that the antioxidant properties of pheomelanin are sensitive to the effect of metal ions such as iron. It is plausible that the pigment bond with iron induces an increase in the production of highly oxidizing reactive species whose action can only be partially counteracted by the antioxidant activity of the pigment itself. The production of reactive oxygen species (ROS) resulting from the interaction of oxygen with pheomelanin exposed to UV radiation has often been interpreted as the cause of its pro-oxidant properties. In these hypotheses, pheomelanin (Pheo) would act as a sensitizer and its ability to stimulate the formation of 3-nitrotyrosine (NO 2 Tyr) in the pheomelanin/nitrite/UVB system could be rationalized with the following sequence of reactions: In competition with the nitrogen dioxide radical ( • NO 2 ), the tyrosyl radical can dimerize with the formation of 3,3 -dityrosine (Dityr): Another possible pathway of formation of the tyrosyl radical and of the nitrogen dioxide radical, responsible for the production of 3-nitrotyrosine, could occur through the direct interaction of the photoexcited pheomelanin in the excited triplet state ( 3 Pheo*) with tyrosine and with nitrite (type I mechanism): Photooxidation of pheomelanin-dependent tyrosine can also be mediated by singlet oxygen that is generated by energy transfer from photoexcited pheomelanin in the triplet state to ground state oxygen according to the following scheme (type II mechanism): Tyrosyl radicals can either dimerize or react with a nitrite-derived species to form 3-nitrotyrosine. To our knowledge, the production of nitrating species by direct interaction of nitrite with singlet oxygen has not been reported. 
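The displayed reaction sequences referred to in the passage above did not survive extraction. A hedged reconstruction, consistent with the type I and type II mechanisms described earlier in the text (species names follow the paper's notation; the exact steps of the original displays are not recoverable), is sketched below in LaTeX:

```latex
\begin{align*}
&\text{Excitation:} && \mathrm{Pheo} \xrightarrow{h\nu_{\mathrm{UVB}}} {}^{1}\mathrm{Pheo}^{*} \rightarrow {}^{3}\mathrm{Pheo}^{*} \\
&\text{Type I:}     && {}^{3}\mathrm{Pheo}^{*} + \mathrm{TyrOH} \rightarrow \mathrm{Pheo}^{\bullet -} + \mathrm{TyrO}^{\bullet} + \mathrm{H}^{+} \\
&                   && {}^{3}\mathrm{Pheo}^{*} + \mathrm{NO_2^{-}} \rightarrow \mathrm{Pheo}^{\bullet -} + {}^{\bullet}\mathrm{NO_2} \\
&\text{Type II:}    && {}^{3}\mathrm{Pheo}^{*} + {}^{3}\mathrm{O_2} \rightarrow \mathrm{Pheo} + {}^{1}\mathrm{O_2} \\
&\text{Products:}   && \mathrm{TyrO}^{\bullet} + {}^{\bullet}\mathrm{NO_2} \rightarrow \mathrm{NO_2Tyr} \\
&                   && 2\,\mathrm{TyrO}^{\bullet} \rightarrow \mathrm{DiTyr}
\end{align*}
```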
It is possible that indirect oxidation of nitrite by the radicals produced by type II mechanism can give rise to further oxidizing species which, as shown above, can contribute to the formation of 3-nitrotyrosine. The investigations carried out to obtain information on the mechanism through which the nitrite/pheomelanin/UVB system induces the nitration of tyrosine at pH 5.5 indicate that, in our experimental conditions, the process can involve singlet oxygen, indeed, in the presence of D 2 O, the production of 3-nitrotyrosine is considerably higher than that formed in H 2 O. Accordingly, the inhibition exerted by sodium azide on the generation of 3-nitrotyrosine possibly obeys to the known quenching effect on singlet oxygen [21]. These investigations are in agreement with previous studies on nitrite-induced nitration of tyrosine in the presence of methylene blue as a photosensitizer [37]. On the other hand, previous studies carried out in our laboratory have shown that the nitration of tyrosine in presence of riboflavin as a photosensitive agent is mainly of type I [38]. It cannot be excluded that in our case, which uses pheomelanin as photosensitizer, the type I mechanism may participate in the photonitration reaction of tyrosine at slightly acid pH. UVB radiation exposure experiments under anaerobic conditions are currently underway to verify the role of the type I photochemical reaction. Noteworthy, the photosensitizing effect exerted by pheomelanin is particularly efficient in a slightly acid environment and in the presence of metal ions, such as iron. These conditions, although not physiological, could acquire significance in some pathological situations such as during inflammatory processes or in the case of tissue ischemia. In both these states, pH values close to those used in our experiments are found in vivo (pH 5.8-6.1). Moreover, the forearm of a healthy man has an average surface pH around 5.4-5.9 [56]. Interestingly, pheomelanin synthesis is chemically promoted by weakly acid pH [24,57]. It has been reported that melanosomal pH regulates eumelanin/pheomelanin ratio in melanocytes with a shift towards a pheomelanic phenotype by lowering pH [58,59]. In our experimental conditions, supraphysiological concentrations of nitrite (10 mM) were used to highlight the nitration reaction of tyrosine. Nitrite is the main product of nitric oxide ( • NO) catabolism, the production of which increases in inflammation [60]. Nitrite is also a constituent of sweat, where it can reach concentrations in the µM range following its formation on the surface of the skin by commensal bacteria. Furthermore, in normal conditions of exposure to the sun and heat, the surface layer of sweat undergoes a rapid increase in concentration caused by evaporation, so that the local concentration of nitrite can increase several times [61]. The result presented herein indicates that photoproperties of pheomelanin can be modulated by various experimental conditions. It is well-known that pheomelanin undergoes structural modifications by UV rays. In the course of the biosynthetic pathways, modification involves benzothiazine units which are gradually converted to benzothiazole [4]. The relative ratio of these two types of pheomelanin moieties appears important in determining whether pheomelanin acts as a pro-oxidant [5,62]. Recently, exploring the photoreactivity of pheomelanin by UVA radiation, the benzothiazole moiety has been shown to be more reactive than benzothiazine moiety [63]. 
Under our experimental conditions, UVB radiation and reactive nitrogen species could similarly influence pigment photoreactivity and induce structural modifications of pheomelanin worth to be further explored. Chemicals L-Cysteine, L-Dopa, L-Tyrosine, diethylenetriaminopentacetic acid (DTPA), sodium azide (NaN 3 ), mushroom tyrosinase, horseradish peroxidase was provided by Sigma Aldrich (St. Louis, MO, USA). Deuterium oxide (D 2 O) was obtained from Aldrich (Milwaukee, WI, USA). The 3-nitrotyrosine was from Fluka (Buchs, Switzerland). All other reagents were used with the highest level of purity commercially available. The synthesis of 3,3 -dityrosine was enzymatically carried out from L-Tyrosine and hydrogen peroxide by horseradish peroxidase [64]. Peroxynitrite was prepared from K-nitrite and hydrogen peroxide under acid conditions as previously reported [65]. Synthesis of Pheomelanin Pheomelanin was synthesized from L-cysteine and L-Dopa [66]. L-Dopa (25 µmoles) was dissolved in 20 mL of K-phosphate buffer (0.05 M pH 6.8) and incubated with mushroom tyrosinase (1.2 mg). After 30 s, L-cysteine (53 µmol) was added, and the reaction mixture was left overnight at 37 • C in agitation. To ensure total conversion to cysteinyldopa isomers, pheomelanin was prepared with cysteine/Dopa molar ratio of 2:1 [67]. The reaction was stopped by reducing pH to 2.2 with 6 N HCl, and the reaction mixture was subsequently centrifuged at 5000 rpm for 25 min. The residue is suspended in H 2 O and subsequently lyophilized, thus obtaining approximately 3.4 mg of dark brown pheomelanin. The identification of pheomelanin as a synthetic product was performed by analyzing the absorption spectrum (λ = 800-200 nm) of a solution containing 2.5 µg/mL pheomelanin in 1 M K-phosphate buffer, pH 8.0. The absorbance of a 4 µg/mL pheomelanin solution was on average 0.092 at 400 nm [68]. Stock solutions of synthetic pheomelanin were in 1 M K-phosphate buffer, pH 8.0. The spectrum analysis was carried out using a UV-vis Cary 50 Scan spectrophotometer. Nitration and Oxidation of L-Tyrosine Induced by UVB Radiation The reaction mix containing 1 mM L-Tyrosine, 0.05 M K-phosphate buffer, pH 7.4, or 0.2 M K-phosphate, pH 5.5 and 0.1 mM DTPA, was incubated in the absence or by adding 10 mM K-nitrite to a Petri dish. To study the effect of pheomelanin on the oxidative/nitrative modifications of tyrosine induced by UVB rays, the final concentration of the pigment in the reaction mixture was 4.2 µg/mL. The photooxidation of tyrosine was initiated by exposing the reaction mixture to UVB radiation produced by two fluorescent lamps at room temperature. The irradiation was interrupted for 1 min every 5 min to mix the suspension and prevent overheating of the reaction mixture. After 30 min of irradiation, the samples were centrifuged for 5 min at 12,000 rpm, and the supernatant analyzed by HPLC to verify the formation of 3,3 -dityrosine and 3-nitrotyrosine. Exposure to UVB Radiation The exposure was carried out in an irradiation cabin built by the Bioltecnical Service, Nettuno, RM (Italy). Two Sankyo Denki G15T8E UVB fluorescent lamps (λ = 270-320 nm with a maximum peak at 313 nm) were mounted on the ceiling of a closed aluminum cabin equipped with a front door for loading the Petri dishes. The lamps emit a luminous flux, with energy administered in the unit of time equal to 2.5 J m −2 s −1 , perpendicular to the radiation plane placed at 23 cm distance. The total energy administered was 4500 J m −2 . 
The lamps have an efficiency equal to 1, i.e., all the absorbed power is transformed into UV radiation; furthermore the radiation emitted, given the geometry of the cabin and the reflectivity of the walls, ends entirely on the irradiation plane. Nitration and Oxidation of L-Tyrosine Induced by Peroxynitrite The experiments with peroxynitrite were performed as described in [69]. The reaction mixture containing 4.2 µg/mL pheomelanin, 100 µM L-Tyrosine, 0.2 M K-phosphate buffer, pH 7.4 and 0.1 mM DTPA, was incubated in the absence or presence of 25 mM Na-bicarbonate. The reaction was started by the addition of peroxynitrite (final concentration of 100 µM). After 5 min at room temperature, the solution was centrifuged for 5 min at 12,000 rpm, and the supernatant analyzed by HPLC to verify the formation of 3-nitrotyrosine and 3,3 -dityrosine. HPLC Analysis 3-nitrotyrosine and 3,3 -dithyrosine were analyzed by HPLC using a Waters chromatograph equipped with a model 600 pump and a model 600 gradient control module as reported [65,70]. Chromatographic separation was performed using a Nova-pak column. C18 (3.9 mm × 150 mm), 4 µm (Waters) and as mobile phase: (A) K-phosphate/H 3 PO 4 buffer, 50 mM, pH 3.0; (B) acetonitrile-water (50:50, v/v) with a flow rate of 1 mL/min at room temperature and a linear gradient from A to 33% of B in 10 min. 3-nitrotyrosine was analyzed at 360 nm, using a Waters 996 photodiode spectrophotometric detector. 3,3dithyrosine was analyzed using a Waters 474 fluorescence detector, setting the wavelength at 260 nm for the excitation and at 410 nm for emission. Peaks were identified using external standards and sample concentrations were calculated using standard curves. The elution times of 3-nitrotyrosine and 3,3 -dithyrosine are 8.9 and 7.5 min, respectively. The limit of determination of 3-nitrotyrosine and 3,3 -dithyrosine is 20 pmoles and 1 pmol, respectively. Data Analysis The results are expressed as mean values ± SEM of at least three separate experiments performed in duplicate. The statistical analyses were performed using Student's t-test; p < 0.05 was deemed significant. The graphs and data analysis were performed using the GraphPad Prism 4 program. Conclusions UVB radiation induces the photooxidation/photonitration of tyrosine. Pheomelanin is able to perform a protective function both on the tyrosine oxidation to 3,3 -dityrosine and on the conversion of tyrosine to 3-nitrotyrosine when the exposure is conducted at physiological pH; conversely at pH 5.5, the presence of pheomelanin induces a 60% increase in the formation of 3-nitrotyrosine. The photosensitizing action of pheomelanin in the nitration reaction of tyrosine to 3-nitrotyrosine at pH 5.5, is further increased by about 8 times in D 2 O, suggesting a role of 1 O 2 in the reaction mechanism. The addition of Fe(III) during the irradiation of tyrosine in presence of nitrite provokes a decrease of the antioxidant activity of pheomelanin also against the formation of 3,3 -dityrosine, indicating that the photoproperties of pheomelanin may be affected by the presence of metal ions. Finally, pheomelanin showed protective properties on oxidation/nitration of tyrosine induced by peroxynitrite and by the decomposition of the peroxynitrite-CO 2 adduct. 
An important implication of the results obtained is that the pheomelanin-dependent photonitration of tyrosine in the presence of nitrite could exert toxic effects by inducing the nitration of tyrosine protein residues present in the skin, with consequent functional alteration of the proteins themselves.
8,853.6
2021-12-27T00:00:00.000
[ "Biology", "Chemistry", "Physics" ]
Comparison of Methods for Batik Classification Using Multi Texton Histogram

Introduction

Indonesia is a country rich with diverse cultural heritages from its ancestors. One of the symbols reflecting Indonesian culture is Batik. Batik is the techniques, symbolism, and culture surrounding hand-dyed cotton and silk garments [1]. Batik motifs have symbolic meaning and high aesthetic value for Indonesians [1]. The existence and uniqueness of Batik have been acknowledged by UNESCO on October 2, 2009 as an Intangible Cultural Heritage of Humanity [2]. Most Indonesians, however, cannot recognize the characteristics of Batik motifs due to the diversity richly exhibited in each Indonesian region. Batik motifs depict the character, customs, and virtuous values of the regions from which they originate [3][4]. This study attempts to develop Batik classification.

Feature extraction based on textons has been successfully applied in Batik image analysis [4][5][6][7][8][9]. Textons are the elements of texture perception proposed by Julesz [10]. One texton-based method capable of fast performance is the Multi Texton Histogram (MTH) [11][12][13]. Originally, MTH was developed to analyze natural images. Moreover, MTH also works well in image retrieval studies [9], [14][15][16]. MTH can represent shape, color, and texture correlation through spatial correlation without any prior image segmentation. Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN) classifiers are employed to compare which one becomes the optimal classifier for Batik classification.

SVM and k-NN are well-known and powerful classifiers that can simultaneously handle many attributes and large data [17][18][19][20]. Both are classifiers based on statistical theory. SVM can avoid over-fitting and local minima; on the other hand, k-NN is simpler but more sensitive to noise. Accordingly, this study attempts to develop Batik classification using MTH as the feature extraction method. Furthermore, this study compares the k-NN and SVM classifiers to seek the best classification method for Batik classification. The rest of this paper is organized as follows: Section 2 presents the dataset used to test the performance; Section 3 presents the MTH scheme; Section 4 evaluates the performance of MTH using k-NN as classifier and compares it with MTH using SVM as classifier; and Section 5 presents the conclusion of the study.

Dataset

In this study, the dataset consists of 300 images divided into 50 classes; hence, each class has 6 images. Those classes refer to the types of Batik motif. The size of each image is 128x128 pixels. Figure 1 shows the images utilized in this study.

Multi Texton Histogram (MTH)

The extraction process used in this study is the Multi Texton Histogram (MTH). MTH is used to extract features from an image utilizing the texton idea.
Feature extraction of edge orientation

Edge orientation feature extraction is one of the important processes in pattern recognition [11]. There are many methods used for edge feature extraction. This study uses the Sobel operator as the edge feature extraction method because it can reduce noise before performing edge detection calculations compared to other gradient operators or other edge detection methods. Thus, it is considered to be more efficient and simpler. In addition, the Sobel operator gave optimal performance in a previous study [11]. The Sobel operator gives emphasis to the neighboring pixels of a pixel, i.e., it assigns a high weight value to neighboring pixels. Therefore, the effect of neighboring pixels will differ according to their location relative to the pixel at which the gradient is calculated. The gradient is a function calculating the intensity change when an image is viewed as a collection of continuous intensity functions. The results obtained from the Sobel operator are an orientation vector and a magnitude, which are afterwards quantized into 18 bins. At this stage, 18 edge orientation features are generated.

Feature extraction of color

Color is very useful information for the object detection process. Human vision can sense three basic colors, namely red, green, and blue, and their combinations. Moreover, RGB is the color space commonly used in digital processing. In a previous study, the RGB color space gave optimal performance; hence, the RGB color space is used in this study. Each color component of the RGB color space is extracted and quantized into 4 bins of color intensity: R = 4 bins, G = 4 bins, and B = 4 bins. The combination of those color bins produces 64 color variations obtained from 4x4x4 bins. Hence, 64 color features are generated.

Texton detection

The texton was introduced by Julesz as a microstructure of an image [10]. The MTH algorithm takes its basic idea from texton theory. MTH uses four types of texton to detect the microstructure of an image [11]. Figure 2 and Figure 3 show the four-type and six-type textons used for texture detection, respectively. The 2x2 grid texton is used in this study, and each grid cell is marked with v1, v2, v3, and v4. This is employed to increase the texture difference, since the texton gradient only provides texture boundaries [11]. Each texton type is convoluted from left to right and from top to bottom through two pixels. A texture is detected when two pixels have the same intensity value at the corresponding grid positions of the texton; hence, the grid can be called a texton. Those four types of texton will produce two histograms, a color feature histogram and an edge orientation feature histogram.
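To make the texton idea concrete, here is a minimal sketch of the MTH feature construction: RGB values are quantized to 4 levels per channel (64 combined colors), Sobel edge orientation is quantized to 18 levels, and a simplified 2x2 texton detector accumulates co-occurring values into a concatenated 82-bin histogram. The channel-combination index (16R + 4G + B), the uniform binning, and the single diagonal texton pattern are illustrative assumptions; the paper's four- and six-texton templates are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def quantize_color(img_rgb):
    """Quantize each RGB channel to 4 levels and combine into 64 color codes.
    The 16*R + 4*G + B indexing is an assumed convention consistent with
    the 4x4x4 = 64 bins described in the paper."""
    q = img_rgb.astype(int) // 64                       # 0..255 -> 0..3 per channel
    return 16 * q[..., 0] + 4 * q[..., 1] + q[..., 2]   # values in 0..63

def quantize_orientation(gray, n_bins=18):
    """Quantize Sobel edge orientation (0-180 degrees) into 18 levels."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    theta = np.degrees(np.arctan2(gy, gx)) % 180.0
    return np.minimum((theta / 180.0 * n_bins).astype(int), n_bins - 1)

def texton_histogram(level_map, n_levels):
    """Simplified texton counting: slide a 2x2 window and, whenever the two
    pixels on one diagonal are equal (a stand-in for one texton template),
    increment the histogram bin of that level."""
    hist = np.zeros(n_levels, dtype=float)
    a = level_map[:-1, :-1]
    d = level_map[1:, 1:]
    equal = a == d
    np.add.at(hist, a[equal], 1.0)
    return hist / max(hist.sum(), 1.0)

def mth_features(img_rgb):
    """Concatenate 64 color-texton bins and 18 orientation-texton bins -> 82 features."""
    gray = img_rgb.mean(axis=2)
    color_hist = texton_histogram(quantize_color(img_rgb), 64)
    edge_hist = texton_histogram(quantize_orientation(gray), 18)
    return np.concatenate([color_hist, edge_hist])       # shape (82,)

# usage on a random stand-in for a 128x128 Batik image
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(mth_features(img).shape)   # (82,)
```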
Figure 4 illustrates how the convolution of a texton works. Initially, the T1 type of texton is chosen. Afterwards, the texton is convoluted over the image in RGB color space that has been quantized. When two pixels have the same intensity value at the corresponding grid positions of T1, the frequency of occurrence of that intensity value of a color component in the color feature histogram is increased by one. After all types of texton are convoluted, the information obtained from the histograms is called the texton feature. Once extracted, textures are depicted as vector values in two histograms, the color feature histogram and the edge orientation feature histogram. A histogram is the distribution of pixel values in an image. Furthermore, the color feature histogram and edge orientation feature histogram are concatenated into one histogram. Therefore, the concatenated histogram has 82 features, obtained from 64 color features and 18 edge orientation features. Figure 5 shows the workflow of MTH.

Figure 7 shows the color quantization image. Quantization is a process used for decreasing the color intensity value. In this study, each component value of the RGB color space is quantized into 4 bins of color intensity: R = 4 bins, G = 4 bins, and B = 4 bins. The combination of those color bins produces 64 color variations obtained from 4x4x4 bins. Figure 7 shows the color quantization of the images in Figure 6, while Figure 8 shows the merged image of the RGB channels. To perform the quantization, the image is separated into its three color components of red, green, and blue. After the quantization process is completed, those color components are recombined using equation (1).

Figure 9. Color Histogram.

A texton is extracted when two pixels have the same intensity value at the corresponding grid positions of a texton type. The result of four-type texton extraction on the color quantization image is the color feature histogram, which is presented in Figure 9.

Edge orientation extraction

The original image must be converted to a grayscale image to extract the edge orientation feature. Figure 10 illustrates the grayscale image of Figure 6. The grayscale image obtained from the previous process is converted to a Sobel image using the Sobel operator. The Sobel image contains the edge orientation information of objects in an image. Figure 11 shows the Sobel image of Figure 10. The range of the edge orientation information obtained from the previous process is from 0 to 180. In this study, the edge orientation information is quantized into 18 bins. Thus, at this stage, 18 edge orientation features are generated. Figure 12 shows the edge orientation quantization image of Figure 11.

Merged histogram

The histograms obtained from texton extraction on the color and edge orientation features are concatenated. Therefore, the concatenated histogram has 82 features, obtained from 64 color features and 18 edge orientation features. Those features are used in the classification process. Figure 14 depicts the merged histogram of the color histogram of Figure 9 and the edge orientation histogram of Figure 13.

Experiment scenario of cross validation

Cross validation is executed to assess the consistency of classification performance [21][22][23][24][25].
The cross-validation experiment is completed by varying the data used as training data and testing data. The variation is obtained by randomizing the images, alternating each image between training data and testing data. It is intended that all data take turns as training data and testing data. Table 1 illustrates the cross-validation experiment results using six folds. From Table 1 to Table 4, the performance of MTH using the six-type texton is better than that of the four-type texton. The average accuracy of the six-type texton is 6.83% higher than the average accuracy of the four-type texton using the k-NN classifier. When using the SVM classifier, the accuracy of the six-type texton is 8.5% higher than the accuracy of the four-type texton. The six-type texton has more texton variations than the four-type texton. Thus, the variation of texture in an image can be better defined using the six-type texton, and the missing information in an image can be reduced.

Additionally, the best-performing distribution scenario of training data and testing data is the 70% training data and 30% testing data scenario. The average accuracy from the 70/30 distribution is 73%. Meanwhile, the worst-performing distribution scenario is 50% training data and 50% testing data; the average accuracy from the 50/50 distribution is only 56.98%. It can be implied that, to achieve superior accuracy, the number of training data used for classification should be higher than the number of testing data.

The experiment results show that the highest accuracy achieved using the k-Nearest Neighbor (k-NN) algorithm is 70% and 82% using 4 textons and 6 textons, respectively. Conversely, the highest accuracy that can be achieved using the Support Vector Machine (SVM) algorithm is 64% and 76% using 4 textons and 6 textons, respectively. In this study, the performance of the k-NN classifier is better than that of the SVM classifier: the average accuracy of the k-NN classifier is 5.09% higher than that of the SVM classifier. The accuracy achieved in this study is plausible for Batik classification. It can be said that the Multi Texton Histogram (MTH), k-NN, and SVM are significantly applicable in Batik image classification. The MTH algorithm can represent Batik motifs well in texton information. The total number of features used during the classification process is 82 features: 64 color features and 18 edge orientation features. This is quite efficient for representing the information in an image, especially for texture images. Furthermore, it shows that MTH can extract features well in an image dataset with large-scale variation.

Conclusion

This study attempts to develop a system that can help people classify Batik motifs using the Multi Texton Histogram (MTH) for feature extraction. In addition, this study compares the k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM) classifiers to seek the best classification method for Batik classification. The experiment results show that the average accuracy of the k-NN classifier is 5.09% higher than the average accuracy of the SVM classifier. Moreover, the highest accuracies that can be achieved by the k-NN and SVM classifiers are 82% and 76% using the six-type texton, respectively. This study has successfully reached satisfying accuracy for Batik classification. Consequently, MTH, k-NN, and SVM are well suited to be applied in Batik image classification.
Experiment scenario of equal distribution 50/50

In this experiment, 300 Batik images are grouped into equal numbers, 50% as training data and 50% as testing data. Thus, three images in each class are employed as training data, and the three other images in each class are used as testing data. Table 2 shows the experiment results of the equal distribution scenario.

Table 2. The average accuracy of the equal distribution scenario.

Experiment scenario of 60/40 distribution

In this experiment, 300 Batik images are divided into 60% as training data and 40% as testing data. Thus, four images in each class are used as training data, and the two other images in each class are used as testing data. Table 3 shows the experiment results of the 60/40 distribution scenario.

Table 3. The average accuracy of the 60/40 distribution scenario.

Experiment scenario of 70/30 distribution

In this experiment, 300 Batik images are grouped into 70% as training data and 30% as testing data. Thus, five images in each class are used as training data, and one image in each class is used as testing data. Table 4 shows the experiment results of the 70/30 distribution scenario.
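The train/test distribution scenarios and the k-NN versus SVM comparison described above can be sketched with scikit-learn. The split ratios follow the paper, but the feature matrix is a random placeholder and the classifier hyperparameters (k = 1, RBF kernel, C = 10) are assumptions, since they are not specified here.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# stand-in for the 300 x 82 MTH feature matrix (50 classes, 6 images each)
rng = np.random.default_rng(0)
X = rng.random((300, 82))
y = np.repeat(np.arange(50), 6)

def compare_classifiers(X, y, test_size):
    """Average k-NN and SVM accuracy over repeated stratified splits."""
    split = StratifiedShuffleSplit(n_splits=5, test_size=test_size, random_state=0)
    accs = {"kNN": [], "SVM": []}
    for train_idx, test_idx in split.split(X, y):
        for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=1)),
                          ("SVM", SVC(kernel="rbf", C=10))]:
            clf.fit(X[train_idx], y[train_idx])
            accs[name].append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return {k: float(np.mean(v)) for k, v in accs.items()}

# 70/30, 60/40 and 50/50 scenarios as in the paper
for ts in (0.3, 0.4, 0.5):
    print(f"{round((1 - ts) * 100)}/{round(ts * 100)}", compare_classifiers(X, y, ts))
```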
2,812.6
2018-02-24T00:00:00.000
[ "Computer Science" ]
Fastest Frozen Temperature for a Thermodynamic System

For a thermodynamic system obeying both the equipartition theorem at high temperature and the third law at low temperature, the curve showing the relationship between the specific heat and the temperature has two common behaviors: it terminates at zero when the temperature is zero Kelvin and converges to a constant as the temperature becomes higher and higher. Since it is always possible to find the characteristic temperature $T_{C}$ to mark the excited temperature at which the specific heat almost reaches the equipartition value, it is reasonable to find a temperature in the low temperature interval, complementary to $T_{C}$. The present study reports a possibly universal existence of such a temperature $\vartheta$, defined as that at which the specific heat falls fastest along with decrease of the temperature. For the Debye model of solids, above the temperature $\vartheta$ the Debye law starts to fail.

I. INTRODUCTION

In classical statistical mechanics, we have the equipartition theorem that applies for a system of many particles that obey classical mechanics. Suppose the energy $\varepsilon = \varepsilon_a + \varepsilon_b + \varepsilon_c + \ldots$ of a molecule is expressed as a sum of independent terms $\varepsilon_a, \varepsilon_b, \varepsilon_c, \ldots$, each referring to a different degree of freedom, and all these terms of the energy are expressed as squared terms of type $\alpha_j \xi_j^2$, with $\xi_j$ being the $j$-th generalized position or momentum and $\alpha_j$ being the $j$-th coefficient independent of all $\xi_j$ ($j = 1, 2, 3, ..., f$); then we can prove that the average energy per molecule is $u \equiv \langle\varepsilon\rangle = f k_B T/2$, where $k_B$ is Boltzmann's constant and $T$ is the temperature in Kelvins. This embodies the equipartition theorem, which formally states that when a large number of indistinguishable, quasi-independent particles whose energy is expressed as the sum of $f$ squared terms come to equilibrium, the average internal energy per particle is $f$ times $k_B T/2$. [1][2][3][4][5] For instance, a monatomic ideal gas has only three translational degrees of freedom, so $u = 3k_B T/2$. Diatomic gases have three translational degrees of freedom, two rotational ones, and two vibrational ones, and we should have $u = 7k_B T/2$; but at room temperature, only the translational and rotational degrees of freedom are activated, so $u = 5k_B T/2$ is in agreement with experiments. The vibrational degrees of freedom within a diatomic molecule are frozen out at room temperature. Generally speaking, the equipartition theorem is not always valid; it applies for a degree of freedom that can be freely excited. At a given temperature $T$, there may be certain degrees of freedom which are more or less frozen due to quantum mechanical effects. [1][2][3][4][5] In order to capture the essential cause of possible violation of the equipartition theorem, we can construct a model system consisting of indistinguishable, quasi-independent particles, each of which has continuous energy levels except the ground state one, and then explore the behavior of its heat capacity. The key finding is that, when the temperature rises from zero Kelvin, there is a definite temperature at which the heat capacity increases most rapidly, and vice versa. In other words, once the temperature is lower than this one, the relevant degree of freedom is almost frozen; and in this sense we can take this fastest frozen temperature as the frozen temperature itself as well. In the present paper, we confine ourselves to the case of non-degenerate gases such that Boltzmann statistics applies.
In textbooks, for a specific degree of freedom the frozen criterion is qualitatively expressed as $T \ll T_C$, where $T_C$ is the characteristic temperature defined via $k_B T_C \equiv \epsilon_1 - \epsilon_0$, (1) in which $\epsilon_0$ and $\epsilon_1$ are, respectively, the ground-state and first excited energy of the degree of freedom under study. This characteristic temperature $T_C$ is the temperature at which the degree of freedom is almost activated and makes a significant contribution toward the specific heat of the system. For instance, for the vibrational degrees of freedom, the value of the heat capacity at $T_C$ is "about 93 per cent of the equipartition value". [2] For simplicity, in the rest of the present paper, we will use the specific heat defined by $c \equiv \partial u/\partial T$. In our approach, all parameters other than the temperature $T$ remain unchanged, so the partial derivative $\partial$ above can be replaced by the ordinary derivative $d$. The third law of thermodynamics implies that all degrees of freedom are completely frozen at zero Kelvin, [1-5] $\lim_{T \to 0} c(T) = 0$. We will show that, for a given degree of freedom, the existence of the characteristic temperature $T_C$ (1) is accompanied by a fastest frozen temperature $\vartheta$, defined by the maximum of the derivative $dc/dT$. The paper is organized as follows. In Section II, we present a theorem based on a model system to explicitly demonstrate the existence of the fastest frozen temperature $\vartheta$. In Section III, some examples are given. The final Section IV concludes this study. II. EXISTENCE OF FASTEST FROZEN TEMPERATURE ϑ: A THEOREM Construct a model system that has a kind of degree of freedom whose energy levels are simply continuous except for the ground-state one, which is isolated from the rest; i.e., the spacing between $\epsilon_1, \epsilon_2, \epsilon_3, \dots$ is negligible, but $\epsilon_1$ is appreciably different from the ground-state energy $\epsilon_0$, which can be conveniently chosen to be zero, $\epsilon_0 = 0$. The density of states can be taken as $A\,\epsilon^{(D-2)/2}$, in which $D = 1, 2, 3, 4, 5, 6$ can be understood as the dimension of the space, and the coefficient $A$ can be set to unity. Utilizing Boltzmann statistics, the partition function is $Z = e^{-\beta\epsilon_0} + A\int_{\epsilon_1}^{\infty} \epsilon^{(D-2)/2} e^{-\beta\epsilon}\,d\epsilon$, with $\beta = 1/(k_B T)$, which evaluates in terms of $\Gamma(s, x) = \int_x^{\infty} t^{s-1} e^{-t}\,dt$, the incomplete gamma function, and $\Gamma(s) = \Gamma(s, 0)$, the ordinary gamma function. This problem is analytically tractable, but the relevant expressions are lengthy. The energy per particle and the specific heat $c$ are determined, respectively, by $u = -\partial \ln Z/\partial\beta$ and $c = du/dT$; we do not explicitly show their expressions. We compute $c(T)$ and $dc(T)/dT$ for $D = 1, 2, 3, 4, 5, 6$ and find that they are similar. Thus, we plot $c(T)$ and $dc(T)/dT$ for $D = 3$ in Fig. 1 only. The values of the fastest frozen temperature $\vartheta$ and the ratios of the two specific heats $c(\vartheta)/c(T_C)$ are listed in the following table. Since the values of $c(\vartheta)/c(T_C)$ are all significantly smaller than 1, we can therefore take $\vartheta$ as a quantitative criterion for the frozen temperature. The results above can be summarized in a theorem: for a degree of freedom whose energy levels are continuous except for the ground-state one, there exists a fastest frozen temperature in the course of its freezing. III. FASTEST FROZEN TEMPERATURE ϑ: EXAMPLES In this section, we present some examples to demonstrate the theorem above. Example 1: ϑ of vibrational degrees of freedom for a diatomic gas. The energy levels are $\epsilon_n = (n + 1/2)\hbar\omega$ ($n = 0, 1, 2, \dots$), where $\hbar$ is the reduced Planck constant and $\omega$ is the vibrational frequency.
The partition function is $Z = \sum_{n=0}^{\infty} e^{-\beta\epsilon_n} = e^{-\beta\hbar\omega/2}/(1 - e^{-\beta\hbar\omega})$. [2] With the characteristic temperature $T_C = \hbar\omega/k_B$, we have the mean vibrational energy $u$ and the specific heat $c$, respectively, $u = \hbar\omega/2 + \hbar\omega/(e^{T_C/T} - 1)$ and $c = k_B (T_C/T)^2 e^{T_C/T}/(e^{T_C/T} - 1)^2$. The specific heat at $T_C$ is $c(T_C) = 0.921\,k_B$, while $\lim_{T\to\infty} c(T) = k_B$. The fastest frozen temperature is $\vartheta = 0.223\,T_C$, at which $c(\vartheta) = 0.231\,k_B$, from which we see that the vibrational degrees of freedom are thermally depressed. The value of $T_C$ for different diatomic gases is of order $10^3$ K, [2] and we then have $\vartheta \sim (200-300)$ K, so the vibrational degrees of freedom at room temperature are almost frozen. We plot the specific heat $c(T)$ and $dc/dT$ against temperature in Fig. 2. Example 2: ϑ of rotational degrees of freedom for a heteronuclear diatomic gas. The energy levels are $\epsilon_l = l(l+1)\hbar^2/2I$ ($l = 0, 1, 2, \dots$), where $I$ is the moment of inertia. The partition function is $Z = \sum_{l=0}^{\infty} (2l+1)\, e^{-l(l+1)\hbar^2/(2Ik_BT)}$. [2] Note that in textbooks [1-5] the characteristic temperature is $T_C = \hbar^2/(2Ik_B)$ rather than definition (1); we follow this convention. The mean rotational energy $u$ and the specific heat $c$ are determined, respectively, by $u = -\partial\ln Z/\partial\beta$ and $c = du/dT$. The numerical calculations give both $c(T_C) = 1.07\,k_B > \lim_{T\to\infty} c(T) = k_B$ and the fastest frozen temperature $\vartheta = 0.390\,T_C$, at which $c(\vartheta) = 0.452\,k_B$. Once $T$ decreases below $\vartheta$, the rotational degrees of freedom are rapidly frozen out. The value of $T_C$ for HCl, for example, is about 15 K, [2] and we have $\vartheta = 12$ K, so the rotational degrees of freedom at room temperature are freely excited. We plot the specific heat $c(T)$ and $dc/dT$ against temperature in Fig. 3. Example 3: ϑ of rotational degrees of freedom for hydrogen molecule gas, a mixture of ortho- and para-hydrogen. The mean rotational energy $u$ and the specific heat $c$ are again determined, respectively, by $u = -\partial\ln Z/\partial\beta$ and $c = du/dT$. We still follow the convention of taking the rotational characteristic temperature $T_C = \hbar^2/(2Ik_B)$, as done in standard textbooks. [1-5] The numerical calculations give both the fastest frozen temperature $\vartheta = 1.33\,T_C$, at which $c(\vartheta) = 0.305\,k_B$, and $c(T_C) = 0.117\,k_B \ll k_B = \lim_{T\to\infty} c(T)$. From these results, we see that this $T_C$ is really the frozen temperature rather than the excited one. The specific heat $c(T)$ and $dc/dT$ are plotted against temperature in Fig. 4. The value of $T_C$ for H$_2$ is 85 K, and the excited temperature is much higher. [2] Note that at $T = 0$ K hydrogen contains mainly para-hydrogen, which is more stable, and in general the concentration ratio of ortho- to para-hydrogen in thermal equilibrium is given in Ref. [2]; from it we obtain $r(T_C) = 7.44$, $r(\vartheta) = 4.74$, and $r(300) = 3.01$. These numbers indicate again that both $\vartheta$ and $T_C$ are actually frozen temperatures rather than activated ones, while room temperature $T \approx 300$ K $= 3.53\,T_C$ is the activated temperature at which the ratio $r \approx 3$, and thus "the name ortho- is given to that component which carries the larger statistical weight". [2] In fact, our definition of the characteristic temperature (1) gives $3T_C = \epsilon_2 - \epsilon_0$ and $5T_C = \epsilon_3 - \epsilon_1$, respectively, for ortho- and para-hydrogen. Other results are shown in Fig. 4. We leave some exercises to the reader: determine the fastest frozen temperatures for, e.g., a two-level system, a gas of deuterium molecules, and the model system (5)-(6) with fractal dimensions $D$. The results are similar. IV. CONCLUSION The known frozen criterion for a degree of freedom is qualitatively expressed as $T \ll T_C$, where $T_C$ is usually the characteristic temperature defined via $k_B T_C \equiv \epsilon_1 - \epsilon_0$, in which $\epsilon_0$ and $\epsilon_1$ are, respectively, the ground-state and first excited energy of the given degree of freedom.
This characteristic temperature $T_C$ is actually the excited temperature for the degree of freedom, because the heat capacity there is at least 90% of the equipartition value. A well-defined temperature at which the specific heat falls fastest as the temperature decreases is identified, which can be taken as the quantitative criterion of the frozen temperature itself. It is especially useful for some systems, such as a mixture of ortho- and para-hydrogen molecules, in which the characteristic temperature is hard to define. Figure 4 caption: Curves for the specific heat (solid) and its derivative with respect to temperature (dashed) for the rotational degrees of freedom of a 1:3 mixture of para-hydrogen and ortho-hydrogen molecules; the auxiliary dotted lines and arrows are a guide for the eye. The equipartition value of the specific heat is $1\,k_B$, and $c(T_C) = 0.117\,k_B$ is much smaller than $1\,k_B$. The derivative $dc/dT$ has a global maximum at $\vartheta = 1.33\,T_C$, at which the specific heat is $c(\vartheta) = 0.305\,k_B$. The conventional definition of the characteristic temperature, $T_C = \hbar^2/(2Ik_B)$, is nothing but a unit; once we use our definition (1), we have, for ortho- and para-hydrogen respectively, $c(\vartheta_{\mathrm{ortho}}) = 0.907\,k_B$ and $c(\vartheta_{\mathrm{para}}) = 0.999\,k_B$.
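The procedure used throughout the paper — compute $c(T)$, differentiate, and take the temperature at which $dc/dT$ is maximal — is straightforward to reproduce numerically. The following is a minimal sketch of such a check (my own script, not the authors' code), working in units $k_B = 1$, using the standard harmonic-oscillator specific heat for Example 1 and, for the Section II model, a continuum starting at $\epsilon_1 = 1$ with density of states $\epsilon^{(D-2)/2}$ and $A = 1$; the temperature grids are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

def theta_from_c(T, c):
    """Fastest frozen temperature: the temperature at which dc/dT is maximal."""
    return T[np.argmax(np.gradient(c, T))]

# Example 1: vibrational (harmonic-oscillator) specific heat, in units T_C = 1
T = np.linspace(0.05, 1.5, 3000)
x = 1.0 / T
c_vib = x**2 * np.exp(x) / np.expm1(x) ** 2
print("vibration: theta/T_C ~", round(theta_from_c(T, c_vib), 3))   # text quotes 0.223

# Sec. II model, D = 3: isolated ground state at 0 plus a continuum above eps_1 = 1
def lnZ(T, D=3):
    val, _ = quad(lambda e: e ** ((D - 2) / 2) * np.exp(-e / T), 1.0, np.inf)
    return np.log(1.0 + val)

T2 = np.linspace(0.05, 2.0, 600)
lnZs = np.array([lnZ(t) for t in T2])
u = T2**2 * np.gradient(lnZs, T2)        # u = k_B T^2 d(ln Z)/dT
c_model = np.gradient(u, T2)             # specific heat c = du/dT
print("model D=3: theta (in units of eps_1/k_B) ~", round(theta_from_c(T2, c_model), 3))
```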
2,878.4
2020-01-09T00:00:00.000
[ "Physics" ]
Lamin B Phosphorylation by Protein Kinase Cα and Proteolysis during Apoptosis in Human Leukemia HL60 Cells* Protein phosphorylation plays an important role in signal transduction, but its involvement in apoptosis still remains unclear. In this report, the p53-null human leukemia HL60 cells were used to investigate phosphorylation and degradation of lamin B during apoptosis. We found that lamin B was phosphorylated within 1 h after addition of the DNA topoisomerase I inhibitor, camptothecin, and that lamin B phosphorylation preceded lamin B degradation and DNA fragmentation. Using a cell-free system we also found that cytosol from camptothecin-treated cells induced lamin B phosphorylation and degradation in isolated nuclei from untreated HL60 cells. Lamin B phosphorylation was prevented by the protein kinase C (PKC) inhibitor 7-hydroxystaurosporine (UCN-01) but not by the Cdc2 inhibitor, flavopiridol. Phosphorylation of lamin B was inhibited by immunodepletion of PKCα from activated cytosol and was restored by addition of purified PKCα. PKCα activity also increased rapidly as lamin B was phosphorylated after initiation of the apoptotic response in HL60 cells. These data suggest that lamin B is phosphorylated by PKCα and proteolyzed before DNA fragmentation in HL60 cells undergoing apoptosis. The nuclear lamins are karyophilic proteins located at the nucleoplasmic surface of the inner nuclear membrane where they assemble in a polymeric structure referred to as the nuclear lamina (for review, see Refs. 1-3). Lamins belong to the family of intermediate filaments, which share a tripartite organization consisting of a central α-helical rod domain of conserved size, flanked by N- and C-terminal non-α-helical end domains of variable size and sequence (see Fig. 9). The lamina has been suggested to serve as a major chromatin anchoring site of nuclear scaffold-associated regions during interphase and possibly to be involved in organizing higher order chromatin domains. The lamina is a dynamic structure regulated by phosphorylation. Phosphorylation by p34cdc2 kinase is key to the dissolution of the nuclear lamina during mitosis. Other lamin kinases include mitogen-associated protein kinases, cAMP-dependent protein kinase (PKA), and protein kinase C (PKC) (1-3).
Major PKC phosphorylation sites have been mapped to serine residues located in close proximity to the nuclear localization signal in the C-terminal non-α-helical region, and phosphorylation of these residues interferes with the nuclear transport of lamin B (2). The p34cdc2 phosphorylation sites are on both sides of the central α-helical rod domain. While many mammalian cells contain three distinct lamins (lamins A, B, and C), human leukemia HL60 cells express primarily lamin B (4). Lamin proteolysis during apoptosis has been reported in various cell lines treated with different stimuli. In human leukemia HL60 cells treated with etoposide (VP-16) (5) or camptothecin (CPT) (6), apoptosis is accompanied by diminished levels of lamin B. Etoposide is a topoisomerase II inhibitor (7) and CPT a topoisomerase I inhibitor (8). Both drugs are effective anti-cancer agents. Lamin B1 degradation was also reported to precede DNA fragmentation in apoptotic thymocytes (9) and in HeLa cells treated with anti-CD95 antibody (10). Lamin A and B proteolysis into 45-kDa fragments is also observed in apoptosis induced by serum starvation of ras-transformed primary rat embryo cells (11) and in reconstituted cell-free systems (6,12). The site of lamin A and B cleavage yielding the 45-kDa fragment has recently been mapped to a conserved aspartate residue at position 230 (13,14) corresponding to a consensus sequence for caspases. Furthermore, lamin A has been shown to be cleaved by caspase 6 (Mch-2α) but not caspase 3 (CPP32/YAMA) (13,15). The death-related cysteine proteases of the caspase family play a central role in the execution phase of apoptosis (16-19). Each caspase selectively cleaves a subset of cellular proteins. For instance, poly(ADP-ribose) polymerase is preferentially cleaved by caspase 3 (CPP32/Yama) (20,21), and lamin A can be cleaved by caspase 6 (Mch-2) (13,15). Interestingly, recent observations demonstrated that overexpression of mutant lamins A or B resistant to caspase cleavage delayed DNA fragmentation, suggesting that lamin cleavage participates in the activation of DNA fragmentation and nuclear apoptosis (14). Thus, lamins are presently the only caspase substrates known to be directly involved in the execution phase of apoptosis. Protein phosphorylation is probably important in regulating apoptosis (22). For instance, unscheduled activation of p34cdc2 kinase, one of the lamin kinases (2), is associated with cytotoxic T lymphocyte-mediated apoptosis (23) and precedes CPT- and DNA damage-induced apoptosis in HL60 cells (24). In the present study, we investigated lamin B phosphorylation and degradation during apoptosis in response to camptothecin in HL60 cells and in a previously described cell-free system (6,25-27). The identity of the lamin B protease is not known. Both caspases and the nuclear scaffold-associated serine protease have been suggested as candidate proteases (28,29). MATERIALS AND METHODS Chemicals, Drugs, and Antibodies—CPT, 7-hydroxystaurosporine (UCN-01), and flavopiridol were obtained from the NCI Drug Chemistry and Synthesis Branch. Drugs were freshly dissolved in dimethyl sulfoxide.
Anti-lamin B monoclonal antibody from mouse (101-B7) was purchased from Oncogene Research Products (Cambridge, MA) and antiprotein kinase C␣ (anti-PKC␣) polyclonal antibodies from rabbit was purchased from Santa Cruz Biotechnology Inc. (Santa Cruz, CA). Anti-PKC monoclonal antibody 1.9 and recombinant PKC␣ from baculovirus were purchased from Life Technologies, Inc. The horseradish peroxidase-conjugated anti-mouse immunogloblin secondary antibody was purchased from Amersham Pharmacia Biotech. Cell Culture, Drug Treatment, DNA, and Protein Labeling-Human promyelocytic leukemia HL60 cells were grown in suspension culture in RPMI 1640 medium supplemented with 10% fetal calf serum (Life Technologies, Inc.), 2 mM glutamine, 100 units/ml penicillin, and 100 g/ml streptomycin at 37°C in an atmosphere of 95% air and 5% CO 2 . For filter elution assays, HL60 cells were incubated with [ 14 C]thymidine for 1-doubling time (about 24 h). Cell cultures were then washed with fresh medium twice and chased in isotope-free medium overnight before drug treatment. Unless otherwise indicated, camptothecin treatments were with 5 M. For in vivo phosphorylation, HL60 cells were washed twice in phosphate-free RPMI 1640 medium containing 10% dialyzed fetal calf serum, resuspended in the same medium and incubated with 250 Ci of [ 32 P]orthophosphate/10 7 cells. Following 1-h incubation, the 32 P-labeled cells were washed twice and resuspended in phosphatefree RPMI 1640 with dialyzed serum for 30 min prior to drug treatment. Isolation of Nuclei and Cytosol for Reconstituted Cell-free System Studies-We followed our previously published procedure (25,26). Briefly, untreated and treated cells (5 M camptothecin for 3 h) were spun down, rinsed three times in cold PBS (phosphate-buffered saline), and resuspended at a density of approximately 10 7 cells/ml in nucleus buffer (1 mM KH 2 PO 4 , 150 mM NaCl, 5 mM MgCl 2, 1 mM EGTA, 0.1 mM AEBSF, 0.15 unit/ml aprotinin, 1.0 mM Na 3 VO 4 , 5 mM HEPES, pH 7.4, 10% glycerol), including 0.3% Triton X-100. After incubation at 4°C for 10 min and gentle agitation, cellular mixes were centrifuged at 2,000 ϫ g for 10 min, rinsed once by centrifugation/resuspension in nucleus buffer without Triton X-100, and used as nuclei suspensions at a density of 1-2 ϫ 10 7 nuclei/ml. Supernatants were centrifuged at 10,000 ϫ g for 10 min and used as cytosol. [ 14 C]Thymidine-labeled cells were used to prepare nuclei for filter elution assay. Filter Elution Assays for Measurement of DNA Fragmentation-DNA fragmentation related to apoptosis was measured by filter elution as described previously (25,30). Briefly, reaction mixtures were deposited onto protein-adsorbing filters (Metricel, Gelman Science, Ann Harbor, MI) and washed with 3 ml of nucleus buffer. This fraction (W) was collected. Lysis was performed with 5 ml of LS10 (2 M NaCl, 0.04 M Na 2 EDTA, 0.2% Sarkosyl, pH 10) followed by washing with 5 ml of 0.02 mM Na 2 EDTA, pH 10. The lysis (L) and EDTA (E) fractions were collected. All fractions (W, L, and E) and filters (F) were counted by liquid scintillation. DNA fragmentation was calculated as the percent of DNA eluting from the filter as: percent DNA fragmentation ϭ 100 ϫ (W ϩ L ϩ E)/(W ϩ L ϩ E ϩ F). All experiments were repeated at least two or three times. 
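The elution-based fragmentation measure defined above is a simple ratio of counts; a small helper expressing it is sketched below. The function name and the example counts are illustrative only, not measured values from this study.

```python
def percent_dna_fragmentation(w, l, e, f):
    """Percent of DNA eluting from the filter: 100 * (W + L + E) / (W + L + E + F).

    w, l, e, f: liquid-scintillation counts of the wash (W), lysis (L),
    and EDTA (E) fractions and of the filter (F), as defined in the assay.
    """
    eluted = w + l + e
    return 100.0 * eluted / (eluted + f)

# illustrative counts (not data from this study)
print(round(percent_dna_fragmentation(w=1200, l=3400, e=800, f=14600), 1))  # -> 27.0
```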
Immunoblotting for Lamin B-After drug treatment, cells were washed in PBS and resuspended in reducing loading buffer (62.5 mM Tris-HCl, pH 6.8, 6 M urea, 10% glycerol, 2% SDS, 0.003% bromphenol blue, 5% 2-mercaptoethanol) and sonicated for 20 s. The lysates containing 2.5 ϫ 10 5 cells were heated at 65°C for 15 min and then loaded in 12% SDS-polyacrylamide gels (precast gel from NOVEX, San Diego, CA). After electrophoresis, proteins were electrophoretically transferred from the gels to polyvinylidene difluoride membranes (Immobilon-P from Millipore Co., Bedford, MA) according to the manufacturer's protocol. Membranes were incubated at room temperature for 1 h in the primary antibody solutions after blocking in 5% non-fat dry milk solution for 1 h, followed by 1-h incubation with secondary antibody. Bands were visualized by enhanced chemiluminescence (SuperSignal, Pierce). In Vivo Lamin B Phosphorylation-After drug treatment, 32 P-labeled HL60 cells (10 7 cells/sample) were washed in phosphate-free RPMI 1640 medium without serum once, and the cell pellets were lysed in buffer A (PBS containing 1% Nonidet P-40, 1 g/ml leupeptin, 5 mM NaF, 1 mM Na 3 VO 4 , 2 mM AEBSF, 4 units/ml aprotinin, and 1% bovine serum albumin) with 0.4% SDS before sonication. The cell lysates were centrifuged at 14,500 rpm for 15 min, and the supernatants were mixed with 1.5 g of anti-human lamin B antibody and 20% protein G-Sepha- rose suspensions in lysis buffer followed by overnight mixing at 4°C. At the end of incubation, the immune complex was washed in buffer A and buffer A without bovine serum albumin. The immune complex was boiled for 10 min after adding 3 ϫ SDS loading buffer. Samples were analyzed in 12% SDS-PAGE. Protein gels were dried up and subjected to autoradiography after being dried up. In Vitro Lamin B Phosphorylation in Cell-free System-The nuclei from untreated HL60 cells (1.5 ϫ 10 6 nuclei) were incubated with cytosol in the presence of [␥-32 P]ATP. After incubation, buffer A supplemented with 0.4% SDS was added to the samples before brief sonication. Afterward the procedures were the same as for in vivo lamin B phosphorylation. Immunodepletion of PKC␣-Anti-PKC␣ antibody was incubated with protein G-Sepharose beads at 4°C for 3 h. The beads were collected by centrifugation. After removal of the supernatant, the beads were washed once with nucleus buffer and incubated with cytosol from CPT-treated cells (CPT-cytosol:antibody ϭ 5:1 (v/v)) overnight in a rotator at 4°C. The beads were subsequently pelleted by centrifugation at 10,000 ϫ g. The supernatant was subjected to immunoblotting for PKC␣ and was used as CPT-cytosol immunodepleted of PKC␣. Mock-depleted CPT-cytosol was made just using nuclei buffer to replace antibody. Lamin B Degradation and Phosphorylation in Apoptotic HL60 Cells-HL60 cells are remarkably sensitive to various apoptotic stimuli, including chemotherapeutic DNA-damaging agents (5, 31) such as the topoisomerase I inhibitor camptothecin, protein kinase inhibitors (25,32), and the Golgi poison, brefeldin A (33). Consistent with previous studies (31), Fig. 1A shows that camptothecin induces apoptotic DNA fragmentation in HL60 cells with rapid kinetics. Lamin B protein was cleaved with similar kinetics as the DNA fragmentation, yielding two cleavage bands (Fig. 1B) corresponding to 45-and 32-kDa polypeptides that were detected 3 h after the beginning of drug treatment. 
Since phosphorylation is critical for modulating lamin stability and the nuclear and chromatin structure both in mitosis and interphase (2), we studied lamin B phosphorylation during apoptosis in HL60 cells. As shown in Fig. 2, phosphorylation of the 69-kDa lamin B polypeptide increased rapidly during camptothecin treatment. Three hours after the beginning of treatment, the 32-kDa lamin B cleavage product was also phosphorylated ( Fig. 2A). These results indicate that lamin B phosphorylation occurs early during apoptosis and is associated with its degradation. Cytosol from Apoptotic HL60 Cells Also Induced Lamin B Cleavage and Phosphorylation in Vitro-We next used a cellfree system that we previously established to demonstrate the role of serine proteases in triggering apoptotic DNA fragmentation (25,26,34). Consistent with our previous results, the cytosol from apoptotic HL60 cells induced DNA fragmentation in nuclei from untreated HL60 cells (Fig. 3A). Cytosol from apoptotic cells also cleaved lamin B from naive nuclei to a 45-kDa product (Fig. 3B). Lamin B phosphorylation was then studied in the cell-free system after incubation of nuclei suspensions with cytosols from apoptotic or control cells in the presence of [␥-32 P]ATP. After immunoprecipitation with anti-lamin B antibody, samples were run on SDS-PAGE, and phosphorylated lamin B was analyzed by autoradiography and PhosphorImager (Molecular Dynamics). Fig. 4 shows that cytosol from apoptotic cells enhanced lamin B phosphorylation. Lamin B Cleavage, but Not Phosphorylation, in the Cellfree System Can Be Inhibited by the Serine Protease Inhibitor DCI-We observed previously that DNA fragmentation induced by apoptotic cytosol could be inhibited by the serine protease inhibitor DCI (6). Fig. 5 shows while DCI blocked DNA fragmentation induced by CPT, lamin B cleavage also was abolished. The result suggested that serine protease activation was required for both DNA fragmentation and lamin B degradation. We next asked whether lamin B phosphorylation could be inhibited by DCI. Fig. 5C shows that lamin B phosphorylation was not affected by DCI. This finding suggests that protease activation does not affect lamin B phosphorylation. Investigation of the Lamin B Kinase during Apoptosis in HL60 Cells-Cyclin B/Cdc2 (p34 cdc2 ) kinase is critical for lamin depolymerization during mitosis (2). We found that flavopiridol, a potent Cdk inhibitor (35,36) could not inhibit lamin B phosphorylation induced by cytosol from apoptotic cells even at high concentrations (100 M) (Fig. 6A). We also found that flavopiridol had not effect on either DNA fragmentation or lamin B cleavage in the cell-free system (Fig. 6, B and C). These observations suggested that p34 cdc2 kinase was not responsible for phosphorylation of lamin B by the apoptotic cytosol. Protein kinase C has also been shown to phosphorylate lamin B in interphase cells (37). We first used the PKC inhibitor UCN-01 (38) to test whether lamin B phosphorylation during apoptosis is related to PKC. Fig. 7A shows that lamin B phosphorylation was inhibited by UCN-01 in a dose-dependent manner. An anti-PKC monoclonal antibody, which acts near the active site of PKC and inhibits PKC activity by more than 80% (39), was used next. This antibody strongly suppressed lamin B phosphorylation induced by apoptotic cytosol. Recombinant PKC␣ restored lamin B phosphorylation (Fig. 7B). A third type of experiment was performed to test whether lamin B phosphorylation in apoptotic cells extracts could be linked to PKC␣. 
Fig. 7C shows that after immunodepletion of PKCα from apoptotic cytosol, lamin B phosphorylation was reduced by about 90%. The efficiency of the immunodepletion was tested (Fig. 7C, lower panel) and showed that under these conditions PKCα protein levels were almost undetectable. Together, the results shown in Fig. 7 suggested that PKCα was critical for lamin B phosphorylation by cytosol from apoptotic HL60 cells. Protein Kinase Cα Is Activated with Similar Kinetics as Lamin B Phosphorylation during Camptothecin-induced Apoptosis in HL60 Cells—A recent study showed that PKCα is activated in cytosol from apoptotic HL60 cells (40). We therefore tested whether camptothecin also induces PKCα activation in whole HL60 cells. As shown in Fig. 8, PKCα activity increased rapidly during the first hour after the beginning of camptothecin treatment. DISCUSSION The present study is the first report of lamin B phosphorylation during apoptosis. We found that lamin B is phosphorylated within 1 h after the addition of the apoptotic inducer (camptothecin) and that lamin B phosphorylation persists for several hours as lamin B is being cleaved, and DNA fragmentation and complete apoptosis take place (31). Various kinases are involved in lamin phosphorylation, including p34cdc2 kinase, mitogen-associated kinase, PKC, and PKA (2). (Fig. 9 legend: The C-terminal tetrapeptide referred to as the CaaX box (C = Cys; a = aliphatic amino acid; X = any amino acid) is subject to three successive posttranslational modifications (farnesylation, proteolytic trimming, and carboxymethylation), which are required for association of nuclear lamins with the nuclear membrane (2).) In particular, in mitosis, Cdc2 kinase is believed to play a critical role in lamin phosphorylation and thus disassembly of lamin polymers (2,41). The present data suggest that PKC is critical for lamin B phosphorylation in HL60 cells during apoptosis for the following reasons. First, the PKC inhibitor UCN-01 (38,42) effectively blocked the lamin kinase in vitro, while the cyclin kinase inhibitor flavopiridol (35) was inactive. Second, lamin B phosphorylation was inhibited by a monoclonal antibody directed against the active site of PKC. Third, immunodepletion of apoptotic cell extracts with anti-PKC antibody inhibited lamin B phosphorylation, while addition of excess PKC restored lamin B phosphorylation. Fourth and finally, total PKCα activity increased at the time of lamin B phosphorylation (40). The lamin B protein kinase C phosphorylation sites have been mapped to serines 395 and 405 in HL60 cells following PKC activation by bryostatin (43) (Fig. 9). These sites are adjacent to the highly conserved central α-helical rod domain, which is thought to be responsible for the formation of a highly stable coiled-coil dimer between two lamin molecules. They are also next to the nuclear translocation signal sequence (NLS) (Fig. 9). In the case of chicken lamin B2, phosphorylation by PKC in this region has been shown to alter recognition of this sequence and block nuclear import of newly synthesized lamin polypeptides (2). Recently, PKC-mediated lamin B phosphorylation during interphase has been shown to promote lamin B solubilization and nuclear lamina disassembly (37). Thus, PKC-mediated lamin B phosphorylation during apoptosis is likely to affect nuclear and chromatin structure. Proteolytic lamin degradation is a common and probably functionally important biochemical feature of apoptosis.
It has been observed in all the apoptotic cell systems described to date, including HL60 cells treated with chemotherapeutic agents (5,6,34), activation-driven cell death of T cells (44), thymocyte apoptosis (9), and serum starvation in ras transformed embryo cells (11). Lamin degradation has also been reported in apoptosis induced by drICE in insect cells (45). Interestingly, a recent study of Rao et al. (14) demonstrated that overexpression of lamin A or B delayed nuclear apoptosis and DNA fragmentation in the case of p53-dependent apoptosis in rodent cells. These observations suggest that lamin cleavage plays an active role in the execution phase of apoptosis. Caspase 6 (Mch-2) has been shown to cleave lamin A at the conserved aspartic residue at position 230 (13,14), and the corresponding lamin B cleavage site has been mapped to Asp 231 (14). This site is located in the conserved ␣-helical rod domain (Fig. 9), and its cleavage would produce two polypeptides of 40.3 and 25.9 kDa, respectively. The observed 45-kDa fragment has been shown to correspond to the predicted 40.3 fragment (13,14). Another candidate lamin protease is the nuclear scaffold protease (46), which would be expected to cleave lamin B at tyrosine 377 and to yield two polypeptides of 43.2 and 23 kDa, respectively (Fig. 9). Therefore, it is possible that serine proteases (6) might also be involved in lamin B cleavage during apoptosis to produce the 32-kDa lamin B fragment that we observed in addition to the 45-kDa polypeptide in apoptotic HL60 cells. Since the observed sizes indicate that cleavage of lamin B during apoptosis occurs in the conserved ␣-helical rod domain, which is essential for lamin dimerization (Fig. 9), it is likely that lamin B cleavage should promote the dissolution of the nuclear lamina and affect nuclear condensation. This recent conclusion is consistent with the works of Rao et al. (14) and Lazebnik et al. (12), demonstrating an impairment of apoptotic chromatin condensation upon inhibition of lamin proteolysis. Chromatin condensation and DNA fragmentation might be related to the important function of the nuclear lamina as an anchorage structure for the chromatin scaffold-associated regions, which would organize the chromatin loop structures. Thus, it is possible that chromatin release from the nuclear lamina might facilitate the activity of nucleases and the cleavage and release of chromatin loops during apoptosis.
4,930.8
1998-04-10T00:00:00.000
[ "Biology", "Medicine" ]
Spatial variations of incoming sediments at the northeastern Japan arc and their implications for megathrust earthquakes The nature of incoming sediments is a key controlling factor for the occurrence of megathrust earthquakes in subduction zones. In the 2011 M w 9 Tohoku earthquake (offshore Japan), smectite-rich clay minerals transported by the subducting oceanic plate played a critical role in the development of giant interplate coseismic slip near the trench. Recently, we conducted intensive controlled-source seismic surveys at the northwestern part of the Pacific plate to investigate the nature of the incoming oceanic plate. Our seismic reflection data reveal that the thickness of the sediment layer between the seafloor and the acoustic basement is a few hundred meters in most areas, but there are a few areas where the sediments appear to be extremely thin. Our wide-angle seismic data suggest that the acoustic basement in these thin-sediment areas is not the top of the oceanic crust, but instead a magmatic intrusion within the sediments associated with recent volcanic activity. This means that the lower part of the sediments, including the smectite-rich pelagic red-brown clay layer, has been heavily disturbed and thermally metamorphosed in these places. The giant coseismic slip of the 2011 Tohoku earthquake stopped in the vicinity of a thin-sediment area that is just beginning to subduct. Based on these observations, we propose that post-spreading volcanic activity on the oceanic plate prior to subduction is a factor that can shape the size and distribution of interplate earthquakes after subduction through its disturbance and thermal metamorphism of the local sediment layer. INTRODUCTION The occurrence and magnitude of thrust earthquakes in subduction zones is closely linked to interplate seismic coupling. This coupling, in turn, is generally thought to be related to the surface topography and surface materials that form the incoming oceanic plate. Large geometrical irregularities like seamounts tend to hinder longrange coseismic rupture propagation (Wang and Bilek, 2014). In contrast, thick sediments can smooth out low seafloor relief and result in a homogenized interplate coupling (e.g., Ruff, 1989). Fault zone materials control the mechanical behavior of a plate boundary fault. For example, results from the Integrated Ocean Drilling Program (IODP) Expedition 343 after the 2011 M w 9 Tohoku earthquake (offshore Japan) showed that the giant coseismic slip near the trench (>50 m) occurred within a thin smectiterich clay layer at the plate boundary (Chester et al., 2013). Because smectite is an extremely weak mineral whose presence can dramatically change both the static and dynamic friction along a fault, the presence of an ultraweak smectite-rich clay layer is now thought to be a prerequisite for giant coseismic slip (Ujiie et al., 2013) like that observed at Tohoku. Incoming sediments of the northwestern Pacific plate are generally divided into three parts: the lowermost sediments are a chert unit, overlain by thin pelagic red-brown sediments, with the top unit a thick hemipelagic sediment layer (Shipboard Scientific Party, 1980;Moore et al., 2015). Mineralogical analyses of drilling cores from both IODP Expedition 343 (post-subduction) and Deep Sea Drilling Project Site 436 (pre-subduction) show that the origin of smectite at the plate boundary fault is from pelagic redbrown clay within the incoming sediments (Kameda et al., 2015;Moore et al., 2015). 
Thus, the composition of incoming sediments is also a key factor shaping the occurrence of megathrust earthquakes. In the past, due to relatively poor seismic coverage, spatial variations in incoming sediments have not been well constrained. Recently, we conducted intensive multichannel seismic (MCS) reflection surveys and wide-angle seismic reflection and refraction surveys on the northwestern part of the Pacific plate with the goal of revealing the nature of the subduction inputs to the northeastern Japan arc (Fig. 1A). In this study, we present an improved picture of the spatial variations in incoming sediments and discuss its implications for subduction zone earthquakes. DATA ACQUISTION Since 2009, we have conducted extensive controlled-source seismic surveys, along lines as much as several hundred kilometers long, that mainly focus on the impact of plate bending-related faulting prior to subduction (Fujie et al., 2013(Fujie et al., , 2018Kodaira et al., 2014) (Fig. 1A, thick black lines). MCS data were collected by towing a 6-km-long, 444-channel hydrophone streamer cable and using the large tuned airgun array of R/V Kairei of the Japan Agency for Marine-Earth Science and Technology (JAMSTEC; Yokohama, Japan) (total volume of 7800 in 3 ). The tow depths of the airgun array and the streamer cable were 10 and 12 m, respectively (see the GSA Data Repository 1 for methodology). After the 2011 Tohoku earthquake, we also conducted another type of seismic survey that focused on the traces of giant coseismic slip near the trench (Kodaira et al., 2012;Nakamura et al., 2013). We used a small-offset MCS system (1.2-km-long, 192-channel hydrophone streamer cable, total airgun array volume of 380 in 3 ) and collected data along >100 densely aligned short survey lines across the Japan Trench (Fig. 1A, thin black lines). Tow depths of the airguns and streamer cable were relatively shallow (5 m and 6 m, respectively) to better focus on the sedimentary structure. Despite significant differences in the survey configuration, spatial variations in sediment thickness are well constrained by both surveys after applying standard post-stack time migration (Figs. 2 and 3). Spatial Variations in Sediment Thickness We define the sediment thickness by its associated two-way traveltime between the seafloor and acoustic basement. Acoustic basement is a seismic reflector with a large amplitude beneath the seafloor, usually corresponding to the top of the basaltic crust. Our MCS data show that sediment thicknesses typically are 300-500 ms (Fig. 1B), corresponding to 240-400 m for an average seismic velocity within the sediments of 1.6 km/s (Shipboard Scientific Party, 1980). Typical sediment thicknesses are generally consistent with those determined in a previous study (Divins, 2003), but we found a few areas where the sediments are observed to be extremely thin, such as areas A and C in Figure 1B. As described above, the base of the sediments in this region is generally a chert unit formed of lithified pelagic siliceous sediments. The top and bottom interfaces of the chert unit are commonly reflective, and there commonly exist patchy reflective zones between them (Shipboard Scientific Party, 1980) (Fig. 2C). However, in the thin-sediment areas, we do not observe the characteristic appearance of the chert unit (Fig. 2D). 
The absence of the chert unit implies that the lower part of the sediments, including the pelagic red-brown clay, is missing, because the chert unit is the lowermost part of the sediments and the clay layer is located immediately above the chert (Moore et al., 2015). We carefully investigated all MCS profiles and mapped areas where we could clearly recognize the characteristic appearance of the chert unit. We confirm a good correlation between sediment thickness and the distribution of the chert unit, which suggests that the pelagic clay is missing in the thin-sediment areas (Fig. 1C). P-Wave Velocity Beneath the Acoustic Basement In thin-sediment areas, some reflectors beneath the acoustic basement are also observed (Fig. 2D). We interpret this to mean that the acoustic basement might not be the top of the intact basaltic oceanic crust in these regions. To further investigate the nature of the acoustic basement, we utilize wide-angle seismic survey data. In 2014 and 2015, we deployed 88 ocean-bottom seismometers (OBSs) of JAMSTEC and GEOMAR (Kiel, Germany) at intervals of 6 km along line A4 (Fig. 1A) and fired the airgun array of R/V Kairei. We determined a two-dimensional (2-D) P-wave velocity (Vp) model by traveltime inversion (Fujie et al., 2013, 2016, 2018) using both OBS and MCS data (see the Data Repository). This Vp model indicates a simple layered oceanic plate structure (Fig. 2E). To show lateral variations within the crust, we extract one-dimensional (1-D) Vp-depth profiles from the 2-D Vp model every 10 km and categorize them as belonging to three possible segments: (1) the bend fault segment, where a horst-and-graben structure caused by bend faulting is observed; (2) the thin-sediment segment, where sediments appear to be extremely thin (area C of Fig. 1B); and (3) the thick-sediment segment, where the sediments have typical thickness. In general, the oceanic crust in the northwestern Pacific plate consists of upper crust (oceanic layer 2, with a large Vp gradient) and lower crust (oceanic layer 3, with almost constant Vp). All 1-D Vp profiles are basically consistent with this general structure, but there are intriguing differences among segments. The oceanic crust in the thick-sediment segment is considered to be "standard" in this region because Vp and its gradient are consistent with those of flat-ocean-floor parts of nearby survey lines A2 and A3 (Fujie et al., 2018). In the bend fault segment, the Vp values of the crust and mantle are significantly lower than in the thick-sediment segment. This Vp reduction near the trench is observed in nearby survey lines A2 and A3, as well as at many other subduction trenches around the world (e.g., Van Avendonk et al., 2011; Shillington et al., 2015; Grevemeyer et al., 2018). It has been explained as a consequence of bend faulting. In the thin-sediment segment, lower-crustal Vp is basically the same as in the thick-sediment segment. In contrast, Vp immediately beneath the acoustic basement is the lowest of the three segments. In addition, the boundary between oceanic layers 2 and 3, represented by a change in Vp gradient, is a few hundred meters deeper than in the other segments, indicating that the upper crust is a little thicker than in the other segments. P-S Conversion Interfaces at Approximately the Depth of the Acoustic Basement We also calculated receiver functions (RFs) to investigate in more detail the structure immediately beneath the acoustic basement. RFs are an effective tool for detecting P-wave to S-wave (P-S) conversion interfaces (e.g., Vinnik, 1977).
The advantage of applying RFs to controlled-source data is that we can choose the imaging target depth by limiting the offset distance. We chose an offset range of 9-25 km to highlight the depth of the sediment-crust boundary. In thick-sediment areas, a single P-S conversion interface was imaged at ∼2 s lag time (Fig. 2C). This is interpreted to be the acoustic basement, corresponding to the top of the oceanic crust. In contrast, in the thin-sediment segment (Fig. 2D), we observed multiple P-S conversion interfaces between 0 and 2 s. The top P-S conversion interface is interpreted to be the acoustic basement, and the others appear to be located immediately beneath it. DISCUSSION Tectonic Processes Forming Thin-Sediment Areas In the northwestern Pacific, many young (1-10 Ma), small monogenetic volcanoes, called petit-spot volcanoes, have been found in clusters (Hirano et al., 2006) on the incoming plate where it approaches the trench. The thin-sediment areas A and C (Fig. 1B) correspond to petit-spot cluster sites A and C of Hirano et al. (2006), respectively. This good correlation implies that the apparent thinning of the sediments is likely to be associated with this post-spreading volcanic activity. Ohira et al. (2018) carefully investigated Vp models derived from airgun-OBS data near area C and showed that the low Vp beneath the acoustic basement in area C cannot be explained by bend faulting or preexisting ancient tectonic features. Instead they concluded that the low Vp is associated with petit-spot volcanism. Hirano et al. (2006) compared petit-spot lava compositions to those of the Hawaiian north arch, where a >100-km-wide area is covered by extensive sheet flows of alkalic basalt, and proposed that sills and dikes have frequently intruded into sediments in petit-spot areas. A potential fossil outcrop of a petit-spot volcano in Central America supports this interpretation (Buchs et al., 2013). (Figure 3 caption: Time-migrated multichannel seismic (MCS) reflection profiles near the trench (northeastern Japan arc) obtained by the small-offset MCS system. Horizontal axis is CDP (Common Depth Point; CDP spacing is 3.125 m). The sediment is roughly 400-500 ms (two-way traveltime) along most lines, but in thin-sediment area A (profile HDSR171; Fig. 1B), it is much thinner.) Based on these previous studies and our observations, we propose that the acoustic basement in thin-sediment areas is not the top of the basaltic crust, but instead an apparent basement related to recent petit-spot-related magmatic intrusions (Fig. 4). Multiple P-S conversion interfaces in the thin-sediment segment suggest pervasive magmatic intrusions within the sediments, with the topmost magmatic intrusion (the apparent basement) masking seismic reflections beneath it. We conclude that recent volcanic activity related to petit-spot volcanoes is the origin of the apparent thinness of the sediment layer in these regions. Implications for Subduction Zone Earthquakes Pervasive magmatic intrusions should alter the nature of the sediments. First, feeder dikes would cut the layered sediments, and magmatic intrusions would disturb the preexisting stratigraphy. This should reduce the horizontal continuity of the sediments. Because chert is a hard siliceous sediment with a mechanical behavior significantly different from that of the soft sediments above it, magmas might be likely to intrude just above the chert unit and disturb the smectite-rich pelagic clay layer that is the origin of the smectite along the plate boundary fault.
Second, such magmatism within sediments should promote thermal metamorphism of surrounding sediments. Because smectite easily transforms into illite at relatively low temperatures on the order of ∼100 °C (Pytte and Reynolds, 1989) and illite has a significantly larger friction coefficient than smectite (Saffer and Marone, 2003), subduction of petit-spot areas would induce regional variations in friction along the plate boundary fault through spatially patchy illitization of the smectite-rich pelagic clay layer. Area A (Fig. 1B), one thin-sediment area related to the petit-spot volcanism, is currently just entering into the Japan trench at ∼39°N. The giant near-trench coseismic slip of the 2011 Tohoku earthquake did not propagate beyond 39°N according to most coseismic slip distribution models derived from seismic and geodetic data (e.g., Ide et al., 2011;Iinuma et al., 2012;Lay, 2018). Based on this correlation (Figs. 1B and 1C), we propose that magmatic intrusions and thermal metamorphism associated with petit-spot volcanism disturbed the smectite-rich pelagic clay layer in incoming sediments, and that the subduction of this disturbed area in turn prevented giant near-trench interplate coseismic slip from propagating further northward. In other words, the decrease in smectite or disturbance of the smectite-rich pelagic clay layer by magmatic intrusions could have played a critical role in arresting coseismic slip propagation during the 2011 Tohoku earthquake. Because petit-spot clusters are considered to be ubiquitously distributed on the oceanic Pacific plate (Machida et al., 2015), there are likely many other already-subducted petit-spot sites. Although most smectite is transformed into illite at depths >∼20 km due to elevated temperature (Peacock and Wang, 1999), petitspot-related magmatic intrusions within the sediments are still expected to affect the nature of the plate interface. The size of petit-spot clusters in areas A and C roughly corresponds to the size of the rupture zones of M7-M8 interplate earthquakes here (Yamanaka and Kikuchi, 2004), implying the possibility that some M7-M8 interplate earthquakes may be associated with the subduction of petit-spot-altered sediments. For further observational insights, we need to investigate the nature of the subducting plate boundary in greater spatial detail. We suggest that the mechanical and alteration effects of post-spreading magmatism on the incoming plate could be a major co-factor that shapes the seismic nature of the megathrust seismic zone. ACKNOWLEDGMENTS We thank the captain, crew, and technicians of R/V Kairei, R/V Yokosuka, and R/V Kaiyo. This study was partly supported by a KAKENHI grant (15H05718) from the Japan Society for the Promotion of Science. We thank four anonymous reviewers and editor Mark Quigley for their constructive comments, which greatly helped to improve this manuscript.
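As a quick check of the thickness conversion quoted in the data section (two-way traveltimes of 300-500 ms at an average sediment velocity of 1.6 km/s), the relation thickness = v × t/2 reproduces the 240-400 m range stated there; a small sketch follows, with an assumed helper name.

```python
def twt_to_thickness_m(twt_ms, v_km_s=1.6):
    """Convert two-way traveltime (ms) to thickness (m): thickness = v * t / 2."""
    t_s = twt_ms / 1000.0
    return v_km_s * 1000.0 * t_s / 2.0

for twt in (300, 400, 500):
    print(twt, "ms ->", twt_to_thickness_m(twt), "m")
# 300 ms -> 240 m, 400 ms -> 320 m, 500 ms -> 400 m, matching the 240-400 m range in the text
```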
3,606.4
2020-03-30T00:00:00.000
[ "Geology", "Environmental Science" ]
Neural Network-Based Beam Pumper Model Optimization Beam pumper is the earliest and most popular rod pumper driven by surface dynamic transmission devices. Drawing on modern theories and methods of industrial model design, the model optimization of beam pumper could promote the diversity, serialization, standardization, generalization, precision balance, and energy reduction of beam pumper design. Therefore, this study tries to optimize the model of beam pumper based on a neural network. Specifically, the system efficiency of beam pumper was decomposed, the surface and downhole working efficiencies were analyzed, and the model optimization flow was explained for beam pumper. Then, a radial basis function (RBF) neural network was established and trained by the sample data on beam pumper model. Besides, the mapping between model parameters and the optimization objective (system efficiency) was constructed. Moreover, the authors summed up the model optimization contents of beam pumper and predicted the relevant parameters of model optimization. The results demonstrate the effectiveness of our model. Introduction Beam pumper is the earliest and most popular rod pumper driven by surface dynamic transmission devices [1-4]. It has long attracted the attention of sci-tech researchers. In the past two decades, more and more new beam pumpers have been developed and put into production [5-11]. A key link of the research and development (R&D) for new beam pumpers is to fully consider the structural performance indices of the pumper, strike a balance between pumper model, pumper functions, and production efficiency, and maximize the economic value of the beam pumper [12-16]. Beam pumper designers are faced with a major task: optimize the model of beam pumper by modern theories and methods of industrial model design, without changing the working performance of beam pumper [17-21]. The model optimization could promote the diversity, serialization, standardization, generalization, precision balance, and energy reduction of beam pumper design. Considering the rods, tubes, and liquid columns of beam pumper, Zi-Ming et al. [22] established a three-dimensional (3D) dynamic model, which can be expressed as a set of partial differential equations. To make the model more reasonable, an experiment was carried out on South 1-2-22 Well in Daqing Oilfield, and the calculated torque curve was contrasted with the measured torque curve. The experimental results show that the 3D dynamic model is highly accurate and precise and applicable to the design and optimization of the structure and well operation of the beam pumper. Gu et al. [23] modeled and optimized beam pumper by an artificial neural network (ANN). The optimization was realized with Strength Pareto Evolutionary Algorithm 2 (SPEA2). In this way, the optimal set of operation parameters was obtained, which maximizes the oil production and minimizes power consumption. Li et al. [24] proposed a new particle swarm optimization (PSO) algorithm based on chaotic search and introduced it into the optimal design of beam pumper. The new PSO takes the minimum peak torque factor of the stroke of the beam pumper as the objective function, providing a novel way for the design of complex structures. Based on the API-RP-11L design guidelines, Yang et al. [25] put forward a design optimization method for beam pumper.
Their method not only offers an efficient and rapid optimization tool for the modular design of beam pumper but also provides an overall design formula that meets the API design specifications. Drawing on the domestic research on the modular design of beam pumper, Niu [26] suggested combining beam pumper modularization with parametric design. Following the function method, the main structural parameters were collected from the serial module division and subjected to parametric design. The overall performance of the design results was tested and analyzed in terms of both kinematics and kinetics. The current research on industrial model design and optimization mostly targets cars, home appliances, and computer numerically controlled (CNC) machine tools. The studies on beam pumper mainly focus on shape optimization issues like kansei engineering and the introduction of emotional factors. There is no report that predicts and optimizes the model parameters of beam pumper based on the results of system efficiency analysis. Therefore, this study attempts to optimize the model of beam pumper based on a neural network. The main contents are as follows: (1) The system efficiency of beam pumper was decomposed, and the surface and downhole working efficiencies were analyzed. (2) The model optimization flow was explained for beam pumper. (3) A radial basis function (RBF) neural network was established and trained by the sample data on beam pumper model. Besides, the mapping between model parameters and the optimization objective (system efficiency) was constructed. (4) After summing up the contents of beam pumper model optimization, the authors predicted the relevant optimization parameters, selected the best optimization scheme, and evaluated the effect of the scheme through a field test. The proposed model was proved effective through experiments. System Efficiency Analysis The model structure of beam pumper is illustrated in Figure 1, where 1-12 represent the skid base, motor, crank counterweight, connecting rod, crank, reduction box, latching beam, rear support, beam, horsehead, beam hanger, and bracket, respectively. The system efficiency of beam pumper is defined as the ratio of the energy consumed by the pumper to lift the oil to the total energy consumption of the pumper. Based on the system efficiency analysis, the model parameter optimization of beam pumper can be viewed as a multiobjective optimization problem. The traditional approximation models and optimization algorithms can effectively optimize the model parameters of some beam pumpers, but the optimization effect is dampened by their undesirable prediction accuracy. To predict and optimize the model parameters of beam pumper from the perspective of energy conservation, the first step is to decompose the system efficiency of beam pumper and analyze the energy-saving mechanism. Through a stepwise analysis of the efficiency decomposition of beam pumper, this study obtains the indices affecting the efficiency, finds ways to enhance the efficiency, and identifies the main indices for energy conservation of beam pumper. The balance and energy consumption of beam pumper were systematically analyzed, laying a solid theoretical basis for further improvement of beam pumper. The efficiency analyses are as follows:
e efficiency analyses are as follows: e system efficiency δ a of beam pumper is the product between the surface working efficiency δ b and the downhole working efficiency δ c : (1) e surface working efficiency of the pumper is the integrated efficiency of pumper components such reduction box, motor, and rod under normal conditions. e downhole working efficiency refers to the integrated efficiency of the core downhole components, such as sucker rod, tubing string, sealing device, and deep well pump. Among them, the motor efficiency is the input-output power ratio of the motor. It includes several forms of energy loss, such as copper loss, iron loss, and heat dissipation. e copper loss is equal to the product of the square of the current and the resistance. e iron loss is the loss of energy due to the resistance of ferrous materials during the change of magnetic field poles. e output efficiency of the belt of the reduction box is usually defined as the ratio of the output power of the reduction box to that of the motor, which is normally greater than 85%. e efficiency of the belt consists of wear power and deformation power. Let R y be the longitudinal curved elastic modulus of the belt; J be the crosssectional inertial moment of the belt; m be the rotation speed of the belt; β be the wrap angle of the belt. en, the wear power can be calculated by Let ΔT 2 be the power loss induced by belt deformation; u be the linear speed of the belt; X be the cross-sectional area of the belt; R K be the tensile elastic modulus of the belt; and G be the tension of the belt. en, the deformation power can be calculated by e efficiency of reduction box is composed of its bearing operation efficiency, and box member loss. Let H, u e , and g be the load, linear speed, and friction coefficient of the bearing, respectively. en, the frictional power loss T c of the supplementary bearing can be calculated by Let g be the friction coefficient; L be the coefficient; c be the diameter of the polished rod; f be the effective height of the sealing; T τ be the tubing pressure; T s be the power of the polished rod. en, the transmission efficiency of the sealing device can be calculated by e power loss of the deep well pump mainly covers three parts: the power loss induced by mechanical friction, the leakage power loss of the pump, and the hydraulic power loss of the pump. Let c 1 and k be the diameter and length of the plunger, respectively; Δt be the difference between upper and lower pressures of the plunger; ξ be the radial gap between the plunger and the steel drum; λ be the viscosity of the well liquid; σ be the eccentric ratio; ρ be the eccentricity. en, the power loss induced by mechanical friction of the deep well pump can be calculated by e leakage power loss of the pump can be calculated by Let c be the drag coefficient of the well fluid passing through the flow valve bank; φ be the density of the well fluid; W be the flow of the well fluid passing through the pump valve orifice; and χ be the size of the orifice. en, the hydraulic power loss of the pump can be calculated by e power loss of the tubing includes leakage power loss and hydraulic power loss of the tubing. Let Δt be the pressure difference between tubing and casing; W be the wellhead production of the oil well; and W be the displacement of the deep well pump. 
Then, the leakage power loss of the tubing can be calculated from these quantities. Let i be the grade number of the sucker rod; μ be the friction factor of the tubing corresponding to the grade i sucker rod; K_i be the length of the grade i sucker rod; c_ic be the equivalent inner diameter of the tubing corresponding to the grade i sucker rod; v_i be the well fluid speed corresponding to the grade i sucker rod; and W be the flow of the tubing. Then, the hydraulic power loss of the tubing can be calculated from these quantities. Let W be the daily fluid production of the pumper; F = F_c + 1000(T_τ − T_r)/φh be the effective lift of the pumper; F_c be the depth of the dynamic liquid level; and T_τ and T_r be the pressures of the tubing and casing, respectively. Then, the effective power of the pumper can be expressed in terms of these quantities. Considering the components of the beam pumper, the system efficiency was considered as the integration of the above aspects. If the efficiency is too low in any aspect, the overall efficiency of the beam pumper will be influenced. In terms of motor factors, it is necessary to consider the specification, power, working voltage, and power factor of the motor. In terms of belt rotation, it is necessary to consider the following issues: the size of the belt, the matching between wheels and grooves, the suitability of the degree of tightness, and the cleanness of the belt. In terms of reduction box operation, it is necessary to consider the sufficiency and quality of the gear oil, the intactness of components, and the smoothness of the oilway. In terms of the sealing device, it is necessary to consider the alignment of the horsehead, the matching between the sealing method and the wellhead, and the degree of airtightness. In terms of the sucker rod, it is necessary to consider the timely cleaning of wax, punctual adjustment/replacement of the centering device, and the utilization of large tubing. In terms of the deep well pump and tubing, it is necessary to consider implementing maintenance on a timely basis. Through the above analysis of the balance and energy-saving theory of the beam pumper, the following criteria were selected for evaluating practical application: changing the operating parameters of electrification, load torque, power frequency, load rate, and power factor; changing the transmission mode of the pumper to reduce the oscillation of the net torque; and adopting a structural balancing device for the pumper. Figure 2 explains the flow of beam pumper model optimization. According to the calibration principle of steady-state system efficiency, the model optimization of the beam pumper roughly contains four steps: model construction and scheme design, system efficiency calibration and correction, surface and downhole working condition calibration, and model optimization and verification. To improve the efficiency of the pumper, it is necessary to determine the order between efficiency correction and calibration according to the actual situation, or to perform them simultaneously. RBF-Based Optimization To enhance system efficiency by optimizing model parameters, this study obtains model samples through a reasonable design of the beam pumper and trains an RBF neural network based on these samples. Then, the authors set up the mapping relationship between the model parameters of the beam pumper, which are identified in the preceding section, and the optimization objective (system efficiency). The purpose is to lower pumper energy consumption and improve the system efficiency of the beam pumper.
The topology of the RBF neural network is inspired by the perception ability of the retina. Figure 3 shows the structure of the RBF neural network. Let a be the input; D_i be the center of the receptive field of retinal neurons; and H(·) be the radial basis activation function. Then, the signal outputted by the retinal neurons can be characterized by a function of these quantities. The transfer function between the input layer and the hidden layer is usually defined as a Gaussian distribution function. Let m be the number of input layer nodes, j = 1, 2, 3, ..., m, l = 1, 2, 3, ..., m; M be the number of output layer nodes; D_i be the data at node i in the hidden layer; ε_i be the width of the i-th node in the hidden layer, i = 1, 2, 3, ..., t; t be the number of hidden layer nodes; ‖a_l − D_i‖ be the Euclidean distance between a_l and D_i; and θ_ij be the weight from the i-th hidden layer node to the j-th output layer node. Then, the transfer function from the input layer to the output layer of the RBF neural network can be expressed in terms of these quantities. The above analysis shows that the RBF neural network is mainly determined by D_i, ε_i, and θ_ij. The hidden layer nodes of the RBF neural network can be obtained through k-means clustering (KMC) via the following steps. Step 1. Select t random cluster heads D_i from the w samples of model parameters of the beam pumper, and for any sample a_n calculate the class λ(n) of the sample. Step 2. Define the class of the cluster head that is closest to sample a_n, out of the t cluster heads, as λ(n), and let D_i be that cluster head; based on the calculation results of λ(n), recalculate the head D_i of each class. Repeat the above two steps until the cluster heads change very little, that is, until the cluster heads no longer change, thereby determining the hidden layer nodes of the proposed network. Let ζ_i be the minimum Euclidean distance between the i-th hidden layer node and the other nodes, and υ be the overlapping coefficient. Then, the node width ε_i can be determined from these quantities. After D_i and ε_i are determined, the weight θ_ij can be obtained by the pseudo-inverse matrix method, according to the inputted model parameters and outputted optimized parameters of network training. In actual practice, the hidden layer nodes of the RBF neural network obtained through KMC face several problems: the clustering results are significantly affected by the initial nodes, the clustering may fall into a local optimum, and the training algorithm may fail to converge. To avoid these problems and diversify the hidden layer nodes, this study introduces the niche-sharing technique to determine the t hidden layer nodes D_i among the w model parameter samples. In the niche-sharing mechanism, the Euclidean distance between individuals a_i and a_j is denoted as ζ_ij; the niche radius is denoted as ε_mic; and the adjustment constant of the sharing function is denoted as ϕ. For individuals a_i and a_j in the population, a sharing function is defined from these quantities. For a niche containing l individuals, the sharing degree e_i of individual a_i in the population is defined as the sum of the sharing-function values over the niche. Let g_i be the initial fitness of a_i.
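As a rough, self-contained sketch of the KMC-based construction just described: the Gaussian form of the hidden-layer activation, the width rule eps_i = overlap x (minimum distance to the other centers), and the pseudo-inverse step are assumptions here, since the displayed formulas are not reproduced in this excerpt; all names and the toy data are hypothetical. The niche-sharing refinement introduced above is sketched separately after the next passage.

import numpy as np

def kmeans_centers(X, t, iters=100, seed=0):
    # Plain k-means (KMC) to pick t hidden-layer centers D_i from the samples, as in Steps 1-2.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=t, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
        new_centers = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
                                for i in range(t)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers

class RBFNet:
    # Gaussian RBF network: hidden activation H_i(a) = exp(-||a - D_i||^2 / (2 eps_i^2)).
    # Assumed width rule: eps_i = overlap * (minimum distance from D_i to the other centers).
    def __init__(self, centers, overlap=1.0):
        self.D = centers
        d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        self.eps = overlap * d.min(axis=1)
        self.W = None
    def _hidden(self, X):
        r = np.linalg.norm(X[:, None, :] - self.D[None, :, :], axis=2)
        return np.exp(-(r ** 2) / (2.0 * self.eps[None, :] ** 2))
    def fit(self, X, y):
        # Output weights theta_ij via the pseudo-inverse of the hidden-layer matrix.
        self.W = np.linalg.pinv(self._hidden(X)) @ y
        return self
    def predict(self, X):
        return self._hidden(X) @ self.W

# Toy usage: map 4 hypothetical model parameters to a synthetic stand-in for system efficiency.
X = np.random.default_rng(1).random((200, 4))
y = 0.4 + 0.2 * X[:, 0] - 0.1 * X[:, 1] ** 2
model = RBFNet(kmeans_centers(X, t=20)).fit(X, y)
print(model.predict(X[:3]))

In this form the only free choices are the number of hidden nodes t and the overlap coefficient.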
Then, the sharing fitness g_i′ of that individual can be expressed as the initial fitness scaled by the sharing degree. Based on the niche-sharing technique, the t hidden layer nodes D_i can be determined as follows. To derive the sharing degree SD(ζ_nm) between any two model parameter samples a_n and a_m of the beam pumper, the first step is to compute the Euclidean distance ζ_nm between them. The sum Ψ of the Euclidean distances between a_n and the other samples is calculated next, the niche radius ε_mic is obtained from this sum, and SD(ζ_nm) is then evaluated. For any sample a_n, the sharing degree e_n is calculated from the sharing-function values, and the sharing fitness g_n′ of sample a_n follows. Finally, the g_n′ values are ranked in descending order, and the t samples with the greatest g_n′ are chosen as the hidden layer nodes D_i of the proposed neural network. Table 1 summarizes the contents of model optimization for the beam pumper. Table 2 lists the prediction results on the model optimization parameters. Experiments and Results Analysis According to the schemes in Table 2, the improved beam pumper had smaller torques and a smaller cyclic load coefficient than the original machine. The mean input power of the motor can be obtained by the formulas in the preceding sections. The saved electrical energy is the energy consumption corresponding to the mean active power reduced by the falling cyclic load coefficient. The following conclusions can be drawn from the analysis of the model optimization schemes of the beam pumper: the power phase difference factor of the motor is inversely proportional to the periodic load coefficient, because the reduction of the factor lowers the difference. The various measures of model optimization all contribute to energy saving. The only difference lies in the hanging point load and the initial value. Model optimization schemes like adding a balancing device do not change the size of the rod or the stroke of the pumper, but the other optimization measures can reduce the hanging point load, directly reducing the load of the pumper and indirectly increasing the operation time and safety level of the machine. The three curves represent the mean torque, the maximum torque, and the minimum torque. According to the actual well condition specified by Yang et al. [25], the best choices are beam balancing and adopting double horseheads. It can be observed that the beam pumper after the model optimization through system efficiency analysis differed slightly in performance from the original beam pumper, but the optimized beam pumper achieved more stable torque curves and ideal torque conditions. This study prepares a training set of 500 parameter samples and a test set of 100 samples, which are related to the beam pumper model. The small test set may lead to overfitting. To solve the problem, cross validation was carried out to optimize the model. Since system efficiency is the optimization objective, the error and precision of the output efficiency of our model were selected as the metrics of model prediction performance. Figure 6 compares the actual system efficiency and the predicted system efficiency in different groups of test samples. The system efficiency was predicted well for every sample. Figure 7 compares the prediction errors of different test groups. Obviously, the mean absolute errors (MAEs) were mostly smaller than 1. A few test groups had a slightly larger MAE because the system energy consumption varies with working conditions. But the overall errors are desirable.
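Returning to the niche-sharing selection of hidden-layer nodes described earlier in this section: the sketch below assumes a standard triangular sharing function and a niche radius derived from the mean pairwise distance, since the displayed equations are not reproduced here; the sample data and all names are hypothetical.

import numpy as np

def niche_sharing_centers(X, t, phi=1.0, initial_fitness=None):
    # Pick t hidden-layer centers by niche sharing, following the procedure above.
    # Assumed sharing function: sh(z) = 1 - (z / eps_mic)**phi for z < eps_mic, else 0.
    # Assumed shared fitness: g' = g / e (initial fitness divided by sharing degree).
    n = len(X)
    g = np.ones(n) if initial_fitness is None else np.asarray(initial_fitness, float)
    # Pairwise Euclidean distances zeta_nm between model-parameter samples.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Niche radius from the mean pairwise distance (one simple, assumed convention).
    eps_mic = dist.sum() / (n * (n - 1))
    share = np.where(dist < eps_mic, 1.0 - (dist / eps_mic) ** phi, 0.0)
    e = share.sum(axis=1)                  # sharing degree of each sample
    g_shared = g / np.maximum(e, 1e-12)    # shared fitness g'
    best = np.argsort(g_shared)[::-1][:t]  # keep the t samples with the largest g'
    return X[best]

# Usage with hypothetical beam-pumper model-parameter samples:
X = np.random.default_rng(2).random((100, 4))
centers = niche_sharing_centers(X, t=15)
print(centers.shape)   # (15, 4)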
After finalizing the model of our neural network, the outliers were removed from the verification points of system efficiency. Figure 8 shows the errors between the actual and predicted system efficiencies. It can be seen that the predicted system efficiencies on the test set were not very different from the actual values, and the output points were near the straight line of actual values. Hence, the proposed neural network boasts a good fitting effect and a high prediction precision. The experimental results show that the proposed optimization scheme basically achieved the expected effect, especially on consumption reduction and efficiency improvement. However, the stability, reliability, and energy efficiency of the scheme in actual application need to be further tested. After the improved beam pumper operated stably for several hours, the current, power, and efficiency of the entire device were measured. The results show that the energy efficiency of the improved device fell short of expectations, and only 11.5% of electricity was saved compared to the original beam pumper. Besides, the rear balancing weight of the beam is quite heavy in the weight balancing structure. It adds to the difficulty for oil production engineers to complete the balancing operation and slightly reduces safety. Nevertheless, the proposed scheme is advantageous in terms of low retrofitting cost and high time efficiency. Conclusion This study puts forward an optimization approach for the model of the beam pumper. First, the system efficiency of the beam pumper was decomposed, the surface and downhole working efficiencies of the beam pumper were analyzed, and the flow of beam pumper model optimization was clarified. Next, the model samples of the beam pumper were adopted to train the proposed RBF neural network, and the mapping relationship was established between model parameters and the optimization objective (system efficiency). Through experiments, the authors summed up the contents of model optimization for the beam pumper, predicted the model optimization parameters, and drew the torque curves of the surface and downhole component optimization schemes, revealing that the optimized beam pumper achieved more stable torque curves and ideal torque conditions. Finally, the system efficiencies were obtained at each test sample in different groups, the prediction errors of different test groups were compared, and the predicted system efficiencies were contrasted with the actual efficiencies. The results show that the proposed neural network boasts a good fitting effect and a high prediction precision. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
tokens: 5,331.2 | created: 2022-05-09 | fields: ["Physics", "Engineering"]
AxPcoords & parallel AxParafit: statistical co-phylogenetic analyses on thousands of taxa Background Current tools for Co-phylogenetic analyses are not able to cope with the continuous accumulation of phylogenetic data. The sophisticated statistical test for host-parasite co-phylogenetic analyses implemented in Parafit does not allow it to handle large datasets in reasonable times. The Parafit and DistPCoA programs are the by far most compute-intensive components of the Parafit analysis pipeline. We present AxParafit and AxPcoords (Ax stands for Accelerated) which are highly optimized versions of Parafit and DistPCoA respectively. Results Both programs have been entirely re-written in C. Via optimization of the algorithm and the C code as well as integration of highly tuned BLAS and LAPACK methods AxParafit runs 5–61 times faster than Parafit with a lower memory footprint (up to 35% reduction) while the performance benefit increases with growing dataset size. The MPI-based parallel implementation of AxParafit shows good scalability on up to 128 processors, even on medium-sized datasets. The parallel analysis with AxParafit on 128 CPUs for a medium-sized dataset with an 512 by 512 association matrix is more than 1,200/128 times faster per processor than the sequential Parafit run. AxPcoords is 8–26 times faster than DistPCoA and numerically stable on large datasets. We outline the substantial benefits of using parallel AxParafit by example of a large-scale empirical study on smut fungi and their host plants. To the best of our knowledge, this study represents the largest co-phylogenetic analysis to date. Conclusion The highly efficient AxPcoords and AxParafit programs allow for large-scale co-phylogenetic analyses on several thousands of taxa for the first time. In addition, AxParafit and AxPcoords have been integrated into the easy-to-use CopyCat tool. Background One of the basic questions in evolutionary analyses [1] is whether parasites (e.g., lice or Papillomaviruses) or mutu-alists have co-speciated with their respective hosts (e.g., mammals). The constant accumulation of DNA and AA sequence data coupled with recent advances in tree build-ing software, such as TNT [2], MrBayes [3], GARLI [4] or RAxML [5], allow for large-scale phylogenetic analyses with several hundred or thousand taxa [6][7][8][9][10][11][12]. Thus, largescale co-phylogenetic studies have also potentially become feasible. However, most common co-phylogenetic tools or methods such as BPA, TreeMap or TreeFitter (see review in [13]) are not able to handle datasets with a large number of taxa or have not been tested in this regard with respect to their statistical properties. Therefore, there is a performance and scalability gap between tools for phylogenetic analysis and meta-analysis. The capability to analyze large datasets is important to infer "deep co-phylogenetic" relationships which could otherwise not be assessed [14]. Parafit [15] implements statistical tests for both overall phylogenetic congruence as well as for the significance of individual associations. Extensive simulations have shown that the Parafit tests are statistically well-behaved and yield acceptable error rates. The method has been successfully applied in a number of biological studies [16][17][18][19]. In addition, the Type-II statistical error of Parafit decreases with the size of the dataset (see [15]), i.e., this approach scales well on large phylogenies of hosts and associates. 
Due to these desirable properties, recent work on CopyCat [14] focused on improving the usability of Parafit via a Graphical User Interface (GUI) and automation of the analysis pipeline which transforms phylogenetic trees to patristic (tree-based) distance matrices, converts distance matrices to matrices of eigenvectors using DistPCoA [20], invokes Parafit, and parses input, intermediate, as well as output files. However, co-phylogenetic analyses with CopyCat can not be conducted on large datasets due to the excessive run time requirements of Parafit and DistPCoA, which represent the by far most compute-intensive part of the CopyCat analysis pipeline. Here we present AxParafit and AxPcoords which are highly optimized and parallelized versions of Parafit and DistPCoA respectively. As outlined by the case-study on smut fungi on page 6 these accelerated programs allow for more thorough large-scale co-phylogenetic analyses and extend the applicability of the approach by 1-2 orders of magnitude, thus closing the aforementioned performance gap concerning current phylogenetic meta-analysis tools. Coupled with the easy-to-use CopyCat tool AxParafit/ AxPcoords facilitate statistical co-phylogenetic analyses on the largest trees that can currently be computed. Implementation For programming convenience and portability as well as due to the structure of the original Fortran code we reimplemented Parafit and DistPCoA in C from scratch. Sequential Optimization The sequential C code was optimized by reducing unnecessary memory allocations for matrices in AxPcoords/ AxParafit and using a faster method to permute matrices in AxParafit. Thereafter the compute-intensive for-loops in AxParafit/ AxPcoords were manually tuned. After those initial optimizations we profiled both programs and found that the run-times were now largely dominated (over 90% of total execution time) by a dense matrix-matrix multiplication in AxParafit and the computation of eigenvectors/eigenvalues in AxPcoords respectively. To further accelerate the programs we integrated function calls to the highly optimized matrix multiplication of the BLAS (Basic Linear Algebra Package [21]) package and eigenvector/eigenvalue decomposition in LAPACK (Linear Algebra PACKage [22]). For BLAS we assessed the usage of ATLAS BLAS (Automatically Tuned Linear Algebra Software, math-atlas.sourceforge.net) as well as the ACML BLAS (AMD Core Math Library [23]) libraries on a 2.4 GHz AMD Opteron CPU. The ACML package showed slightly faster speeds (≈ 7-9%). However, AxParafit also provides an interface to the INTEL MKL (Math Kernel Library) and ATLAS BLAS implementations. AMD ACML, INTEL MKL, and ATLAS are all freely available for academic use. AxParafit can also be compiled without BLAS and rely on a manually tuned matrix multiplication which is approximately 4 times slower. AxPcoords can use either the LAPACK functions implemented in the AMD ACML or INTEL MKL libraries. In addition, AxPcoords can also make use of the GNU scientific library [24] for eigenvector/eigenvalue computations. The tuned programs were designed to yield exactly the same results as Parafit and DistPCoA. Note however, that in contrast to AxPcoords we observed numerically unstable results for DistPCoA on datasets with large association matrices, containing more than 4,096 entries. This is due to some well-known problems with the stability of eigenvector/eigenvalue decomposition [25][26][27] on large datasets and due to the fact that the original Parafit code uses the algorithm from [28]. 
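To make the role of the BLAS call mentioned above concrete: AxParafit itself is C code linked against ACML/MKL/ATLAS, so the Python/SciPy fragment below is only an illustration of the same idea, namely replacing a hand-written triple loop by a tuned dgemm while obtaining numerically identical results. The matrix sizes and the naive routine are arbitrary choices for this sketch.

import time
import numpy as np
from scipy.linalg.blas import dgemm   # tuned BLAS routine (ATLAS/MKL/ACML/OpenBLAS under the hood)

def naive_matmul(a, b):
    # Hand-written triple loop, standing in for a manually tuned loop in plain code.
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            c[i, j] = s
    return c

rng = np.random.default_rng(0)
a, b = rng.random((150, 150)), rng.random((150, 150))

t0 = time.perf_counter(); c_naive = naive_matmul(a, b); t1 = time.perf_counter()
c_blas = dgemm(1.0, a, b);                               t2 = time.perf_counter()

print(np.allclose(c_naive, c_blas))                     # identical results
print(f"naive: {t1 - t0:.3f} s, BLAS dgemm: {t2 - t1:.6f} s")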
Given these well-known stability issues, the integration of the thoroughly tested LAPACK routines, apart from speed benefits, also yields increased numerical stability. We integrated AxPcoords and AxParafit into CopyCat [14]. Figure 1 provides a screen-shot of CopyCat with a drop-down menu that allows the user to select AxParafit/AxPcoords for executing the analyses. Parallelization AxPcoords requires less than 24 hours of run-time on a single CPU, even for distance matrices with several thousands of taxa. Therefore, we exclusively focused on the parallelization of AxParafit which requires run-times of several days or weeks on large datasets. The execution time of Parafit depends on the sizes of input matrices A, B, and C with dimensions n1 × n2, n4 × n1, and n3 × n2, respectively (for details see [15]). The complexity is roughly O(nonZero(A) · n3 · n4 · n1 · p). The term n3 · n4 · n1 is the complexity of the dense matrix multiplication in AxParafit. The variable p is the user-specified number of permutations that shall be executed (typically 99-9,999, not counting the original permutation) and nonZero(A) is the number of non-zero elements in the binary association matrix A. The program executes two main steps: the global test of co-speciation with complexity O(n3 · n4 · n1 · p) and the individual tests with complexity O(nonZero(A) · n3 · n4 · n1 · p). Since in real-world analyses nonZero(A) ≫ 1, we only parallelized all individual tests of co-speciation which typically generate over 99% of the total computational load. Our approach represents a trade-off between the amount of programming effort required for the parallelization and the expected performance gains. Thus, initially the global test of co-speciation must be executed using the sequential version of AxParafit. The sequential program provides an option to conduct the global test, write a binary output file that can be used to start the parallel computation of individual host-parasite links, and then exit. The statistical test of individual associations has been parallelized with MPI (Message Passing Interface) via a master-worker scheme. The parallelization is straightforward since all tests of individual associations are independent from one another and can thus be computed independently on individual workers. Moreover, each individual test has approximately the same execution time, such that there are no problems due to load imbalance. The maximum number of CPUs that can be used by our parallelization is thus nonZero(A). However, this can be improved by using the ACML or MKL BLAS implementations that exploit fine-grained loop level parallelism on SMP (Symmetric Multi-Processing) architectures. This allows for a more efficient utilization of hybrid supercomputer architectures. Moreover, it might help to improve performance on huge datasets where SMP implementations can profit from super-linear speedups due to increased cache efficiency. Results and Discussion The current section is split into two parts: Part 1 describes the computational results while Part 2 outlines the substantial benefits of using AxParafit for large-scale empirical co-phylogenetic studies. Computational Performance Here we provide performance data regarding the purely computational aspects of AxParafit. Experimental Setup To conduct computational experiments we used an unloaded system of 36 4-way AMD 2.4 GHz Opteron processors with 8 GB of main memory per node which are interconnected by an Infiniband switch. Parafit and DistPCoA were compiled using g77 -ffixed-line-length-0 -ff90intrinsics-delete -O3.
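As an aside before the compiler details continue: the master-worker distribution of the independent per-link tests described in the Parallelization paragraph can be sketched as below. This is not AxParafit's actual C/MPI code; the test function and the link list are placeholders, and the sketch assumes at least two MPI processes and fewer worker processes than links.

# Run with, e.g.: mpiexec -n 8 python worker_sketch.py
from mpi4py import MPI
import random

def test_single_link(link, permutations=99):
    # Placeholder for one individual host-parasite association test.
    random.seed(link)
    return link, sum(random.random() for _ in range(permutations)) / permutations

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                   # master: hand out links, collect results
    links = list(range(500))                    # indices of non-zero entries of A (placeholder)
    results, next_link = [], 0
    for dest in range(1, size):                 # prime every worker with one task
        comm.send(links[next_link], dest=dest); next_link += 1
    while len(results) < len(links):
        status = MPI.Status()
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
        if next_link < len(links):
            comm.send(links[next_link], dest=status.Get_source()); next_link += 1
        else:
            comm.send(None, dest=status.Get_source())   # no more work: tell worker to stop
    print(f"collected {len(results)} individual tests")
else:                                           # worker: process links until told to stop
    while True:
        link = comm.recv(source=0)
        if link is None:
            break
        comm.send(test_single_link(link), dest=0)

Because every individual test is independent and takes roughly the same time, this simple scheme keeps all workers busy without any explicit load balancing, which mirrors the argument made above.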
AxParafit and AxPcoords were compiled with -O3 -fomit-frame-pointer -funroll-loops and linked with the AMD ACML library. We also assessed additional compiler optimizations (-fomit-frame-pointer, -funroll-loops, -m64, -march=k8) with g77 for Fortran, which actually led to a performance decrease of Parafit and DistPCoA (data not shown). In order to assess the performance of AxParafit we extracted subsets from a large empirical dataset with more than 30,000 host-associate links (collected from entries in the EMBL database [29]), which we are currently analyzing with our tools. (Figure 1 caption: Screen-shot of the AxParafit/AxPcoords option in CopyCat. The screen-shot shows the CopyCat drop-down menu that allows the user to select AxParafit/AxPcoords for executing the analyses and to switch between the U and W modes of branch length computation.) We sampled square association matrices A, i.e., n1 = n2, of dimensions 128, 256, 512, 1,024, and 2,048. The number nonZero(A) was 128, 256, 512, 1,024, and 2,048 respectively. The number of permutations p was set to 99, 99, 9, 2, and 2 respectively. A complete test on the dataset of size 4,096 was not conducted with Parafit due to the extremely long run-times on n1 = n2 = 2,048, which already amount to 19.9 days compared to 7.7 hours required by AxParafit. To test AxPcoords we used the same compiler switches as indicated above and a subset of the square association matrices with nonZero(A) amounting to 512, 1,024, 2,048, and 4,096 respectively. Results In Figure 2 we provide the sequential run-time improvement of AxParafit over Parafit. The acceleration obtained by AxParafit increases with growing dataset size and attains a factor of 61.86 on the association matrix of size 2,048. The increase of the performance improvement with growing dataset size is mainly due to the larger efficiency of both our own optimizations and the cache blocking strategies used in the BLAS implementations. Figure 3 provides the memory use of AxParafit and Parafit in MB for quadratic A-matrices of sizes 128, 256, 512, 1,024, 2,048, and 4,096 (note that the dataset of size 4,096 was not run to completion). To test AxPcoords we used distance matrices of sizes 512, 1,024, 2,048, and 4,096. Run-time improvements range from 8.8 to 25.74. The run on 4,096 with DistPCoA apparently terminated but did not write a results file, most probably due to numerical instability (Pierre Legendre, personal communication). Figure 4 shows the run-time improvement of AxPcoords over DistPCoA for quadratic distance matrices of sizes 512, 1,024, 2,048, and 4,096. As already mentioned, the run on 4,096 with DistPCoA did not write a results file. Tests on smaller distance matrices, e.g., of size 128 and 256, were omitted due to the low execution times, which were below 10 seconds. On the largest matrix AxPcoords terminated within only 399 seconds as opposed to 10,268 seconds required by DistPCoA. We assessed scalability of parallel AxParafit using the association matrix A of size 512 on 4, 8, 16, 32, 64, and 128 processors with p = 99. Figure 5 provides the speedup with respect to the number of worker processes.
We indicate speedup values for the parallel part (SpeedupIndividual, computation of individual host-parasite links) as well as for the sequential plus the parallel part of the program (SpeedupWhole), i.e., we added the sequential computation time for the global test to the parallel execution time. On 128 processors the computation took only 50 seconds. An analysis of this dataset with the sequential version of Parafit would take approximately 20 hours. A Real-World Example In order to provide an example for the substantial benefits of performing a large-scale co-phylogenetic analysis with AxParafit we provide a real-world study on smut fungi and their host plants. Experimental Data We collected a large sample of associations of smut fungi and their host plants. Smut fungi comprise more than 1,500 species of obligate phytoparasites and are arranged in the taxa Entorrhizomycetes, Microbotryales, and Ustilaginomycotina. These parasites cause syndromes such as dark, powdery appearance of the mature spore masses or may even lead to plant deformation in some cases [30,31]. The Ustilaginomycotina also comprise obligate plant parasites with distinct morphology [30]. With a few exceptions, hosts of smut fungi belong to the Angiosperms [30]. For economically important hosts, such as barley and other cereals, smut fungi may cause considerable yield losses (see e.g., [32]). Phylogeny and taxonomy of genera and higher ranks has been derived from sound molecular and ultrastructural data in recent years (see [30] and references therein). However, apart from the work presented in [14], co-phylogenetic analysis of smut fungi have so far been restricted to single genera with comparatively few species [33,34]. Including synonyms, our data set contained 3,912 different fungus-plant associations. In order to retrieve taxon IDs and to construct taxonomy trees for hosts and parasites [14], we used the NCBI taxonomy release of September 01, 2007. For host and parasite species names that were not found in the NCBI taxonomy, the search was repeated after reducing the taxon name to the respective genus. In this way, a total of 2,362 different associations could be identified that covers 413 smut fungi and 1,400 host plants. Thus, the dataset assembled was more than three times larger than the one recently analyzed in [14], which contained 645 associations, corresponding to 140 smut fungi and 437 host plants. The Parafit analysis of this comparatively small dataset took already more than a week. For both hosts and parasites, two trees were constructed, one tree with branch lengths corresponding to the "true" (denoted as W for Weighted) taxonomical distance [14] and one with all branch lengths set to 1 (denoted as U for Un-weighted/Uniform). As outlined on page 4 the computational complexity of AxParafit is O(nonZero(A)n 3 n 4 n 1 p) and thus the execution time requirements for this larger dataset increase significantly. Inference with AxParafit Production runs with Parafit and AxParafit on an initial version of our dataset were started on August 29, 2007. While the Parafit inferences with 99 permutations on this initial dataset were still running at the time of writing this manuscript(September 9, 2007), the parallel AxParafit run with 99 permutations terminated within less than 480 seconds on 128 CPUs of the Infiniband cluster. This made the results available immediately and allowed us to identify a bug in the data collection script. 
The buggy version of this script did not take the presence of non-unique scientific taxon names (e.g., Setaria (Magnoliophyta, Poales) and Setaria (Nematoda, Filarioidea)) into account to identify NCBI taxon IDs. Such errors are unfortunately typical and frequent in bioinformatics analysis pipelines. As a typical example of such errors consider the retraction of "Measures of Clade Confidence Do Not Correlate with Accuracy of Phylogenetic Trees" by Barry G. Hall due to an error in a perl script [48]. In addition to the rapid detection of input data errors, the significant performance gains obtained by sequential optimization and parallelism allow for the assessment of different program parameters and analysis options, such as trees with different patristic distances (U and W trees) as well as the impact of the number of permutations on the results (AxParafit was run with 99/999/9,999 permutations on the U and W data), i.e., a significantly more thorough and detailed analysis. The absolute execution times for AxParafit on 128 CPUs for 99/999/9,999 permutations are indicated in Table 1. Essentially, 99 permutations could be conducted within 7 minutes, 999 permutations in much less than 2 hours, and 9,999 permutations overnight in about 12 hours, such that the whole study, including the detection of the script error and the analysis of the results, could be completed in less than a week. As indicated in Table 2 there are a number of links (max. 48 out of 2,362 ≈ 2%) that are not uniformly significant or uniformly insignificant at low p-values between analyses with a distinct number of permutations. AxParafit therefore allows for rapid and much more thorough computation and analyses of large co-phylogenetic datasets. The results indicate that U-based analyses are in general more sensitive to the number of permutations than W-based runs. Note that the number of host/parasite eigenvectors for U (1,390/411) was higher than for W (1,200/372), which explains the longer execution times and potentially the larger differences in significance values. Biological Interpretation of Results In the following, we focus on the results obtained with 9,999 permutations and branch lengths scaled in terms of taxonomical distances (W-labeled results). The global test indicates a highly significant co-phylogenetic relationship (p = 0.0001). An overview of the results for individual host-parasite links based on the smut fungi genera is provided in Figure 6. Major taxonomic groups of hosts and parasites are indicated according to the NCBI taxonomy release used. Based on a significance threshold of p = 0.05 and the ParafitLink1 statistics [15], a total of 578 insignificant and 1,784 significant associations is obtained. As in our earlier study [14], genera of smut fungi are rather uniform with respect to their significance values, which facilitates the identification of a general distribution pattern with respect to significant and insignificant links, i.e., the "deep co-phylogeny" of smut fungi. The single most important factor appears to be whether the hosts belong to the monocots (i.e., Liliopsida) or not. Entorrhiza species, which are taxonomically isolated, are mostly linked with monocots (Poales) and do thus not contribute significantly to the overall fit between host and parasite phylogenies. In the case of Microbotryales, the majority of taxa are pathogenic on core eudicots, resulting in significant links.
Fewer associations with monocots (mostly Poales) are present, which are considered insignificant. The same pattern can be observed in the class Exobasidiomycetes within Ustilaginomycotina: A minority of hostparasite links is within monocots (Poales, but also other orders), which are considered insignificant, whereas the associations with other hosts (Selaginellales, basal Magnoliophyta, magnoliids, and stem and core eudicots) are significant. Inverse relationships are present in the class Ustilaginomycetes within Ustilaginomycotina. Here, most species infect monocots, mainly Poales, significantly increasing the congruence between host and parasite taxonomy trees, whereas the associations with core eudicots appear to be insignificant. Accordingly, the current analysis that is based on a considerably larger empirical sample (e.g., 66 instead of 25 included genera of smut fungi) confirms earlier results [14]. Therefore, we can generalize the observation that the difference between Poales and non-Poales hosts is crucial for the distribution of significance values to the distinc- tion between monocot and non-monocot hosts. We also observe a small number of exceptions from this general pattern. For instance, in Urocystis (Ustilaginomycetes), which occurs on a variety of host groups, the links with stem eudicots (species of Ranunculaceae) are significant, and a single link with monocots (PACCAD clade within Poaceae) is judged as insignificant. Thus, rather subtle details of the host-parasite relationships, such as the presence of Urocystis on several closely related Ranunculaceae hosts and its presence on distantly related hosts within Poaceae, are recognized by the AxParafit algorithm, and the uniform overall pattern does not merely reflect the relatively low topological resolution present in the taxonomy trees. Some of the results obtained may also be due to flaws in the taxonomy of the species included, particularly in the nomenclature of the parasites. For instance, Entorrhiza isoetis is most likely conspecific with Ustilago isoetis [31]. At present it is even doubtful whether this species belongs to smut fungi (R. Bauer, personal communication). Thus, the associations with Isoetes (Lycopodiophyta) mentioned in Scholz and Scholz [44], which show different significance values than the majority of hosts links in either Ustilago or Entorrhiza, are dubious. Likewise, the exceptional associations of Entyloma with monocots are probably due to species names that would need to be recombined into genera of the Georgefischeriales [37]. Whereas these flaws have to be corrected by considering more comprehensive lists of species and synonyms in monographs and in future releases of the NCBI database, it is apparent that neither the highly significant overall co-phylogenetic relationship nor the general pattern regarding individual host-smut fungus links would be affected by the removal of the doubtful associations. Rather, their influence is overcome by the large total sample size; for each parasite genus dubious links are few relative to the total number of links or not present at all. Likewise, there are few differences in the significance between analyses with a distinct number of permutations (see Table 2). Discrepancies between U and W are also comparatively small (see Table 3). With 9,999 permutations, they are restricted to four genera of smut fungi and only affect hosts, such as Urocystis on monocots in Asparagales and Dioscoreales (details not shown), with an intermediate taxonomic position. 
The analysis process presented here underlines the advantage of the large-scale approach to co-phylogenetic tests, that is enabled by AxPcoords/AxParafit. Furthermore, because many problems are more easily recognized after conducting preliminary runs, re-analysis after applying corrective measures may be necessary for many empirical datasets. Thus, efficient implementations and parallelism are of great practical importance for the analysis pipeline. Conclusion We have produced highly optimized and efficient implementations of the two most compute-intensive components for P. Legendre's statistical test of host-parasite cospeciation. The parallel implementation of AxParafit scales well up to 128 CPUs on a medium-size dataset. AxParafit and AxPcoords have been integrated into the CopyCat tool and are freely available for download as open source code. Future work will mainly cover large-scale production runs with AxParafit.
tokens: 5,315.8 | created: 2007-10-22 | fields: ["Biology", "Computer Science"]
Scaling of Droplet Breakup in High-Pressure Homogenizer Orifices. Part I: Comparison of Velocity Profiles in Scaled Coaxial Orifices: Properties of emulsions such as stability, viscosity or color can be influenced by the droplet size distribution. High-pressure homogenization (HPH) is the method of choice for emulsions with a low to medium viscosity with a target mean droplet diameter of less than 1 µm. During HPH, the droplets of the emulsion are exposed to shear and extensional stresses, which cause them to break up. Ongoing work is focused on better understanding the mechanisms of droplet breakup and relevant parameters. Since the gap dimensions of the disruption unit (e.g., flat valve or orifice) are small (usually below 500 µm) and the droplet breakup also takes place on small spatial and time scales, the resolution limit of current measuring systems is reached. In addition, the high velocities impede time-resolved measurements. Therefore, five-fold and fifty-fold magnified optically accessible coaxial orifices were used in this study while maintaining the dimensionless numbers characteristic for the droplet breakup (Reynolds and Weber number, viscosity and density ratio). Three matching material systems are presented. In order to verify their similarity, the local velocity profiles of the emerging free jet were measured using both a microparticle image velocimetry (µ-PIV) and a particle image velocimetry (PIV) system. Furthermore, the influence of the outlet geometry on the velocity profiles is investigated. Similar relationships were found on all investigated scales. The areas with the highest velocity fluctuations were identified where droplets are exposed to the highest turbulent forces. The Reynolds number had no influence on the normalized velocity fluctuation field. The confinement of the jet started to influence the velocity field if the outlet channel diameter is smaller than 10 times the diameter of the orifice. In conclusion, the scaling approach offers advantages to study very fast processes on very small spatial scales in detail. The presented scaling approach also offers opportunities in the optimization of the geometry of the disruption unit. However, the results also show challenges of each size scale, which can come from the respective production, measurement technology or experimental design. Depending on the problem to be investigated, we recommend conducting experimental studies at different scales. Introduction Emulsions have a wide field of application and can frequently be found, among others, in the chemical, pharmaceutical and food industry. Their properties, such as stability, viscosity or color, can be influenced by the droplet size distribution (DSD) which is set during the production process. Emulsions with a low to medium viscosity of 1-200 mPa·s are mostly produced with the high-pressure homogenization (HPH) process [1], during which droplet diameters of less than 1 µm can be achieved. A high-pressure homogenizer consists of a high-pressure pump and a subsequent disruption unit. During the process, a pre-emulsion that still has large droplets is pumped through the disruption unit at a pressure of several hundred bar.
In the disruption unit, the emulsion's flow is strongly accelerated by a reduction of the cross-section where the droplets are exposed to shear and elongational strain stresses [2], which then deform the droplets to either ellipsoids or thin threads [3], depending on the process conditions before leaving the disruption unit as a limited free jet at the outlet. The deformed droplets are exposed to turbulent viscous and turbulent inertia forces [4,5] in the turbulent free jet, where finally the droplet breakup takes place [3,6]. Bisten et al. [2] and Kelemen et al. [7] already investigated the flow pattern in modified orifice-type HPH disruption units. They located the areas with the highest strains in the inlet region and the highest velocity fluctuations at the outlet. Comparable investigations in scaled high pressure homogenizer flat valves were made by Håkansson et al. [8] and Innings et al. [9]. The breakup of the droplets has already been investigated in detail under stationary laminar conditions [10][11][12][13]. However, these findings cannot be transferred 1:1 to the complex flow field in a high-pressure homogenizer. The stresses on the droplets are continuously changing during their passage through the disruption unit and thus no stationary conditions are achieved. The influence of the process parameters and geometry on the resulting stress history as well as the influence of this stress history on the droplet breakup itself are the focus of current research [14][15][16][17][18]. Kelemen et al. [3] and Bisten et al. [2] have visualized the droplet deformation and breakup in modified optically accessible orifices. Innings et al. [6] have performed comparable investigations in an optically accessible high-pressure homogenizer flat valve. Since the dimensions of the disruption unit is in the micrometer scale and the droplet breakup takes place in the nanometer scale at short time scales, the resolution limit of current measuring systems is reached. Furthermore, high pressures and high velocities restrict the inside view of the process. Therefore, scaling of the process is a promising approach to get a better understanding of the droplet breakup mechanism in high-pressure homogenizers (HPH). First investigations on the scalability of breakup processes were made by Walzel [19]. As there is no widely accepted approach on which dimensionless numbers have to be kept constant when scaling the droplet break in an HPH, a number of approaches were used in the past. Kolb et al. [20] only maintained the geometrical similarity and the Reynolds number in the smallest gap of the orifice. Budde et al. [21] further included the viscosity ratio, the density ratio and the Weber number. Innings et al. [5] additionally included turbulent quantities, for example, the Kolmogorov length, in their scaling approach. Although the three investigations used different viscosity ratios and Reynolds numbers, they all showed similar droplet behavior downstream from the constriction. The deformed droplets were further deformed into complex basketlike shapes when entering the turbulent mixing area of the free jet and subsequently broke up in many small droplets [5]. However, the scaled system of Kolb et al. [20] resulted in a relaxation of the deformed droplet before leaving the constriction. In contrast to that, the scaled system of Budde et al. [21] resulted in significant higher deformation of the droplets when leaving the orifice compared to the other investigations. 
The differences in the droplet deformation might be caused by differences in the geometry of the disruption unit and in the process parameters used, for example, the viscosity ratio. Håkansson [22] has already shown that variations in droplet breakup in differently scaled high-pressure homogenizers are presumable if the similarity of the process parameters is not obtained. Therefore, complete physical and geometrical similarity of the homogenization system should be achieved. Using a similarity approach, this investigation concentrated on scaling factors for droplet breakup mechanisms of oil-in-water emulsions in high-pressure orifices. By using this similarity approach, the dimensionless numbers characteristic for the droplet breakup (Reynolds number Re, Weber number We, density ratio κ, viscosity ratio λ) were maintained in order to preserve all processes involved in the droplet breakup. Two scaling factors were chosen, namely 5 and 50. The original scale orifice (scaling factor of 1) had dimensions that correspond to typical laboratory high-pressure homogenizers. The scaling approach used by Budde et al. [21] was used here to determine the geometry dimensions and the material properties while maintaining all relevant quantities. Flow patterns and the transient droplet breakup were visualized in detail. Flow patterns were characterized in more detail and compared on a nondimensional basis, following the hypothesis that droplets should move in equal flow fields in all scales to guarantee equal conditions for droplet breakup. Droplet Breakup Scaling Theory When investigating the droplet breakup, a droplet with a viscosity η_d, a density ρ_d, an interfacial tension γ and a diameter x_0 is considered. The droplet flows with a velocity u through a disruption unit with a diameter of the smallest cross-section d in a surrounding fluid with a viscosity η_c and a density ρ_c. To simplify the investigation, the Buckingham π theorem [23] is used to reduce the variables to dimensionless numbers that describe physical relations of the droplet breakup. The seven influencing variables can be combined into a single functional relationship. As the rank of the dimensional matrix is three (length, time, weight), four dimensionless numbers can be formed. These are the Reynolds number Re, the Weber number We, the density ratio κ and the viscosity ratio λ. These dimensionless numbers have to be kept constant when scaling the droplet breakup to achieve physical similarity of the original scale (index O) and the scaled system (index M). Physical similarity therefore requires Re_O = Re_M, We_O = We_M, κ_O = κ_M and λ_O = λ_M. Furthermore, the ratios of the geometry dimensions need to be kept constant. Geometrical similarity requires that ratios such as D_in/d and D_exit/d are identical for the original and the scaled system. Here, D_in is the width of the inlet channel and D_exit is the width of the outlet channel, respectively. A scaling factor Ψ is introduced to describe the ratio of the droplet diameters of the original and the scaled system. Analogously, the scaling factors Ψ_ρ for the density, Ψ_γ for the interfacial tension and Ψ_η for the viscosity are introduced. The density and the interfacial tension should be kept constant when scaling the droplet breakup to reduce the degrees of freedom of the system of equations, which results in Ψ_ρ = Ψ_γ = 1. When using Equations (6) and (8), the scaling factor Ψ_η can be written as a function of Ψ (Equation (9); a likely form is sketched below). With Equation (9) the desired viscosity of the scaled system can easily be calculated according to the scaling factor of the system. The resulting Sauter mean diameter ratio x_32/x_0 should remain constant with the scaling approach used.
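The displayed similarity conditions and Equation (9) are not reproduced in this excerpt. Under the stated constraints (Re, We, κ and λ held constant, Ψ_ρ = Ψ_γ = 1, all lengths scaled by the factor Ψ) they most likely take the following form; this is a sketch of the expected relations, assuming Re = ρ_c u d/η_c and We = ρ_c u² x_0/γ, not the paper's exact equations.

\begin{align*}
  \mathrm{Re}_O = \mathrm{Re}_M,\quad \mathrm{We}_O = \mathrm{We}_M,\quad
  \kappa_O = \kappa_M,\quad \lambda_O = \lambda_M &\qquad \text{(physical similarity)}\\
  \frac{D_{\mathrm{in}}}{d},\; \frac{D_{\mathrm{exit}}}{d},\; \frac{l}{d}
  \;\text{equal in both systems} &\qquad \text{(geometrical similarity)}\\
  \mathrm{We\ const.}\;\Rightarrow\; u_M = u_O\,\Psi^{-1/2},\qquad
  \mathrm{Re\ const.}\;\Rightarrow\; \eta_M = \eta_O\,\Psi^{1/2}
  \;\Rightarrow\; \Psi_\eta = \Psi^{1/2} &\qquad \text{(likely form of Equation (9))}\\
  t_M \sim d_M/u_M = \Psi^{3/2}\, d_O/u_O &\qquad \text{(dwell time)}
\end{align*}

The last line is consistent with the dwell-time scaling quoted next, and the viscosity relation agrees with the roughly √5- and √50-fold higher viscosities reported for the scaled material systems in the Materials section.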
Budde et al. [21] have shown with a similar approach that the dwell time is extended with a scaling factor of Ψ^(3/2). As cavitation and the diffusion of the emulsifier, for example, cannot be scaled, cavitation was always suppressed by applying a backpressure [24]. As the usage of an emulsifier is sometimes necessary to ensure droplet stability during the experiments, some deviations in the transient surface properties may appear. Materials An overview of the ingredients and the properties of the dispersed and continuous phases of the three scales can be found in Table 1. The continuous phase for velocity measurements of the original scale system (Ψ = 1) consisted of five components in total: 65.7 wt % demineralized water and 34 wt % sucrose form the basis. To this, 0.2 wt % potassium sorbate (VWR International GmbH, Darmstadt, Germany) and 0.1 wt % citric acid (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) were added to inhibit the growth of molds during storage between experiments; 0.00125 wt % Nile red coated polystyrene particles with a diameter of 1.97 µm (microparticles GmbH, Berlin, Germany) were added to the continuous phase to visualize the streamlines. These particles have a comparable density to the continuous phase and are small enough to follow the streamlines of the flow while not influencing it. This composition was found suitable to achieve an adequate particle concentration in the interrogation areas of the particle image velocimetry (PIV) algorithm [2] that was applied. For later droplet visualization experiments within the original scale system (Ψ = 1), the continuous phase consisted of 65.5 wt % demineralized water, 34 wt % sucrose, and 0.5 wt % polysorbate 20 (Tween 20®, Carl Roth, Karlsruhe, Germany), which prevents coalescence of droplets. For this mixture, Newtonian flow behavior was determined with a rotational rheometer (Anton Paar Physica MCR 301, Graz, Austria) at a temperature of 20 °C in the shear rate range of 0.1-100 s⁻¹. A dynamic viscosity of 0.00425 Pa·s was measured. As the disperse phase fraction was below 1 wt % for all experiments, it was expected that Newtonian flow behavior was also present at higher shear rates during the process [25,26]. The density of the continuous phase was determined with the density determination set DIS 11 (DCAT11, dataphysics, Filderstadt, Germany) to be 1145.3 kg/m³ at 20 °C. A mixture of two middle-chain triglycerides was used as the disperse phase. The oils Miglyol 810® (IOI Oleo GmbH, Witten, Germany) and Miglyol 840® (IOI Oleo GmbH, Witten, Germany) were mixed in a ratio of 41:59. Added to these was 0.012 wt % of the fluorescent dye Nile red (9-(diethyl-amino)benzo[a]phenoxazin-5(5H)-one, Sigma-Aldrich Chemie GmbH, St. Louis, MO, USA), which was dissolved in the oil mixture and stirred overnight. Any undissolved Nile red crystals were removed by filtering the next day. Subsequently, the dynamic viscosity of the disperse phase was measured with a rotational rheometer (Anton Paar Physica MCR 301, Graz, Austria) at a temperature of 20 °C to be 0.0149 Pa·s, with Newtonian flow behavior. A Wilhelmy plate (DCAT11, dataphysics, Filderstadt, Germany) was used to measure the interfacial tension between the continuous (for droplet visualization) and the disperse phase.
For the interfacial tension, a value of 4.316 mN/m was determined after a measuring time of 2 h at a temperature of 20 °C, while a density of 928.33 kg/m³ was measured for the disperse phase with the density determination set DIS 11 (DCAT11, dataphysics, Filderstadt, Germany) at 20 °C. Five-Fold Scaled System The continuous phase for velocity measurements of the 5-fold scaled system (Ψ = 5) consisted of 58.823 wt % glycerol (purity 99.5%, SuboLab GmbH, Pfinztal-Söllingen, Germany) and 40.877 wt % demineralized water, to which 0.2 wt % potassium sorbate (VWR International GmbH, Darmstadt, Germany) and 0.1 wt % citric acid (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) were added as preservatives. Additionally, 0.00255 wt % Nile red coated polystyrene particles with a diameter of 12 µm (microparticles GmbH, Berlin, Germany) were added. For this composition, Newtonian flow behavior was determined. A dynamic viscosity of 0.00942 Pa·s was measured with a rotational rheometer (Anton Paar Physica MCR 301, Graz, Austria) at a temperature of 20 °C. A middle-chain triglyceride, Miglyol 812® (IOI Oleo GmbH, Witten, Germany), with 0.012 wt % Nile red (9-(diethyl-amino)benzo[a]phenoxazin-5(5H)-one, Sigma-Aldrich Chemie GmbH, St. Louis, MO, USA) was used as the disperse phase of the 5-fold scaled system. The Nile red was dissolved using the same procedure as in the original scale system. The dynamic viscosity of the disperse phase was 0.02947 Pa·s with Newtonian flow behavior, which again was measured with a rotational rheometer (Anton Paar Physica MCR 301, Graz, Austria) at a temperature of 20 °C. Parallel to the original scale system, a Wilhelmy plate (DCAT11, dataphysics, Filderstadt, Germany) was used to measure the interfacial tension between the continuous (for droplet visualization) and the disperse phase, which was determined to be 3.986 mN/m after a measuring time of 2 h at a temperature of 20 °C. The density of the disperse phase was 920 kg/m³ according to the supplier's datasheet. Fifty-Fold Scaled System Newtonian flow behavior was identified for the continuous phase of the 50-fold scaled system (Ψ = 50). A dynamic viscosity of 0.0314 Pa·s was measured with a rotational rheometer (Anton Paar Physica MCR 301, Graz, Austria) at a temperature of 20 °C, and a density of 1145.4 kg/m³ at 20 °C was determined with the density determination set DIS 11 (DCAT11, dataphysics, Filderstadt, Germany). The silicone oil WACKER® AK 100 (Wacker Chemie AG, Stuttgart, Germany) was used as the disperse phase, which had a density of 960 kg/m³ according to the supplier's datasheet. Its dynamic viscosity was again measured with a rotational rheometer (Anton Paar Physica MCR 301, Graz, Austria) at a temperature of 20 °C to be 0.1066 Pa·s. Experimental Setup The experimental setup of the original scale system with an optically accessible orifice is presented in Figure 1. A nitrogen pressurized gas cylinder (a) was used to pump the continuous phase into the pressure vessel (b) through the pipe system. Due to the pressurized gas cylinder, no pressure fluctuations were expected, which allowed velocity measurements under constant conditions. A filter unit behind the pressure vessel's exit was used to avoid blocking of the orifice by dirt particles. The pressure loss of the orifice, ∆p = p_in − p_bp, was measured with two digital pressure sensors (Wika S-20 (0-160 bar), Klingenberg, Germany).
Their current signal was converted into a voltage signal, which was then recorded with a USB-6210 device (National Instruments Germany GmbH, München, Germany) and Labview 2019 (National Instruments, Austin, TX, USA). The needle valve (d) was used to adjust the backpressure p_bp to the desired Thoma number Th = 0.3, under which conditions no cavitation was visible on the images that were recorded with the high-speed camera of the microparticle image velocimetry (µ-PIV) measurement system. For the duration of a complete experiment, the mass flow was determined by continuously reading off the scale with Labview while the Reynolds number in the smallest cross-section of the orifice was kept constant. The experimental setup of the 5-fold scaled system with an optically accessible orifice is presented in Figure 2. As in the original scale setup, the pressure vessel (b) was pressurized with a nitrogen pressurized gas cylinder (a). The continuous phase was filtered before passing the volume flow sensor (VSI 0,2/16 EPO 12V-32W15/4, VSE Volumentechnik GmbH, Neuenrade, Germany) and the temperature sensor. The volume flow and the current temperature of the continuous phase were recorded with a USB-6210 device (National Instruments Germany GmbH, München, Germany) and Labview 2019 (National Instruments, Austin, TX, USA). The inlet pressure p_in of the orifice and the backpressure p_bp at the outlet channel were measured with two digital pressure sensors (Wika S-20 (0-160 bar), Klingenberg, Germany) and again recorded with the USB-6210 device and Labview. The continuous phase was additionally pumped through a bypass towards the droplet generator (d), which was not in use when velocity measurements were performed. After flowing through the droplet generator, the fluid was reinjected in front of the orifice with a stainless-steel capillary. The needle valve (f) was used to achieve the desired backpressure p_bp for the Thoma number Th = 0.3.
The experimental setup of the 5-fold scaled system with an optically accessible orifice is presented in Figure 2. As in the original scale setup, the pressure vessel (b) was pressurized with a nitrogen pressurized gas cylinder (a). The continuous phase was filtered before passing the volume flow sensor (VSI 0,2/16 EPO 12V-32W15/4, VSE Volumentechnik GmbH, Neuenrade, Germany) and the temperature sensor. The volume flow and the current temperature of the continuous phase were recorded with a USB-6210 device (National Instruments Germany GmbH, München, Germany) and Labview 2019 (National Instruments, Austin, TX, USA). The inlet pressure p_in of the orifice and the backpressure p_bp at the outlet channel were measured with two digital pressure sensors (Wika S-20 (0-160 bar), Klingenberg, Germany) and again recorded with the USB-6210 device and Labview. The continuous phase was additionally pumped through a bypass towards the droplet generator (d), which was not in use when velocity measurements were performed. After flowing through the droplet generator, the fluid was reinjected in front of the orifice with a stainless-steel capillary. The needle valve (f) was used to achieve the desired backpressure p_bp for the Thoma number Th = 0.3.
The experimental setup of the 50-fold scaled system with an optically accessible orifice is presented in Figure 3. The continuous phase is circulated by a centrifugal pump (Grundfos, Bjerringbro, Denmark). The centrifugal pump can be controlled continuously by a frequency converter, so that the inlet pressure and thus the flow velocity in the plant can be adjusted. The measuring section is made of glass or acrylic glass so that there is extensive optical access. The inlet channel upstream of the orifice has a circular cross-section while the outlet channel downstream of the orifice has a square cross-section. The low-pressure side of the orifice plate is atmospherically vented so that there is no back pressure. The pressure sensors for measuring the pressure drop of the orifice are placed at the inlet of the inlet channel and at the end of the outlet channel, respectively. The distance between the measurement positions is about 1 m. The outlet channel dimension can be altered by placing smaller square channels in the outlet channel that are sealed to the outlet wall of the orifice. The dissipated energy of the pump, which would result in a heating of the fluid in the test site, is removed with a heat exchanger to ensure constant process conditions.
Figure 4 shows the geometry of both the original and 5-fold scaled orifice (a), and the 50-fold scaled orifice (b). The square inlet channel has a constant width of 2 mm (Ψ = 1) and 10 mm (Ψ = 5). In contrast, the inlet channel of the 50-fold scaled orifice has a circular cross-section with a diameter of 100 mm (Ψ = 50). The conical inlets of the original and 5-fold scaled orifices each have an angle of 60°, while the inlet of the 50-fold scaled orifice was designed with a radius of 20 mm. This was done since the orifice with round edges resulted in better optical accessibility, as shown in Figure 4d. The round edges did not cause a different velocity profile at the outlet compared to a conical inlet [27]. The smallest cross-sections, being circular, have diameters of d = 0.2 mm (Ψ = 1), d = 1 mm (Ψ = 5) and d = 10 mm (Ψ = 50), respectively. The length-to-diameter ratio l/d = 2 of the smallest cross-section was kept constant for all scales. The square outlet channel of the original scale orifice has widths of 2, 3 and 4 mm, while the 5-fold scaled one has a diameter of 10 mm. The 50-fold scaled orifice again has a square outlet channel with widths of 50, 100, 150 and 200 mm. The exit of each orifice is always in the center of the outlet channel. Therefore, the inlet channel of the original scale orifice needs to lie below the surface by a distance d_c. This distance is compensated by the acrylic glass cover plate to maintain the square cross-section of the inlet channel.
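The geometric similarity across the three scales can be checked directly from the dimensions given above. The short sketch below is illustrative only; it uses the values from the text, keeps l/d = 2 and reports the outlet channel diameter ratios D_exit/d that are compared later.

```python
# Illustrative check of geometric similarity for the three scales (values from the text).
scales = {
    1:  {"d_mm": 0.2,  "outlet_widths_mm": [2, 3, 4]},
    5:  {"d_mm": 1.0,  "outlet_widths_mm": [10]},
    50: {"d_mm": 10.0, "outlet_widths_mm": [50, 100, 150, 200]},
}

for psi, geo in scales.items():
    d = geo["d_mm"]
    l = 2 * d  # length-to-diameter ratio l/d = 2 kept constant for all scales
    ratios = [w / d for w in geo["outlet_widths_mm"]]
    print(f"Psi = {psi:2d}: d = {d} mm, l = {l} mm, D_exit/d = {ratios}")
```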
The original scale and the 5-fold scaled orifice are made of stainless steel, the inlet channel and the orifice with the outlet channel being produced separately. Steel construction makes it impossible to make either the conical inlet or the smallest cross-section optically accessible. Both parts are screwed together and sealed with glue (Loctite 3472, Henkel AG & Co. KGaA, Düsseldorf, Germany). After a polishing step, the orifice is then sealed with an acrylic glass plate that has a protruding bar with a height d_c, as depicted in Figure 4c, which, depending on the outlet geometry, fits into the inlet channel and creates a quadratic inlet channel. An overview of all dimensions can be found in Table 2.
µ-PIV Measurements
A microparticle image velocimetry (µ-PIV) measurement system was used to determine the velocity field in the scales Ψ = 1 and Ψ = 5. A CCD camera (FlowSense 4M Camera Kit, Dantec Dynamics, Skovlunde, Denmark) with 12-bit resolution and 2048 × 2048 pixels was mounted on an inverse microscope (Dantec HiPerformance Microscope, Skovlunde, Denmark) and used to take double pictures. The laser beam of a double-pulsed neodymium-doped yttrium aluminum garnet (Nd:YAG) laser (Dual-Power 30-15, Dantec Dynamics, Skovlunde, Denmark), which was operated at 30 mJ/pulse at a wavelength of 532 nm, was conducted to the microscope with a light guide. The laser and the camera were synchronized. For the original scale orifice (Ψ = 1), a camera adapter with a 0.5× magnification combined with an objective lens (C PLAN, Leica Microsystems Wetzlar GmbH, Wetzlar, Germany) with a 10× magnification and a numerical aperture of NA = 0.22 was used. The same camera adapter with a 0.5× magnification was combined with an objective lens (HC Plan Fluotar 2.5×/0.07, Leica Microsystems Wetzlar GmbH, Wetzlar, Germany) with a 2.5× magnification to record the experiments in the 5-fold scaled orifice (Ψ = 5). This resulted in a 5× magnification for the original scale and in a 1.75× magnification for the 5-fold scaled system. The visual field of the original scale setup has an area of about 3 × 3 mm², which corresponds to a spatial resolution of 1.5 µm/px. In comparison, the visual field of the 5-fold scaled setup has an area of about 12 × 12 mm², which corresponds to a spatial resolution of 6 µm/px. A minimum of 2000 double pictures was taken in each measurement run for statistical convergence. The pictures were processed with the commercial software Dynamic Studio 6.10 (Dantec Dynamics, Skovlunde, Denmark); an average picture was calculated and subsequently subtracted from the images to increase the signal-to-noise ratio. Afterwards, an adaptive particle image velocimetry (PIV) algorithm was used to calculate the velocity vector map for the original scale orifice. The grid size distance was set to 8 pixels and the interrogation area was set in the range of 128 × 128 pixels to 16 × 16 pixels to achieve a minimum of 5 particles within each interrogation area. The interrogation area of the second frame of the double picture was moved according to the velocity gradient. A vector was assumed to be valid if the signal-to-noise ratio during the cross-correlation was larger than 7. In a next step, the velocity vector maps were combined into an average velocity vector map, whereby only valid velocity vectors were used for the calculation. Outlier vectors were detected in a 3 × 3 neighborhood and substituted by the neighborhood median vector. Between 350 and 450 valid vectors were found on every grid point.
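The outlier treatment described above can be sketched as follows. This is an illustrative re-implementation, not the Dynamic Studio code; in particular, the detection criterion (deviation relative to the neighborhood median magnitude) is an assumption, since the exact criterion is not given in the text.

```python
import numpy as np

def replace_outliers_3x3(u, v, threshold=2.0):
    """Replace vectors deviating from their 3x3 neighborhood median (illustrative sketch)."""
    u_f, v_f = u.copy(), v.copy()
    rows, cols = u.shape
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(i - 1, 0), min(i + 2, rows)
            j0, j1 = max(j - 1, 0), min(j + 2, cols)
            u_med = np.median(u[i0:i1, j0:j1])
            v_med = np.median(v[i0:i1, j0:j1])
            deviation = np.hypot(u[i, j] - u_med, v[i, j] - v_med)
            reference = max(np.hypot(u_med, v_med), 1e-12)
            if deviation > threshold * reference:   # assumed detection criterion
                u_f[i, j], v_f[i, j] = u_med, v_med
    return u_f, v_f
```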
The velocity vector map of the 5-fold scaled orifice was calculated with a self-developed particle tracking algorithm in MATLAB 2019b (MathWorks, Natick, MA, USA), based on the nearest neighbor principle, as described by Ohmi and Li [28]. The found vectors were binned in 10 × 10 pixel areas with a grid size of 10 pixels. This resulted in a vector spacing of about 60 µm. Within the bins, vectors that were outside one standard deviation were treated as outliers and subsequently removed from the calculation. In the next step, the average velocity and the standard deviation were recalculated on every grid point. Between 50 and 100 valid vectors were found on every grid point. The correct level of the z-plane was determined by measuring the velocity map at the outlet of the orifice at several levels with a distance of 0.1·d. The layer in which the highest velocity and the thinnest turbulent shear layer were present was set as the measuring z-plane for future measurements. This procedure was repeated for every orifice geometry, while all measurements were performed in the x-y plane. The orifice was moved relative to the objective lenses with a SCAN IM 120 × 100 − 2 mm scanning stage (Märzhäuser Wetzlar GmbH & Co. KG, Wetzlar, Germany) to measure the velocity map farther downstream from the orifice exit. The orifice itself was moved a distance of 5·d between two measurement runs to ensure a sufficient overlap of the measurement areas. The velocity maps of the sections were interpolated on a new grid and, wherever overlapping, averaged with MATLAB 2019b (MathWorks, Natick, MA, USA).
PIV Measurements
Particle image velocimetry (PIV) measurements were carried out to measure the velocity fields in scale 50. Six sCMOS cameras (pco.edge 5.5, PCO AG, Kelheim, Germany) with a 16-bit resolution and 2160 × 2560 pixels (pixel size: 6.5 µm × 6.5 µm) were used synchronously in double image mode, so that a field of view with a large aspect ratio could be recorded (34 mm × 6.9 mm). The optical magnification is about 4.7. The cameras were equipped with Makro-Planar lenses with a focal length of 100 mm (Carl Zeiss, Oberkochen, Germany). Hollow glass spheres with an average particle size of 16 µm were used as seeding particles. The density of the glass spheres was matched to that of water. The illumination was performed by an Evergreen 200 Nd:YAG double-pulse laser (Quantel, Lannion, France) with a pulse energy of up to 200 mJ, focused through lenses into a light sheet of about 1 mm thickness. Two thousand double images were recorded with a recording frequency of 7 Hz. The evaluation of the double images was performed with the commercial software DaVis 8.4 (LaVision, Göttingen, Germany) with an iterative evaluation algorithm. The final interrogation window size was 16 × 16 pixels with 50% overlap. The vector resolution was about 4.1 px/mm. The postprocessing and stitching of the individual camera fields of view was done with MATLAB 2019b (MathWorks, Natick, MA, USA).
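A minimal sketch of the stitching step used for the sectioned measurements described above, assuming each section provides a velocity profile together with its axial coordinates: the sections are interpolated onto a common grid and averaged where they overlap. The actual MATLAB routines are not reproduced here; the function and example values are illustrative only.

```python
import numpy as np

def stitch_sections(sections, x_common):
    """Interpolate overlapping sections onto x_common and average overlaps (sketch)."""
    acc = np.zeros_like(x_common, dtype=float)
    cnt = np.zeros_like(x_common, dtype=float)
    for x_sec, u_sec in sections:
        inside = (x_common >= x_sec.min()) & (x_common <= x_sec.max())
        acc[inside] += np.interp(x_common[inside], x_sec, u_sec)
        cnt[inside] += 1.0
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(cnt > 0, acc / cnt, np.nan)

# Hypothetical example: two sections overlapping along the center axis.
x1, x2 = np.linspace(0.0, 15.0, 50), np.linspace(10.0, 25.0, 50)
u_stitched = stitch_sections([(x1, np.ones(50)), (x2, 0.8 * np.ones(50))],
                             np.linspace(0.0, 25.0, 120))
```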
Matching of Orifice Dimensions and Material System
Two material systems and the dimensions of the related orifices were adapted to achieve the required scaling factors. The material systems had to allow µ-PIV and PIV measurements as well as droplet visualization. Transparent fluids were necessary to monitor the movement of the seeding particles and the deformation of the droplets. Both the disperse and the continuous phase need to have Newtonian flow behavior. Furthermore, the fluorescent dye Nile red had to be soluble in the disperse phase while being insoluble in the continuous phase of the 5-fold scaled system (Ψ = 5). These specifications resulted in a water-glycerin system for the 5-fold scaled system (Ψ = 5). The dimensions and the parameters achieved for the material system are presented in Table 3. As can be seen, an actual scaling factor of five was reached for all dimensions of the orifice. Furthermore, a scaling factor of 2.22 could be achieved for the viscosity of the continuous phase. The scaling factor for the viscosity of the disperse phase is 1.978, whereas the interfacial tension of the scaled system (Ψ = 5), with a measured value of 3.986 mN/m, resulted in a scaling factor of 0.9247. The scaling factors achieved for all dimensions (d, D, l) allow complete geometric similarity. Furthermore, the physical similarity of the viscosity, density and interfacial tension was achieved with minor deviations of the scaling factors. For the 50-fold scaled system (Ψ = 50), a water-sucrose system was found to be suitable. The resulting dimensions and material parameters are presented in Table 4. As with the 5-fold scaled system (Ψ = 5), the target scaling factor was reached for all dimensions of the orifice. The scaling factor reached for the viscosity of the disperse phase is 7.154. Furthermore, a scaling factor of 1.00 for the density of the continuous phase and a scaling factor of 1.03 for the density of the disperse phase were achieved accordingly. Scaling the viscosity of the continuous phase resulted in a value of 7.39. The water-sucrose system with AK 100 oil resulted in an interfacial tension of 20.074 mN/m, entailing an interfacial tension scaled by a factor of 4.6571. With these values for the dimensions of the orifice (d, D, l), complete geometric similarity could be reached, while difficulties arose when attempting to scale the interfacial tension properly. The use of an emulsifier to lower the interfacial tension in this system results in an accumulation of the disperse phase, preventing any measurement. The higher interfacial tension compared to the target value results in a smaller We number, which may influence the droplet breakup in the 50-fold scaled orifice. Any results from this scaled experimental setup therefore need to be interpreted with caution. It is presumed, though, that the diffusion time of the emulsifier to the newly created surface during droplet deformation is much longer than the breakup process itself [29,30]. Hence, the influence of the higher interfacial tension on the droplet may be moderate. Table 5 shows the resulting dimensionless numbers of all three scales based on a Reynolds number Re = 2000 in the gap of the disruption unit. When looking at the density ratio κ and the viscosity ratio λ, it can be noticed that the 5-fold and 50-fold scaled systems result in only slightly diverging dimensionless numbers. Closer inspection of the table, however, shows that the abovementioned failed scaling of the interfacial tension of the 50-fold scaled system results in only 23% of the target Weber number of the original scale system. Compared to that, the 5-fold scaled system results in a Weber number that is very close to the target value. As discussed before, the results from the 50-fold experimental setup therefore need to be interpreted with caution.
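The similarity requirement can be summarized in a small helper that evaluates the four droplet-relevant dimensionless numbers. This is an illustrative sketch: the standard definitions Re = ρ_c·u·d/η_c and We = ρ_c·u²·d/σ are assumed (the paper's exact Weber number definition is not repeated in this excerpt), and in the example call ρ_c and u are assumed values while the remaining properties are taken from the text above.

```python
def droplet_numbers(rho_c, eta_c, rho_d, eta_d, sigma, u, d):
    """Dimensionless numbers relevant for droplet breakup (assumed standard definitions)."""
    Re = rho_c * u * d / eta_c        # Reynolds number in the smallest cross-section
    We = rho_c * u**2 * d / sigma     # Weber number (definition assumed here)
    kappa = rho_d / rho_c             # density ratio
    lam = eta_d / eta_c               # viscosity ratio
    return Re, We, kappa, lam

# Example for the 5-fold scaled system: rho_c and u are assumed placeholder values.
print(droplet_numbers(rho_c=1150.0, eta_c=0.00942, rho_d=920.0,
                      eta_d=0.02947, sigma=3.986e-3, u=16.4, d=1e-3))
```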
Comparison of the Flow Pattern in the Orifices
At the exit of the orifice, the fluid flows with a uniform velocity into the almost quiescent surrounding fluid. A free jet is formed that consists of a potential core and an enclosing boundary layer. The potential core represents the area where the velocity remains constant. The enclosing boundary layer, which is also called the shear or mixing layer, grows with increasing distance from the orifice. Vortices are formed at the edges of the core region, which results in velocity fluctuations. Simultaneously, the diameter of the potential core decreases until it completely vanishes [31,32]. It is hypothesized that the normalized velocity on the center axis of all three scales is equal if the Reynolds number Re is kept constant. Furthermore, it is hypothesized that all three scales result in a comparable velocity fluctuation field.
Comparison of the Normalized Velocity and the Normalized Velocity Fluctuations
Figure 5 shows, among others, the normalized velocity field at Re = 5700 (c) and the normalized velocity fluctuation at Re = 5700 (d) for all three orifice scales. All velocity fields were normalized with the theoretical velocity u_Re according to Bernoulli's principle [33]. The measurements show similar velocity fields for the three scales, although the 5-fold scaled system uses a deviating outlet channel diameter ratio of D_exit/d = 10 instead of D_exit/d = 20. The influence of the outlet channel diameter ratio is discussed in Section 3.2.3. The region with a constant normalized velocity of about 1 represents the potential core, see Figure 5a,c. A shear layer surrounds the potential core where the velocity is decreased by the ambient fluid. This region with lower velocities grows with increasing distance from the orifice exit, which results in a smaller diameter of the potential core.
The energy of the jet is dissipated in the emerging turbulent eddies in the shear layer. Furthermore, the outer diameter of the jet increases downstream from the orifice by entrainment of the ambient fluid. The normalized velocity fluctuations in Figure 5b,d are concentrated in the shear layer and are higher in the original and 5-fold scaled orifice than in the 50-fold scaled orifice. These higher velocity fluctuations in the shear layer of the original and 5-fold scaled orifice may be caused by the change in the measurement system. The µ-PIV measurements were performed at a lower seeding particle density compared to the PIV measurements in the 50-fold scaled orifice to keep the background noise, which is caused by out-of-focus seeding particles, low. Therefore, fewer valid vectors could be calculated. Besides, slow particles of the ambient fluid can distort the velocity in the shear layer if the interrogation window of the adaptive PIV algorithm lies partially in the shear layer and partially in the ambient fluid. Furthermore, remaining burr at the orifice outlet edge can cause stronger and earlier velocity fluctuations in the shear layer of the original scale and 5-fold scaled orifice. In general, the normalized velocity fields of all three scales are similar, apart from some small deviations of the potential core length that can be caused by the production inaccuracies in the small scales. Furthermore, it can be concluded that the absolute values of the velocity fluctuations increase while the normalized velocity fluctuations remain constant when increasing the Re number.
Comparison of the Normalized Velocity on the Center Axis
In the following, the normalized velocity on the center axis at the exit of the orifice is determined at the three different scales for two different Re numbers. Furthermore, the restrictions of this scaling approach are highlighted. The following section illustrates the comparability of the flow pattern of the 50-fold scaled orifice with the original scale. The normalized velocity remains constant for both scales in the region of 0 ≤ x/d ≤ 3.7 at a Reynolds number Re = 5700 in Figure 6a. This area represents the potential core of the free jet where the flow develops [32]. The absolute values of the normalized velocity differ slightly between the scales, showing a value of 1.01 in the original scale (Ψ = 1), while it is 0.95 in the 50-fold scaled orifice (Ψ = 50). Figure 6a also reveals a steep decrease of the normalized velocity in the region x/d > 3.7. The absolute value of the slope at which the normalized velocity declines decreases with increasing distance from the orifice outlet (x = 0). Comparing the results farther downstream (x/d > 3.7), it can be observed that the slope is similar for both scales. The normalized velocity decreases to a value of about 0.14 at a normalized distance of x/d = 32 in both orifices. The original scale orifice (Ψ = 1) shows higher normalized velocities than the scaled orifice (Ψ = 50) within the whole investigated region. Figure 6b shows the experimental data on the normalized velocity on the center axis of the 50-fold scaled and the original scale orifice in the area 0 ≤ x/d ≤ 32 at Re = 2000. The normalized velocity remains constant in the region 0 ≤ x/d ≤ 8 for both scales, which indicates the potential core of the free jet [32]. The original scale orifice (Ψ = 1) reaches a value of about 1.02 in this region while the scaled orifice (Ψ = 50) reaches a value of about 0.95. In the region of x/d > 8, the normalized velocity within the scaled orifice differs from the original scale as the velocity decreases faster in the original scale orifice.
However, the measurements show equal normalized velocities for both orifice scales in the region of 20 ≤ x/d ≤ 32. The higher velocity in the original scale orifice at both Re numbers might either be caused by a slightly smaller diameter d of the orifice due to production inaccuracy or could be a result of a measurement error of the pressure sensors. These sensors have a higher measurement error at very low and very high pressures, potentially leading to errors, especially for the low pressure p_bp at the outlet. The lower value of the normalized velocity of 0.95 at the outlet of the 50-fold scaled orifice might be a result of the pressure sensor arrangement, as the pressure loss was measured from the inlet of the inlet channel to the outlet of the outlet channel. This could result in higher pressure losses compared to the pressure loss just over the orifice. The length of the potential core of a fully turbulent free jet should be about 5·d [32]. In contrast to the literature, a potential core length of x/d = 3.7 was found at Re = 5700. This might indicate that the jet is confined even in the wide outlet channel geometry. As the length of the potential core at Re = 2000 is double the length stated by Rajaratnam [32], it can be assumed that the jet in the outlet is not fully turbulent and rather in the transitional flow regime. Overall, these results indicate that scaling was successful for the 50-fold scaled orifice. The development of the normalized velocity on the center axis is comparable for both scales apart from the possible errors caused by the pressure sensors in both scales. Moreover, the results show that minimal production inaccuracies in the original scale can result in large deviations in the velocity profile. Figure 7a compares the normalized velocity on the center axis of the 5-fold scaled orifice with the velocity profiles of the original and the 50-fold scaled orifice at a Reynolds number of 5700. The normalized velocity of the 5-fold scaled orifice is about 0.95, with some fluctuations in the region 0 ≤ x/d ≤ 5, which represents the potential core of the emerging jet. The original scale orifice shows a normalized velocity of about 1.00 at the exit of the orifice. Unexpectedly, the velocity starts to decrease immediately downstream from the exit of the orifice, which indicates that there is no potential core. The original scale orifice shows significantly lower normalized velocities compared to the two scaled orifices in the whole investigated area. This large deviation might be affected by a slightly tilted drilling hole in the original orifice.
The unsteady steps in the velocity development are an indication for this explanation. Following the potential core, the velocity in the 5-fold scaled orifice declines parallel to that in the 50-fold scaled orifice. The normalized velocities in the 5-fold and 50-fold scaled orifices are equal at normalized distances larger than x/d = 5. The normalized velocity decreases to a value of about 0.14 at a normalized distance of x/d = 32 in both scaled orifices, whereas the original scale orifice leads to a decrease of the normalized velocity to a value of about 0.06 at x/d = 32. (In Figure 7, the 50-fold scaled orifice (Ψ = 50) has an outlet channel diameter ratio of D_exit/d = 10.) The rising and decreasing normalized velocity in the potential core of the 5-fold scaled orifice can be caused by the optical distortion of the images. Due to the still small dimensions of the orifice, it was not possible to perform an image dewarping. Taken together, these results suggest that the normalized velocity profiles of the 5-fold and the 50-fold scaled orifice are almost equal and should result in comparable droplet breakup positions. The original scale is prone to production inaccuracies (tilted drilling hole, diameter deviations, remaining burr) and the 5-fold scaled orifice shows some deviations due to optical distortion. Besides, the 5-fold scaled system shows more fluctuations, which are caused by the lower number of valid vectors of the PTV measurements due to the low seeding density compared to the PIV measurements of the 50-fold scaled system.
Influence of Confinement on the Normalized Velocity on the Center Axis
The following section illustrates the influence of the confinement on the flow pattern downstream of the original and of the 50-fold scaled orifice depending on the Reynolds number. In conclusion, the measurements reveal that the confinement of the free jet in the original scale orifice is not pronounced if the outlet diameter ratio is larger than D_exit/d > 15. In Figure 9, the orifice with an outlet channel diameter ratio of D_exit/d = 10 shows smaller values of the normalized velocity at x/d > 28 at a Reynolds number Re = 2000 compared to orifices with a wider outlet channel ratio. Likewise, the orifice with an outlet channel diameter ratio of D_exit/d = 10 shows smaller values of the normalized velocity at x/d > 24 at a Reynolds number Re = 5700.
Furthermore, the decline of the normalized velocity is more pronounced at the higher Reynolds number. As Figure 9 shows, there is a significant difference of the normalized velocity between the orifice with an outlet channel diameter ratio of D_exit/d = 5 and the orifices with an outlet channel diameter ratio of D_exit/d ≥ 10. At a Reynolds number Re = 2000, the normalized velocity decreases more strongly at a normalized distance x/d > 20 compared to the wider outlet geometries. At a Reynolds number Re = 5700, the normalized velocity decrease starts to diverge from the other outlet geometries at a normalized distance of x/d > 13. This decrease of the normalized velocity stagnates at a Reynolds number Re = 5700 in the orifices with an outlet channel diameter ratio of D_exit/d ≥ 10 and reaches a constant value of 0.024. The most striking result to emerge from the data is that the influence of the confinement in the 50-fold scaled orifice starts at an outlet diameter ratio of D_exit/d < 15 as well and therefore shows similar behavior to that in the original scale orifice. Furthermore, the results show that the influence of the confinement on the free jet is more pronounced at higher Reynolds numbers, as the velocity development in the orifice with an outlet diameter ratio of D_exit/d = 10 differs more strongly from the wider outlet geometries at a Reynolds number Re = 5700 than is the case at Re = 2000. Concluding Section 3.2.3, the 50-fold and the original scale both showed no influence of the confinement on the normalized velocity on the center axis if D_exit/d ≥ 15. Besides, further comparison of the original scale with the 50-fold scaled orifice at smaller outlet channel diameters was not possible, as the original scale is vulnerable to production inaccuracies that prevented usable measurements. Even slight geometry deviations have a large impact on the velocity profile in the original scale. In addition, the 5-fold scale orifice did not allow any change of the outlet channel geometry due to the limited working distance of the microscope.
Therefore, it is recommended to use the 50-fold scaled orifice for further investigation of the influence of the confinement on the velocity profile.
Discussion
It was hypothesized that the droplet breakup in high-pressure homogenizer orifices can be scaled with the theory of similarity. Essential for this assumption is the similarity of all droplet-relevant dimensionless numbers (Re, We, κ, λ). The current investigation showed that it is possible to reach complete geometrical similarity. The approach may be somewhat limited by the material system. Limitations of the measurement systems of the 50-fold scaled orifice prevented an accordingly scaled interfacial tension; thus, physical similarity could not be fully reached. This mismatch may cause differences when investigating the droplet deformation and breakup in detail at the 50-fold scale. Since it is assumed that the diffusion of the emulsifier in the original scale process is much slower than the breakup process [6,29,30], the influence of the emulsifier on the breakup is, however, expected to be moderate, which would still allow the comparison of the droplet breakup in all three scales. Comparing the normalized velocity on the center axis of the scaled orifices with the original scale orifice showed that the development of the normalized velocity is similar on all three scales. Caution should still be exercised when working at the original scale. Here, higher values for the normalized velocity were measured compared to the upscaled setups. The higher values could be attributed to production inaccuracies or measurement errors of the pressure sensors of the original scale. Moreover, the placement of the pressure sensors farther downstream and upstream from the 50-fold scaled orifice can cause higher measured values of the pressure drop over the orifice, which would cause lower normalized velocities. Attention should therefore be paid to identical experimental setups over the scales, even in such details as the mounting of the sensors. The original scale is also more prone to production inaccuracies compared to the scaled systems, as even small deviations of the geometry can cause large deviations in the velocity profile. The patterns of the normalized velocity fluctuations show that there are some differences between the original and the 50-fold scaled orifices, which can be caused by the less accurate measurement of the fluctuations with the µ-PIV system. The original scale and the 5-fold scaled orifice resulted in large deviations of the velocity fluctuation field compared to the 50-fold scaled orifice. The limited seeding density in the original and the 5-fold scaled systems, the generally high velocities, the high velocity gradients between the free jet and the ambient fluid, and the production inaccuracies especially impeded the velocity fluctuation measurements. Therefore, it can be concluded that a higher scale-up factor in combination with PIV measurements is advantageous and reasonable for investigating the turbulent free jet downstream from an HPH orifice. Due to production inaccuracies in the original scale and due to the limited working distance of the microscope of the 5-fold scaled system, the influence of the confinement on the velocity profile of the free jet was investigated in detail in the 50-fold scaled system. The confinement of the free jet in the outlet channel starts to influence the velocity decrease if the outlet channel diameter ratio is equal to or smaller than D_exit/d ≤ 10.
It results in a stronger velocity decrease in the area where the highest normalized velocity fluctuations are located. The influence of the confinement was more pronounced at higher Reynolds numbers. A stronger confinement may result in a better droplet stabilization, as the droplet contact time should be reduced by the stronger turbulence that causes the velocity decrease.
Conclusions
In total, it can be concluded that the scaling approach used in this study was successful in terms of showing relevant scaling strategies. It also revealed limitations of working on the different scales. For example, a small inaccuracy in the production process, which can hardly be avoided on very small scales (Ψ = 1), has an effect especially at the beginning of the free jet area. Here, velocity fluctuations in the peripheral areas of the free jet were not in agreement with the data from the other scales. This could affect later evaluations of the stress history acting on the droplets in this area. The middle scale approach (Ψ = 5) resulted in problems when measuring the velocity fluctuations due to a limited seeding particle density. In addition, the velocity measurements were influenced by optical distortion, which could not be corrected due to the still small dimensions. The measurements in the original and 5-fold scaled orifices did not allow time-resolved measurements, which are necessary for investigating the forces acting on the droplets in the turbulent eddies in the shear layer of the free jet, due to the still high velocities (up to 40 m/s). When working on very large scales (Ψ = 50), on the other hand, one has to struggle with problems caused by the resulting very large volumes that have to be handled. Not only does the experimental setup become large, but details such as temperature control also become more complex. The continuous phase must also be circulated, as it is a mixture of chemicals and cannot be discarded after one loop. This leads on the one hand to microbiological problems. On the other hand, for later work with droplets, a droplet separator must be provided, which removes even the smallest daughter droplets, which would otherwise disturb the signal-to-noise ratio of the PIV measuring system. In general, however, the 50-fold scaled approach is the most promising scale for precise measurement of the velocity and the velocity fluctuations due to the highest number of valid velocity vectors that can be calculated. The results of this study can be used to determine the turbulent stress history of droplets following a defined trajectory. It is also possible to link the location of droplet breakup with the turbulent forces acting on the droplet at this location. In this way, it will be possible in the future to study droplet breakup in detail under transient, rapidly changing conditions. In addition, the scaling approach presented allows for a more targeted development of optimized disruption units, e.g., by working on details of the inlet flow. Investigations into droplet breakup mechanisms on larger scales will also simplify or enable analyzing, for example, the influence of emulsifiers or droplet-internal flow. Future studies will therefore focus on the visualization of the droplet breakup at the different scales (Ψ = 1, 5 and 50).
Acknowledgments: The authors thank Dennis Scherhaufer and Heinz Lambach of the Institute for Micro Process Engineering (Micro Apparatus Engineering-FAB) for manufacturing of the original scale orifice.
We also thank Jürgen Kraft, Markus Fischer and Annette Berndt for design of the test site and assistance during the experiments. We thank Ralf Dorsner (wbk Institute of Production Science) and Wolfgang Schäfer (Institute for Applied Materials-Materials Science and Engineering) for polishing the orifices. The authors thank Dieter Waltz and Peter Fischer of the Institute of Physical Chemistry for manufacturing the 5-fold scale orifice. The authors also thank Thomas Fuchs (Institute of Fluid Mechanics and Aerodynamics, Faculty of Aerospace Engineering, Universität der Bundeswehr München) for providing the PTV algorithm. Furthermore, the authors thank the IOI Oleo GmbH for providing the oil. The authors also thank Peter Walzel for fruitful discussions on the concept of this study. We acknowledge support by the KIT-Publication Fund of the Karlsruhe Institute of Technology. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
14,600.6
2021-02-07T00:00:00.000
[ "Physics" ]
Physical Basis of Magnetic Resonance Spectroscopy and its Application to Central Nervous System Diseases
Magnetic Resonance Spectroscopy is based on the chemical shift property of atomic nuclei when a magnetic field is applied. This technique offers invaluable information about living tissues with a special contribution to the diagnosis and prognosis of central nervous system diseases. Concentrations of several metabolites can be assessed in a reproducible manner by means of modern clinical scanners. N-acetyl-aspartate is regarded as a neuronal marker and its levels reflect the neuronal density, with significant decreases in degenerative diseases such as Alzheimer's disease. Choline compounds reflect the cell's membrane turnover and degradation. Myo-inositol has emerged as a glial marker with increases in degenerative diseases. The major usefulness of MRS has been reported in brain tumors, degenerative disorders, myelination defects and encephalopathies. In this review we report the physical basis and the contribution of MR spectroscopy to the diagnosis and prognosis of several diseases of the Central Nervous System.
Key words: Central nervous system diseases, magnetic resonance spectroscopy, pathophysiology
Brief history and physical basis of magnetic resonance imaging: Imaging human internal organs with exact and non-invasive methods is very important for medical diagnosis, treatment and follow-up as well as for clinical research. Today, one of the most important tools for this purpose is the MRI. MRI scanners are based on the discovery of the nuclear magnetic resonance (NMR) phenomenon that was detected independently by Bloch [1] and Purcell [2] in 1946. They discovered that an atomic nucleus with unpaired protons in a strong magnetic field rotates with a frequency depending on the strength of the magnetic field and the nature of the atom. If it is submitted to a radio frequency (RF) field of this particular frequency, which is the resonance frequency, it absorbs energy, and when the RF field is removed this energy is emitted through an electromagnetic wave of the resonance frequency. For this discovery Bloch and Purcell were awarded the Nobel Prize in Physics in 1952. The MR phenomenon was initially used mainly for studies of the chemical structure of substances. The first two-dimensional magnetic resonance (MR) images were reconstructed in 1973 by Lauterbur [3]. By introducing gradients in the magnetic field he made it possible to determine the origin of the emitted RF signals. The same year, independently of Lauterbur, Mansfield and Grannell demonstrated the Fourier relationship between the spin density and the NMR signal acquired in the presence of a magnetic field gradient [4]. These discoveries were groundbreaking and led to the currently used application of MR in medical imaging. It also led to the Nobel Prize in Medicine for Lauterbur and Mansfield in 2003. Even more Nobel Prizes have been attributed for discoveries in the field of MR imaging. Richard Ernst was awarded the Nobel Prize in Chemistry in 1991 for his contributions to the further development of the methodology of high-resolution nuclear MR spectroscopy in 1975.
The MRI medical scanners have been available since 1980, and since then the use of MR scanners has rapidly increased; in 2002, there were approximately 22,000 in use worldwide and more than 60 million MRI examinations were performed. Compared with other imaging modalities MR has many advantages: first of all, it is non-invasive and to present knowledge has no secondary effects. It provides an amazingly strong imaging contrast between tissues and it can, as we will see in this chapter, be adapted to image other physical phenomena. Today, the most frequently used MRI method in medicine is the anatomical MRI designed to differentiate tissue structures. It is used for basically any part of the body: brain, knees, arms, etc. Another more recent imaging method is the functional MRI (fMRI) for mapping of activation patterns in the brain. This is an important modality for a better understanding of function. When a brain region is activated, new energy must be transported to this region, which leads to an increased blood flow in this part of the brain. This can be imaged by repetitive MR scans and detected by appropriate signal processing methods. A brief description of the principle governing the generation of MRI is presented. The magnetic resonance phenomenon can be described by both classical and quantum mechanical approaches. In this paper, the classical approach is used for the task of simplicity, although NMR can be more accurately analyzed by quantum mechanics. Physical principles of MRI: Protons, neutrons and electrons have an angular momentum known as spin. Each spin can have the values ±1/2, 3/2, 5/2. Since spins in atomic nuclei with an even number of protons will cancel each other, only atoms with an odd number of protons have a net spin, which is necessary for being NMR-active. The most typical nucleus to use in the NMR experiment is the hydrogen nucleus, ¹H, which has the spin states ±1/2. Since the signal of one spin is impossible to measure, spins are in general considered as an ensemble and are described in terms of precession around a spin magnetization vector, M. When no external field is applied, the spins are randomly distributed between the spin-up (+1/2) and spin-down (-1/2) positions and the net spin of the ensemble of spins therefore equals zero. In the presence of an external magnetic field B0 (a polarizing field), the spin magnetization vector M will align itself with the field and the spins start precessing around B0. The frequency of precession is the natural resonance frequency of the spin system. This resonance frequency is known as the Larmor frequency ω0 = γB0, where γ is the gyromagnetic ratio specific for each kind of nucleus. When considering a spin system we will define a laboratory frame in which M appears to be stationary and aligned with B0. The axis along which B0 acts is the longitudinal z-axis, and the plane orthogonal to the z-axis is the transverse xy-plane. For a more detailed description, see [5]. To obtain a measurable signal from the experiment, the system must absorb energy that can later be emitted and measured. The absorption of energy is made by exciting the system with another time-varying rotating magnetic field, an RF pulse B1, acting perpendicular to B0 and oscillating with the Larmor frequency, ω0. The RF pulse tilts M away from the z-axis and M starts precessing about the rotating B1 field. The tip angle between M and the z-axis is dependent on the duration of the RF pulse. For the time τ, the angle is given by α = γB1τ.
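As a worked example of the flip-angle relation α = γB1τ: for protons (γ/2π ≈ 42.58 MHz/T) and an assumed RF amplitude B1 of 10 µT (an illustrative value, not taken from the text), the duration of a 90° pulse follows directly.

```python
import math

gamma = 2 * math.pi * 42.58e6   # gyromagnetic ratio of 1H in rad/(s*T)
B1 = 10e-6                      # assumed RF field amplitude in tesla (illustrative value)

tau_90 = (math.pi / 2) / (gamma * B1)   # alpha = gamma * B1 * tau  ->  tau = alpha / (gamma * B1)
print(f"90-degree pulse duration: {tau_90 * 1e3:.2f} ms")   # about 0.59 ms
```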
The cases α = 90º and α = 180º are the most commonly used in NMR imaging and are called a 90º pulse and a 180º pulse, respectively. When the RF pulse is removed, M falls back to its initial position aligned with B0; this process is called relaxation. It is during relaxation that the energy added to the system by the RF pulse is emitted and produces an RF signal, the free induction decay (FID). This signal can be measured by an antenna or receiver coil and interpreted, as we will see later, to generate the image. The relaxation process comprises two kinds of relaxation: spin-lattice (longitudinal) relaxation and spin-spin (transverse) relaxation. Spin-lattice relaxation involves the exchange of energy between the spin system and its surroundings. The equilibrium state is reached when the magnetization vector M is aligned with the longitudinal B0; the time for the system to reach equilibrium is the spin-lattice relaxation time, described by the time constant T1. Spin-spin relaxation is the process by which the spins come to thermal equilibrium with themselves; it is also called transverse relaxation and is described by the time constant T2. Differences in the physical properties of the different tissue types are reflected in their relaxation times, and it is this mechanism that generates the contrast between tissue types in imaging (T1, T2, etc.). The problem of the evolution of the magnetization under the influence of the sum of a constant and a rotating field, with simultaneous relaxation, was first solved by Bloch [1]. He proposed a set of equations, the Bloch equations, which describe how a spin system evolves: dMx/dt = γ(M × B)x − Mx/T2, dMy/dt = γ(M × B)y − My/T2, dMz/dt = γ(M × B)z − (Mz − M0)/T1, where M0 is the equilibrium magnetization. The chemical shift: Other important NMR parameters that can distinguish spins in a particular environment are the self-diffusion coefficient D, the isotropic chemical shift δ and the hyperfine splitting J. In a real spin system, all nuclei are surrounded by the electrons of their atoms and molecules. When a magnetic field is applied, the surrounding electron clouds tend to circulate in such a direction as to produce a field opposing the applied one, causing a small chemical shift. The nucleus therefore experiences a total field B = B0(1 − σ), where σ is the shielding. This shielding perturbation results in a shift of the resonant frequency for nuclei in different environments, an effect that is very useful in NMR spectroscopy. The chemical shift may be expressed as δ = 10^6 (f − f_TMS)/f_TMS, where δ is in parts per million (ppm), f is the resonant frequency of the species of interest and f_TMS is the resonant frequency of a reference substance (TMS: tetramethylsilane). The effect of chemical shift is observed in images where more than one chemical species is present. The value of δ is very small, usually on the order of a few parts per million, and depends on the local chemical environment in which the nucleus is located. Fat (CH2) is a well-known example of a chemically shifted component: its Larmor frequency is shifted by about 3.35 ppm from that of water (H2O) protons. A large range of δ values exists for biological objects, giving rise to many resonant frequencies; the resonant frequency range of a spin system can be expressed as |ω − ω0| ≤ ωmax/2. Image formation and k-space: As previously mentioned, the Larmor frequency depends on the external field B0. By applying a gradient field Gz, varying linearly along the z-axis but constant in time, the Larmor frequency is made to change depending on position. 
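To make the effect of such a gradient concrete, the sketch below computes the local Larmor frequency across a slice and the RF bandwidth needed to excite it. The numerical values (1.5 T main field, 10 mT/m slice gradient, 5 mm slice) are assumptions chosen only for illustration.
GAMMA_BAR = 42.58e6   # Hz per tesla, proton value
B0 = 1.5              # tesla (assumed)
Gz = 10e-3            # slice-selection gradient in T/m (assumed)
dz = 5e-3             # desired slice thickness in metres (assumed)
def larmor(z):
    """Local Larmor frequency (Hz) at position z along the gradient axis."""
    return GAMMA_BAR * (B0 + Gz * z)
# RF bandwidth that excites the slice between z = -dz/2 and z = +dz/2
bandwidth = larmor(dz / 2) - larmor(-dz / 2)   # equals GAMMA_BAR * Gz * dz
print(f"Bandwidth for a {dz * 1e3:.0f} mm slice: {bandwidth:.0f} Hz")   # about 2129 Hz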
Due to this effect, a slice can be selected by letting B1 oscillate at different frequencies depending on the slice one wishes to excite. Linear field gradients along the x- and y-axes, Gx and Gy, are applied to encode position within the transverse xy-plane. Generally, Gy is applied first and introduces a phase shift in the FID signal that depends on position along the y-axis. The phase shift is due to the difference in frequency, which varies with position, and it is determined relative to the phase introduced by B0, γB0δ. The phase shift is given by ϕ = γδGy·y, where γ is the gyromagnetic ratio, δ is the length of time over which Gy is applied and y is the position. When the field gradient is removed, the frequencies return to their initial value, but the phase shifts between nuclei at different positions along the y-axis remain. Then the field gradient Gx is applied and the frequencies change again, depending on position along the x-axis; it is normally during the application of Gx that the signal is detected. The resulting signal after successively applying Gz, Gy and Gx corresponds to the Fourier transform of the transverse magnetization Mxy. In order to make the Fourier relation between the signal and the magnetization more explicit, a reciprocal spatial frequency space, known as k-space, is introduced. The signal measured for one setting of the gradients Gx, Gy and Gz produces a single line in k-space, and applying these gradients in different combinations leads to different samplings of k-space. Once k-space has been sampled, the MR image is obtained by applying the inverse Fourier transform. The above description samples k-space for one slice along the z-axis at a time (Fig. 1: pulse sequence for sampling k-space; the z-gradient is responsible for slice selection, and the x- and y-gradients are responsible for frequency and phase encoding, respectively), but several techniques for sampling the full 3D volume exist [5]. MAGNETIC RESONANCE SPECTROSCOPY By means of this technique we can study the chemical composition of living tissues. The technique is based on the chemical shift property of atoms. The concentration of some metabolites is determined from spectra that may be acquired in several ways. Generally, two different approaches are used for proton spectroscopy of the brain: single-voxel methods based on the stimulated echo acquisition mode (STEAM) or point resolved spectroscopy (PRESS) pulse sequences, and spectroscopic imaging (SI), also known as chemical shift imaging (CSI), usually done in two dimensions using a variety of pulse sequences (spin-echo (SE), PRESS). The basic principle underlying single-voxel localization techniques is to use three mutually orthogonal slice-selective pulses and to design the pulse sequence so that only the echo signal from the point (voxel) in space where all three slices intersect is collected. In STEAM, three 90º pulses are used and the stimulated echo is collected. All other signals (echoes) should be dephased by the large crusher gradient applied during the so-called mixing time (TM); crusher gradients are necessary for consistent formation of the stimulated echo and removal of unwanted coherences. In PRESS, the second and third pulses are refocusing (180º) pulses, and crusher gradients are applied around these pulses to select the desired SE signal arising from all three RF pulses and to dephase unwanted coherences. STEAM and PRESS are generally similar but differ in a few key respects: * Slice profile (i.e. 
sharpness of the voxel edges): STEAM is somewhat better, because it is easier to produce a 90º pulse with a sharp slice profile than a 180º pulse. * SNR: Provided that equal volumes of tissue are observed and the same parameters are used (repetition time (TR), TE, number of averages, etc.), PRESS should have approximately a factor of two better SNR than STEAM, because the stimulated echo is formed from only half of the available equilibrium magnetization. * Minimum TE: STEAM should have a shorter minimum TE than PRESS, since it uses a TM time period and since shorter 90º pulses than 180º pulses may be possible. * Water suppression: STEAM may have slightly better water suppression factors, because water suppression pulses can be added during the TM period (a period that does not occur in PRESS). Also, STEAM may have less spurious water signal from its 90º slice-selective pulses than PRESS has from its 180º pulses. * Coupled spin systems and zero-quantum interference: The complex phenomena that can occur in coupled spin systems (e.g. Lac, Glu, etc.), namely modulation of the echo signal by scalar couplings and/or the creation of zero- or multiple-quantum coherences, may occur with both sequences. However, the detailed dependence of these compounds' signals on TE and other experimental parameters differs between STEAM and PRESS. STEAM is more susceptible to the creation of (usually unwanted) zero-quantum coherences because it uses 90º pulses. APPLICATIONS OF MRS TO THE STUDY OF CENTRAL NERVOUS SYSTEM DISEASES MRS is a non-invasive method that provides metabolic/biochemical information about the brain and has become widely used in daily practice. In this work, the technique is discussed and the most frequent applications of MRS are presented. We offer a practical approach to the method, presenting the spectra of the most common neurologic entities. Our aim is to provide knowledge about MR spectroscopy not only to doctors but also to scientists interested in this field. As a non-invasive method providing metabolic information about the brain, MRS enables tissue characterization on a biochemical level surpassing that of conventional magnetic resonance imaging (cMRI). MRS is also able to detect abnormalities that are invisible to cMRI, because metabolic abnormalities often precede structural changes. MRS does not replace cMRI but complements its information, serving as a prognostic indicator, following the progression of disease and evaluating the response to treatment. The most frequently used spectroscopy is that originating from the hydrogen nucleus (proton, 1H-MRS). The technique is based on the differences in resonance of hydrogen nuclei depending on the surrounding atoms (chemical shift). Each metabolite being assessed shows a different hydrogen resonance frequency and appears at a different site in the spectrum. The most frequently evaluated metabolites are N-acetyl-aspartate (NAA), myo-inositol (mI), choline (Ch), creatine (Cr) and glutamine (Glx). The position of a metabolite signal is identified on the horizontal axis by its chemical shift, scaled in units referred to as parts per million (ppm). With the appropriate factors considered, such as the number of protons, the relaxation times and so forth, a signal can be converted to a metabolite concentration by measuring the area under the curve. Because water is the main component of living tissue and its concentration is much higher than that of the metabolites, it is necessary to suppress the resonance signal from the hydrogen of water [6,7]. 
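To make the ppm scale concrete, the short sketch below converts a chemical-shift separation into a frequency difference in hertz, using the fat-water shift of about 3.35 ppm mentioned earlier; the same conversion applies to the metabolite peaks discussed below. The 1.5 T field strength (proton Larmor frequency of roughly 63.9 MHz) is an assumption chosen only for this example.
F0_HZ = 42.58e6 * 1.5   # proton Larmor frequency at an assumed 1.5 T field, about 63.9 MHz
def ppm_to_hz(delta_ppm):
    """Convert a chemical-shift difference in ppm into a frequency difference in Hz."""
    return delta_ppm * 1e-6 * F0_HZ
print(f"Fat-water separation (3.35 ppm): {ppm_to_hz(3.35):.0f} Hz")   # about 214 Hz at 1.5 T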
A plot showing peak amplitudes and frequencies is obtained. Each spectrum shows peaks corresponding to the different metabolites: myo-inositol (mI), 3.56 and 4.06 ppm; choline compounds (Ch), 3.23 ppm; creatine (Cr), 3.03 and 3.94 ppm; N-acetyl-aspartate (NAA), 2.02, 2.5 and 2.6 ppm; and Glx (glutamine and glutamate), 2.1-2.55 ppm and 3.8 ppm. Ratios between metabolites and creatine are also of great value, as they counteract systematic measurement errors. Other peaks are evaluated visually: lactate (Lac), 1.33 ppm (doublet) and 4.1 ppm (second peak); lipids (Lip), 0.8-1.3 ppm (Fig. 2). Further peaks may also be seen, such as alanine at 1.48 ppm, scyllo-inositol at 3.36 ppm, ethanol (triplet resonance) at 1.16 ppm, macromolecules at 0.5 to 1.8 ppm and acetate at 1.92 ppm. Myo-inositol has been regarded as a glial marker located in astrocytes, as a product of myelin degradation and as the most important osmolyte or cell-volume regulator. Choline is a marker of phospholipid metabolism and cellular membrane turnover, reflecting cellular proliferation. Creatine is used as an internal reference value, since it is the most stable cerebral metabolite; it has a role in the energy metabolism of the brain and in osmoregulation. NAA is a marker of neuronal and axonal viability and density. A lactate peak indicates anaerobic glycolysis in tumors. A lipids peak indicates necrosis and/or disruption of the myelin sheath [8]. Currently, spectra may be acquired with single-voxel (SV) or multivoxel techniques. The SV technique is readily available on most scanners. Voxels must be positioned away from sources of susceptibility artifacts and from lipids. For diffuse processes, a 2 x 2 x 2 cm (8 cm3) voxel is routinely used; for focal lesions, the SV volume can be reduced. The single-voxel technique has the advantages of better spatial localization, greater homogeneity, better water suppression and shorter acquisition time; however, only one spectrum can be obtained per acquisition. Conversely, the multivoxel (MV) technique makes it possible to obtain multiple spectra simultaneously per acquisition and to assess a greater area of the brain, but with lower spectral resolution. To date, the single-voxel technique remains superior to the MV technique on the grounds of reproducibility [9][10][11][12]. For both SV and MV techniques, the MR scanner employs a process known as shimming to narrow the peak linewidths within the spectra. For SV studies, field homogeneity is improved with basic, zero-order shimming on clinical MR scanners; for MV, shimming to simultaneously produce uniform field homogeneity in multiple regions requires higher-order shims. To obtain high-quality spectra, one should avoid blood products, air, fat, necrotic areas, cerebrospinal fluid, metal, calcification and bone: in such areas, differing magnetic susceptibility results in an inhomogeneous field that hinders the production of diagnostic-quality spectra. With regard to the mode of acquisition, PRESS can be performed with short and long TE, and there is complete recovery of the signal. STEAM can be performed with very short echo times (TE), but there is incomplete recovery of the signal, and a precise volume element (voxel) is formed. The PRESS mode is used more often than STEAM because it gives a higher signal-to-noise ratio and is less sensitive to movement artifacts [6]. Echo times have not yet been standardized in MRS. In degenerative, demyelinating and vascular disease a short TE is advocated; in brain tumors the optimal TE is under debate. 
A short TE (20-40 ms) increases the signal-to-noise ratio and allows most metabolite peaks to be visualized, with the inconvenience of some degree of peak overlap. An intermediate TE (135-144 ms) inverts the lactate peak, making it easier to distinguish from the lipids peak. A long TE (270-288 ms) gives a worse signal-to-noise ratio but allows better visualization of some peaks (NAA, choline and creatine), because it suppresses the signal of others (myo-inositol, alanine, glutamate-glutamine) [7,12,13]. Majós et al. [14] demonstrated that a short TE yields better performance in the classification of tumors than a long TE. In clinical practice time does matter, so short TEs are preferable [12]. In our experience with a 1.5 T GE Signa Horizon clinical scanner, a TE of 30 ms and a TR of 2500 ms have proven valuable [15]. Recently, a TE-averaged PRESS technique has yielded highly simplified spectra with better suppression of signals not pertaining to the assessed metabolites, such as those of macromolecules; TE is increased from 35 ms to 355 ms in steps of 2.5 ms, with two acquisitions per step [16]. DEVELOPMENTAL DISORDERS In the newborn spectrum, mI is the predominant metabolite, whereas choline increases in the first days of life. NAA and creatine concentrations are lower than in adults. During the first weeks an increase of Cr and NAA is observed, as well as a decrease in Ch and mI [17]. Spectral abnormalities at this age can help with diagnosis and monitoring, especially before myelination is completed. Tables 1 and 2 report the spectroscopic values found in the brain cortex and white matter of healthy individuals by age group. NAA increases progressively with age in the white matter until the ages of 20-25 years, decreasing thereafter; in the cortex, NAA decreases progressively from birth to old age. In congenital metabolic disorders there may be both specific and nonspecific findings. In hyperammonemia, glutamine is reportedly elevated, as is NAA in Canavan disease [8,17]. In familial leukodystrophies such as adrenoleukodystrophy and metachromatic leukodystrophy, choline compounds may be elevated due to the presence of inflammation [7]. In autistic children a decrease of NAA has been reported in the temporal cortex and cerebellar white matter [18]; in other studies of autistic children, variable levels of Ch and Cr have been observed in different cortical areas [19]. We did not see changes in the white matter (centrum semiovale) of autistic children in comparison to healthy controls. However, we observed increased NAA/Cr ratios in this area in children with attention-deficit/hyperactivity disorder [20], which could suggest increased mitochondrial metabolism, as happens in Asperger syndrome (autistic symptoms plus obsessive-compulsive behavior), or possibly an increased synthesis of neurotransmitters. The interpretation of this finding is difficult because the exact function of NAA in the white matter remains elusive. In a small series of children with isolated developmental delay we found decreased NAA/Cr ratios in the white matter in comparison to healthy children, supporting the hypothesis of hypomyelination or delayed myelination [21]. Hypoxic encephalopathy: In the first year of life there is a significant correlation between spectroscopic values and neurological function, with lipids representing a metabolic marker of hypoxia. A decrease in the NAA/Cr ratio indicates a grim prognosis. 
In neonatal hypoxia the occipital cortex and basal ganglia are predominantly affected, with decreased NAA and increased mI, glutamate, lactate and lipids; similar alterations are seen in the white matter [22]. Other affected cortical areas are the frontal and parietal regions [16]. The lipids peak is the main metabolic marker of hypoxia in the neonate. We have found cortical decreases of NAA and the presence of lipids and lactate in anoxic encephalopathy due to status epilepticus, with radiological signs of laminar necrosis [23]. In adults, interesting findings include decreased creatine in the hippocampus of patients with sleep apnea [24]. After cardiorespiratory arrest, decreases of NAA in the cortex and cerebellum may be found, with important prognostic value [25]. Tumors: We usually perform spectroscopy in tumors after contrast administration, for better placement of the voxel, since gadolinium has no effect on metabolite levels [26][27][28]. Spectroscopy has proved useful for histologic grading and for following the outcome after therapy. Choline compounds (phosphorylcholine and glycerophosphorylcholine) are elevated in glial tumors, in relation to accelerated cellular membrane turnover and degradation. Choline elevations can also be seen in pseudotumoral forms of multiple sclerosis, although in gliomas the N-acetyl-aspartate levels are lower and perfusion is higher [29,30]. Low levels of creatine are inconstant and tend to indicate metastases from primary tumors that do not contain creatine kinase (kidney, lung, breast, lymphoma, prostate). Reduction or absence of NAA indicates a lack of neurons; this happens in tissues whose cells do not contain N-acetyl-aspartate (gliomas, meningiomas, craniopharyngiomas and metastases). A typical meningioma is characterized by an absence of NAA, a decrease of Cr, a prominent choline peak and a peak of alanine and glutamine [31]. Under normal conditions lactate is absent from the brain; it appears in cysts, necrotic tissues and tissues with anaerobic glycolysis. A lipids peak indicates necrosis in malignant tumors, either before or after treatment, and marked elevations of lipids are suggestive of lymphoma [32]. Low-grade (benign) gliomas are characterized by elevation of mI, a decrease of NAA and a mild elevation of choline [33]. High-grade (malignant) gliomas present with a marked increase of choline, a decrease of NAA and the presence of lactate and lipids peaks [34][35][36][37][38][39][40][41][42]. In our experience with gliomas and metastases, a Ch/Cr ratio equal to or higher than 1.56 together with a lactate peak predicts malignancy with 88.9% sensitivity and 91.7% specificity [15]. Fig. 3 shows an example of a spectrum corresponding to a malignant tumor. Radionecrosis: After chemotherapy and/or radiotherapy, spectroscopic findings should be interpreted with caution during the first 6 months, because radiotherapy elevates choline levels and tumoral cells may still be present. After 6 months, choline elevation suggests tumor recurrence or therapy failure. If no metabolites are found and lactate and lipids are present, radionecrosis is suggested. Post-radiotherapy demyelination appears 6-8 months after treatment and may progress for two years; its spectrum shows increases of choline compounds and mI due to gliosis, and decreases of NAA [12]. Neurocutaneous syndromes: In neurofibromatosis, a decrease of NAA and an increase of the mI/Cr ratio have been observed, and these findings are useful for monitoring progression. Norfray et al. 
(5) evaluated 19 patients with NF-1 and defined three spectral patterns based on the Cho/creatine (Cr) ratio, indicating hamartoma (<1.5), undetermined lesion (transitional spectrum, 1.5-2) and glioma (>2); in this study, the Cho/Cr ratio was less than 1.0 in control subjects. In tuberous sclerosis there may be significant elevations of mI on short-echo proton MRS, corresponding to gliosis, with a mild increase in choline and N-acetyl-aspartate (NAA) levels. White matter disease: In multiple sclerosis the most common findings are decreased NAA/Cr ratios and increased Ch/Cr and mI/Cr ratios. Progression of the disease is marked by a reduction of the NAA/Cr ratio; NAA is currently regarded as a marker of axonal damage [43]. An active plaque may show lactate and lipids peaks, as well as an increased Ch/Cr ratio and increased mI; inactive plaques show an increased mI/Cr ratio. In Schilder's disease, a rare demyelinating disease, we observed an increased Ch/Cr ratio, a decreased NAA/Cr ratio and the presence of a lactate peak [44]. Brain ischemia: After a vascular occlusion a rapid elevation of lactate is detected, with a trend to normalization after reperfusion. Conversely, there is a progressive decrease of NAA levels after ischemia, with dramatic reduction or disappearance of NAA in the event of necrosis [45]. It has been postulated that the early changes seen with MRS could be a marker of the ischemic penumbra when conventional MRI shows no alterations, and that this could help to better select patients for thrombolysis [46]. Degenerative diseases: Most degenerative diseases share a decrease in NAA. In Alzheimer's disease the most frequent findings are decreased NAA levels and NAA/Cr ratios, as well as increases of mI and mI/Cr ratios in cortical areas [47]. Furthermore, this technique has proven useful in detecting patients with isolated memory loss at high risk of conversion to Alzheimer's disease: an NAA/Cr ratio in the left parasagittal occipital lobe predicted conversion with 100% sensitivity and 75% specificity, with a classification accuracy of 88% [48]. If these results are replicated, MRS would become an important biomarker of the disease, with implications for early detection and treatment. Infectious diseases: Focal lesions showing ring-shaped contrast enhancement can pose problems of differential diagnosis between tumors and abscesses, and MRS has proved a valuable tool for this purpose. Lipids and lactate peaks, as well as elevations of several amino acids (leucine, isoleucine, valine, alanine), have been reported in bacterial abscesses and cysticercosis [49,50] (Fig. 4). In tuberculous abscesses only peaks of lactate and lipids are observed, without elevation of amino acids [51,52]. In Reye's syndrome, the main abnormality found on spectroscopy of the parietal white matter and/or occipital cortex is an increased Glx level, arising from glutamine and suggesting hyperammonemia-induced encephalopathy. It is possible to quantify the Glx elevation by comparing the Glx and NAA peak heights; if the Glx peak height is greater than one third of the NAA peak height, there is an increase in Glx levels [53]. mI levels might be reduced because of the osmolytic effects arising from hyperammonemia. A lactate peak is common, and lipids may be present. In HIV-positive patients the NAA/Cr ratio is decreased in AIDS cases relative to patients without AIDS; these changes are seen even in patients with normal-appearing brains on conventional MRI and could serve to anticipate appropriate treatment [54]. 
Abnormalities have also been described in HIV-associated pathologies of the brain, where MRS is especially useful in focal lesions [55]. In toxoplasmosis all metabolites are decreased (NAA, Cr, Ch and mI), but there are marked peaks of lactate and lipids. In CNS lymphoma, NAA and Cr are decreased and Ch is increased, with marked peaks of lactate and lipids. Progressive multifocal leukoencephalopathy tends to show low levels of NAA and Cr but high levels of Ch and mI; lipids and lactate peaks may be present but are less marked than in the other lesions. In cryptococcal abscesses all of the metabolites are decreased, but a lipids peak is present. Temporal epilepsy: Correct voxel positioning, including most of the hippocampus, is vital (Fig. 5); incorrect positioning (too anterior or too posterior) may lead to susceptibility artifacts that disrupt the metabolite ratios and consequently result in an inaccurate interpretation. NAA/(Cho + Cr) is the most useful parameter, and the reduction of this ratio is considered pathologic if it falls below 0.71 [56]. An asymmetry index of 15% or more is used for lateralization of the epileptogenic focus, with very good correlation to electroencephalogram findings. Symmetric placement of the voxels makes it possible to observe a significant decrease of the NAA/Cr ratio in the damaged lobe. In hippocampal sclerosis we can see a reduction of NAA levels in relation to Ch, a lactate peak and either a normal or an increased Ch/Cr ratio [3]. It has been advocated that all epileptic syndromes in children should be evaluated with MRS [17]. Table 3 presents the main metabolite abnormalities according to the different syndromes [57]. CONCLUSION MRS has added important information to the understanding of the pathophysiology of many CNS diseases. It has also demonstrated significant value for diagnostic purposes, especially for early diagnosis. As this technique becomes more widely used, its contributions to the medical literature are increasing, with more applications than were expected years ago.
9,070.6
2006-05-31T00:00:00.000
[ "Medicine", "Physics" ]
Real-World Visual Experience Alters Baseline Brain Activity in the Resting State: A Longitudinal Study Using Expertise Model of Radiologists Visual experience modulates the intensity of evoked brain activity in response to training-related stimuli. Spontaneous fluctuations in the resting brain actively encode previous learning experience. However, few studies have considered how real-world visual experience alters the level of baseline brain activity in the resting state. This study aimed to investigate how short-term real-world visual experience modulates baseline neuronal activity in the resting state using the amplitude of low-frequency (<0.08 Hz) fluctuation (ALFF) and a visual expertise model of radiologists, who possess fine-level visual discrimination skill for homogeneous stimuli. In detail, a group of intern radiologists (n = 32) was recruited. The resting-state fMRI data and the behavioral data regarding their level of visual expertise in radiology and face recognition were collected before and after 1 month of training in the X-ray department of a local hospital. A machine learning analytical method, i.e., support vector machine, was used to identify subtle changes in the level of baseline brain activity. Our method led to a superb classification accuracy of 86.7% between conditions. The brain regions with the highest discriminative power were the bilateral cingulate gyrus, the left superior frontal gyrus, the bilateral precentral gyrus, the bilateral superior parietal lobule, and the bilateral precuneus. To the best of our knowledge, this study is the first to investigate baseline neurodynamic alterations in response to real-world visual experience using a longitudinal experimental design. These results suggest that real-world visual experience alters the resting-state brain representation in multidimensional neurobehavioral components, which are closely interrelated with high-order cognitive and low-order visual factors, i.e., attention control, working memory, memory, and visual processing. We propose that our findings are likely to help foster new insights into the neural mechanisms of visual expertise. INTRODUCTION Visual experts are individuals with superior behavioral performance in visual recognition in specific domains (Curby and Gauthier, 2010). Becoming a visual expert requires visual learning with at least hundreds of sample cases (Clark et al., 2012). A few real-world visual expertise models have been used to study the neural substrate underlying this behavioral expertise (Rossignoli-Palomeque et al., 2018). An increased level of activation was found in the left superior frontal gyrus and left cingulate cortex in radiologists, regions held to support better working memory capability (Haller and Radue, 2005). Harel et al. (2010) demonstrated enhanced neuronal activity in the right precuneus of a group of car experts, related to better memory retrieval strategies. Song et al. (2021) reported increased activation in the right anterior cingulate gyrus, but decreased activation in the superior parietal lobule, in chess players (Song et al., 2020), which is closely related to improved visual processing and better attention control. 
These results, derived from cross-sectional studies, demonstrated that real-world visual experience alters the strength of evoked brain activity across widely distributed regions, which support high-order cognitive components, such as attention control, working memory, and memory, and low-order visual components, such as visual processing (Khader et al., 2005;Cavanna and Trimble, 2006;Schipul and Just, 2016). Low-frequency spontaneous fluctuations (0.01-0.1 Hz) of the resting brain play an important role in maintaining ongoing internal brain representations (Oldfield, 1971;Tang et al., 2010;Evans et al., 2011), which are involved in the coding of previous experience and support learned skills (Dong et al., 2014). In particular, patterns of spontaneous activity within the resting brain are shaped by experience-dependent changes in neural plasticity (Chakraborty, 2006). However, less attention has been paid to analyzing how real-world visual experience alters the patterns of resting-state brain activity using a longitudinal experimental design. In this regard, baseline spontaneous neuronal activity reflects cortical excitability (Logothetis et al., 2001;Boly et al., 2007), and the level of cortical excitability may smear the spatial patterns of evoked brain activity (Di et al., 2013;Dong et al., 2015). We propose that the level of baseline brain activity is fundamentally important; therefore, this study aimed at investigating how short-term real-world visual experience modulates baseline neuronal activity in the resting state. The amplitude of low-frequency fluctuations (ALFF) serves as an indicator of cortical excitability (Duff et al., 2008). Previous studies have used ALFF to measure the level of baseline brain activity in healthy subjects (Dong et al., 2015). In our study, a group of 32 radiology interns was recruited from our collaborative hospital. The resting-state MRI data were collected before and after 1 month of training in the X-ray department, and ALFF was calculated to quantify the level of baseline brain activity. To better capture the subtle changes in the strength of neuronal activity, a novel but sensitive machine learning analytical framework, the support vector machine (SVM), was employed (Xu et al., 2019). We expected to see an altered level of activity in brain regions related to the multidimensional neurobehavioral components that support visual recognition, such as attention control, working memory, memory extraction, and visual processing (Humphreys et al., 1999). To the best of our knowledge, this study is the first to investigate baseline neurodynamic alterations in response to short-term real-world visual experience using a longitudinal experimental design. Subjects The subjects in this study consisted of a cohort of radiology interns, who were medical students in the undergraduate program of national medical schools. They were recruited from the First Affiliated Hospital of Medical College, Xi'an Jiaotong University. Thirty-two radiology interns were included [15 male participants, age: 22.47 ± 1.02 years old, mean ± standard deviation (SD)]. The participants were aware of the purpose of the study and the reason why they were recruited. All the subjects were medical students in the undergraduate program of national medical schools, and they underwent a 4-week rotation at the First Affiliated Hospital of Medical College, Xi'an Jiaotong University. The subjects worked in the X-ray department during the rotation and reviewed approximately 35 cases per day, 6 days a week. 
Their daily training included interpreting X-ray images and drafting reports for each case. Senior radiologists were assigned to these interns as mentors and provided feedback on their clinical reports each day. The intern radiologists reviewed a minimum of 831 cases during the rotation period, as recorded in the Picture Archiving and Communication System (PACS). Moreover, all the subjects had no neurological or psychiatric brain diseases, had no history of head trauma, and had not taken recreational drugs or drugs that influence brain function during the time window of this study (Oldfield, 1971). Behavioral Tasks This study employed a longitudinal experimental design, which is rare in visual expertise studies. Basically, the subjects underwent the behavioral assessment (including prescreening tasks and behavioral tasks) and MRI scanning before training, and the behavioral assessment (only behavioral tasks) and MRI scanning after the 4-week visual training. Note that the purposes of the prescreening and the behavioral tasks were different. The prescreening procedures were conducted before MRI scanning, aiming to exclude confounding factors such as visual expertise from other known domains (e.g., cars, chess, birds, and mushrooms). Specifically, we used questionnaires to ensure that the subjects had no visual expertise in other known domains, such as aircraft, animals, and plants. The behavioral tasks were conducted after MRI scanning, aiming to quantify the level of face expertise and radiological expertise, using the Cambridge face memory test (CFMT) (Duchaine and Nakayama, 2006) and the radiological expertise task (RET) (Evans et al., 2011), respectively, as introduced in our previous studies (Wang et al., 2021;Zhang et al., 2022). A standard behavioral task, i.e., the radiological expertise task (Evans et al., 2011), was used to quantify the radiological expertise of the subjects before and after radiological training. The images selected for RET were identical for both tests. Basically, in the RET the subjects were shown 100 standard lung X-ray images and were asked to render a diagnostic decision (e.g., tumor present or absent) and a prognosis (e.g., malignant vs. benign) for each image. The 100 standard lung X-ray images were carefully chosen from the X-ray image library of the Medical Imaging Department of the First Affiliated Hospital of the College of Medicine under the guidance of three independent senior radiologists with more than 15 years of diagnostic radiology experience. These 100 images used for RET consisted of three ascending levels of difficulty in proportions of 50, 30, and 20%, respectively. Each lung X-ray image contained 0∼N nodules, and there was no mention of any diagnosis unrelated to pulmonary nodules. Sixty-five X-ray images containing only 1∼3 nodules were selected as positive cases, and 35 X-ray images without tumors were selected as negative cases. The pathologies in the images were carefully examined and reconfirmed by these three experts. The detailed procedure of the CFMT and RET was introduced in our previous study (Zhang et al., 2022). MRI Data Acquisition Before MRI scanning, all subjects underwent a complete physical and neurological examination. Note that, to eliminate the potential influence of the behavioral tasks on central representation, the behavioral tasks took place after MRI data acquisition. The MRI scanning was performed on a 3 Tesla MRI system (EXCITE, General Electric, Milwaukee, Wisc.) 
at the First Affiliated Hospital of Medical College, Xi'an Jiaotong University in Xi'an, China. To eliminate the time-of-day effect, the scanning was performed from 8:30 a.m. to 12:30 p.m. (Hasler et al., 2014). A resting scan and a structural scan were conducted. A standard birdcage head coil was used, along with restraining foam pads to minimize head motion and to diminish scanner noise. Prior to the scan, subjects were instructed to close their eyes, keep their heads still, and stay awake during the scanning process. After scanning, the subjects were asked whether they had fallen asleep during the process. MRI Data Preprocessing Statistical Parametric Mapping (SPM12) 1 and the Data Processing Assistant for Resting-State fMRI (DPARSF 4.5) 2 were used for MRI data preprocessing. The first 10 images were deleted to eliminate non-equilibrium effects of magnetization and to allow the participants to adapt to the experimental environment. The images were corrected for the acquisition delay between slices, motion corrected, and co-registered to the subject's anatomical images in native space. Two subjects had head motion exceeding the threshold of 0.2 mm (frame-wise displacement, i.e., Power FD) and were excluded. For the remaining 30 subjects, a two-sample t-test was used to verify that there was no significant difference in head movement between the pre- and post-training sessions. Next, all the functional images were normalized to MNI space using the deformation field maps obtained from structural image segmentation, following the segmentation routine in SPM12. The normalized images were resampled to 3 mm isotropic voxels and then spatially smoothed with a 6-mm full-width-at-half-maximum Gaussian kernel. Finally, the linear trend was removed (Dale et al., 2000), and temporal filtering (0.01-0.08 Hz) was performed on the time series of each voxel to reduce the effect of low-frequency drifts and high-frequency noise (Zou et al., 2008). Generation of Voxel-Wise Amplitude of Low-Frequency Fluctuations Map The Resting-State fMRI Data Analysis Toolkit (REST) 3 was used to compute ALFF (Song et al., 2011). ALFF measures the level of intrinsic or spontaneous neuronal activity in a given voxel (Jiang et al., 2004). ALFF serves as an indicator of cortical excitability (Duff et al., 2008), and regional cerebral blood flow is correlated with ALFF in resting-state data; therefore, ALFF is taken as the index of the level of baseline brain activity. To calculate ALFF, after preprocessing, a fast Fourier transform (FFT) was used to transform the time series of a given voxel from the time domain to the frequency domain, with the following parameters: the taper percentage was zero, and the FFT length was set to short. Then, the square root of the power spectrum at each frequency was calculated, and the average value was taken over the range 0.01-0.08 Hz. This average square root for a given voxel was taken as its ALFF (Jia et al., 2020). To minimize the impact of variability among participants and reduce noise interference, we divided the ALFF of a given voxel by the average ALFF value over whole-brain voxels to obtain the standardized value. Generation of Region-Wise Amplitude of Low-Frequency Fluctuations Map The voxel-wise ALFF map was averaged into a region-wise ALFF map. The Brainnetome atlas was used to divide the ALFF map into 246 regions of interest (ROIs) (Fan et al., 2016), and the average ALFF value of each region was obtained by averaging the ALFF values of the voxels within that region. 
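The following short Python sketch illustrates the ALFF computation described above (FFT of each voxel time series, square root of the power spectrum averaged over 0.01-0.08 Hz, normalization by the whole-brain mean, then averaging within atlas regions). It is a minimal illustration rather than the authors' code; the repetition time and the simulated array sizes are assumptions made only for the example.
import numpy as np

def alff_map(bold, tr, band=(0.01, 0.08)):
    """bold: (n_voxels, n_timepoints) preprocessed time series; tr: repetition time in seconds."""
    n_tp = bold.shape[1]
    freqs = np.fft.rfftfreq(n_tp, d=tr)            # frequency axis of the spectrum
    spectrum = np.abs(np.fft.rfft(bold, axis=1))   # square root of the power spectrum per voxel
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = spectrum[:, in_band].mean(axis=1)       # mean amplitude within 0.01-0.08 Hz
    return alff / alff.mean()                      # standardize by the whole-brain mean

def region_alff(alff, labels, n_regions=246):
    """Average voxel-wise ALFF within each atlas region (labels: 1..n_regions per voxel)."""
    return np.array([alff[labels == r].mean() for r in range(1, n_regions + 1)])

# Usage with simulated data (assumed 2 s repetition time, 170 volumes, toy voxel count)
rng = np.random.default_rng(0)
bold = rng.standard_normal((5000, 170))
labels = rng.integers(1, 247, size=5000)
features = region_alff(alff_map(bold, tr=2.0), labels)   # 246-dimensional feature vector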
Mean ALFF values from the 246 ROIs then served as the input vector to the classification procedure. Feature Selection Feature selection is necessary in MRI data analysis to avoid the curse of dimensionality (Mladenić, 2006), reduce training time, and increase classification performance (Jiang et al., 2004;Dosenbach et al., 2010). A two-stage feature selection procedure was conducted to identify the features with the highest discriminative power. For the first-level selection, a paired-sample t-test was performed between the region-wise post- and pre-training ALFF maps in a leave-one-out fashion. The combined region-wise features that survived the statistical threshold (p < 0.05) in each iteration were used as the input for a second-level feature elimination. Note that the remaining ALFF values were regressed against the outcome of the CFMT individually to remove the confounding effect from other domains of visual expertise, i.e., faces in this study. Second, a recursive feature elimination-support vector machine (RFE-SVM) approach was used. This process recursively eliminates the least useful features until further elimination reduces the accuracy (Ding et al., 2015). The specific steps were as follows: 1. The training set was regressed against the outcome of the CFMT. 2. The resulting beta-maps were normalized across all brain feature data to the range between 0 and 1 through mean-variance normalization. 3. RFE reduced the dimension of the features again, using the classifier itself to discard irrelevant features (Figure 1). Our implementation of RFE is described by the following pseudo-code: a. Input all training samples and class labels, train the SVM classifier, and calculate the classification accuracy of the model, accuracy_0; b. Sequentially remove one feature at a time, input the remaining features into the LOOCV-SVM, calculate the classification accuracy_i of the model, find all accuracy_i greater than or equal to accuracy_0, and determine the corresponding removed features feature_i; c. Delete these features and update the feature set; and d. Repeat the above steps until further elimination reduces the accuracy. As a result, we were able to identify a set of brain regions with the highest discriminative power. Support Vector Machine Basically, the SVM is a binary classification model (Cortes and Vapnik, 1995). The basic idea is to find the separating hyperplane with the largest margin in the feature space, so that the data can be separated into two classes efficiently (Li et al., 2007). The linear SVM is often used with neuroimaging data because it produces interpretable results (Rasmussen et al., 2011). Therefore, this study adopted a linear SVM classifier with soft-margin separation and the hinge loss function. The LIBSVM toolbox 4 was used in this study (Chang and Lin, 2011). Leave-one-out cross-validation (LOOCV) was used to assess the performance of the classifier (Dai et al., 2012). In LOOCV, each sample was in turn designated as the test sample, while the remaining samples were used to train the classifier. To quantify the performance of the classifier from the LOOCV prediction results, the accuracy, sensitivity, and specificity were defined as follows: Accuracy = (TP + TN)/(TP + TN + FP + FN), Sensitivity = TP/(TP + FN), Specificity = TN/(TN + FP), where TP, FN, TN, and FP denote the number of trained (post-training) samples correctly predicted, the number of trained samples classified as untrained, the number of untrained samples correctly predicted, and the number of untrained samples classified as trained, respectively. In this study, the area under the curve (AUC) was also used to represent the classification ability of the SVM. 
A greater AUC value represents better classification ability. Statistical Analysis A non-parametric permutation test (Filgueiras et al., 2014) was used to evaluate the statistical significance of the classification results. The features with the highest discriminative power, i.e., the 10 features remaining after feature selection, were used in this step. Each subject was treated as an independent sample. For a given sample, the label was randomly set to 1/-1 (1: post-training data, -1: pre-training data), while the label of the testing sample remained unchanged, and the outcome of the SVM was determined. The procedure was repeated 1,000 times. Accordingly, the statistical significance of the original accuracy was calculated as the probability that the SVM classification result in the 1,000 permutations was greater than or equal to the original accuracy; that is, the p-value was the proportion of permutations with accuracy greater than or equal to the accuracy obtained by our method. A threshold of p < 0.05 was used to determine significance. Regression Analysis To assess the relationship between the behavioral measurements and brain activity, Pearson's correlation analysis was conducted between alterations in the outcomes of the CFMT and RET and alterations in region-wise ALFF. The significance level was set at p < 0.05 after multiple-comparison correction (false discovery rate, FDR). Results of Behavioral Tasks During 1 month of training, the subjects reviewed at least 831 cases (926 ± 73, mean ± SD). As shown in Table 1, after 1 month of training, the performance of the radiology interns significantly improved, as revealed by higher scores in the RET (p < 0.001, Mann-Whitney U-test) and shorter response times in the RET (p < 0.001, Mann-Whitney U-test), whereas the level of face expertise remained the same after 1 month of training in the domain of radiology (p = 0.19, Mann-Whitney U-test) (Figure 2). (FIGURE 1 | The pipeline of data analysis. After the resting-state fMRI data were preprocessed, voxel-wise and region-wise amplitudes of low-frequency fluctuations were extracted and used for feature selection, which consisted of two steps, a region-wise paired t-test and recursive feature elimination embedded in a leave-one-out cross-validation framework, resulting in 10 features of highest discriminative power. These features were used for SVM modeling with LOOCV. ALFF, amplitude of low-frequency fluctuations; RFE, recursive feature elimination; LOOCV, leave-one-out cross-validation.) Performance of Support Vector Machine After feature selection, 10 features remained, corresponding to the highest accuracy (Figure 1). The classification accuracy of the SVM after LOOCV reached 86.7% (Figure 3A), and the AUC was 0.8244 (Figure 3B). The specificity and sensitivity of the SVM after LOOCV were 80.00 and 83.33%, respectively. The classification was repeated 1,000 times with permuted labels, and no repetition reached the classification accuracy of 86.7%. Thus, the statistical significance was p < 0.001, indicating that the results of our study were significantly higher than the chance level. 
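A compact sketch of the evaluation scheme described above (leave-one-out cross-validation of a linear SVM on the selected region-wise ALFF features, followed by a label-permutation test) might look as follows. This is an illustrative re-implementation with scikit-learn rather than the authors' LIBSVM code, and the simulated feature matrix standing in for the 10 selected regions over 60 scans (30 subjects, pre and post) is an assumption made only for the example.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loocv_metrics(X, y):
    """X: (n_samples, n_features) ALFF features; y: +1 post-training, -1 pre-training."""
    clf = SVC(kernel="linear")
    pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
    tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == -1) & (y == 1))
    tn = np.sum((pred == -1) & (y == -1)); fp = np.sum((pred == 1) & (y == -1))
    return {"accuracy": (tp + tn) / len(y),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

def permutation_p(X, y, observed_acc, n_perm=1000, seed=0):
    """Proportion of label permutations whose LOOCV accuracy reaches the observed accuracy."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)                      # shuffle pre/post labels
        if loocv_metrics(X, y_perm)["accuracy"] >= observed_acc:
            count += 1
    return count / n_perm

# Usage with simulated data standing in for the 10 selected region-wise ALFF features
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))
y = np.array([1] * 30 + [-1] * 30)
print(loocv_metrics(X, y))
Note that this simplified permutation scheme shuffles all labels at once; the authors' procedure, as described above, kept the test-sample label fixed within each fold.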
As for the brain regions, 10 regions were identified with the highest discriminative power, including the left cingulate cortex (CG_L_7_4), the right cingulate cortex (CG_R_7_2), the left superior frontal gyrus (SFG_L_7_2), the right precentral gyrus (PrG_R_6_4), the left precentral gyrus (PrG_L_6_4), the right superior parietal lobule (SPL_R_5_4), the right superior parietal lobule (SPL_R_5_1), the left superior parietal lobule (SPL_L_5_4), the right precuneus (PCun_R_4_4), and the left precuneus (PCun_L_4_3) (Figure 4 and Table 2). Results of Regression Analysis No significant correlations were found between alterations in the outcomes of the CFMT and RET and alterations in region-wise ALFF after multiple-comparison correction. DISCUSSION The acquisition of visual expertise requires at least hundreds of cases of training within a specific domain (Annis and Palmeri, 2018). Real-world visual learning requires several behavioral components, including high-order cognitive factors, such as memory (Viggiano et al., 2006), attention (Rose et al., 2004), and working memory (Ennaceur, 2010), and low-order visual factors, such as visual processing (Binder and Desai, 2011). Existing neuroimaging studies have demonstrated differentiated patterns of brain response in visual experts under task conditions, which are modulated by their accumulated experience in a given domain. Resting-state spontaneous brain fluctuations actively encode previous learning experience. However, few studies have considered how real-world visual experience alters the level of baseline brain activity in the resting state. This study aimed to investigate how short-term real-world visual experience modulates baseline neuronal brain activity in the resting state using the amplitude of low-frequency fluctuations (<0.08 Hz) and a group of intern radiologists (n = 32). The resting-state fMRI data and the behavioral data regarding their level of visual expertise in radiology and face recognition were collected before and after 1 month of training in the X-ray department. A novel machine learning analytical method, i.e., recursive feature elimination SVM embedded in LOOCV, was used to identify subtle changes in the level of baseline brain activity (Figure 1). With a superb classification accuracy of 86.7% (Figure 3A), the results demonstrated that the left posterior cingulate cortex (CG_L_7_4), the right anterior cingulate cortex (CG_R_7_2), the left superior frontal gyrus (SFG_L_7_2), the bilateral precentral gyrus (PrG_L_6_4 and PrG_R_6_4), the bilateral superior parietal lobule (SPL_R_5_4, SPL_R_5_1, and SPL_L_5_4), and the bilateral precuneus (PCun_R_4_4 and PCun_L_4_3) showed the highest discriminative power after short-term visual learning (Figure 4 and Table 2). To the best of our knowledge, this study is the first to investigate baseline neurodynamic alterations in response to real-world visual experience using a longitudinal experimental design. Our findings may help develop new insights into the neural mechanisms of visual expertise and provide new ideas for the cultivation of visual experts. Increased Level of Activity in Brain Regions Supporting Working Memory Working memory (WM) supports the online maintenance and manipulation of information without external stimulation (Baddeley, 1987). The capacity of WM serves as a reliable predictor of the performance of visual experts (Sohn and Doane, 2004). 
In this study, after training, the radiology interns had increased ALFF in the anterior cingulate gyrus, the posterior cingulate gyrus, and the superior frontal gyrus (Figure 4 and Table 2). Jonides (2004) reported deactivation in the anterior cingulate gyrus, which supported increased WM load under task conditions. Duan et al. (2012) found that activation of the posterior cingulate gyrus was enhanced in professional chess players during the game, which was related to an enhanced demand on WM. Teresa et al. (2018) found increased activation in the superior frontal gyrus under visual tasks, which required online monitoring and manipulation of task-related information. In sum, all these regions, i.e., the anterior cingulate gyrus, the posterior cingulate gyrus, and the superior frontal gyrus, are closely related to the WM process. The increased level of baseline brain activity in these regions might reflect tuning with training, which in turn decreases the need for executive control in the maintenance of task-relevant information. We propose that these alterations during expertise acquisition might support more automated encoding and maintenance of objects in the expert domain, indicating a more efficient mechanism subserving visual expertise. Decreased Level of Activity in Brain Regions Underlying Memory Extraction In our study, compared with the pre-training condition, the radiology interns had a decreased level of ALFF in the bilateral precuneus (Figure 4 and Table 2). Visual recognition depends intensively on the retrieval of conceptual knowledge (Binder and Desai, 2011). Differences in memory extraction predict the performance difference between visual experts and novices (Binder and Desai, 2011). Assaf et al. (2013) reported the involvement of the right precuneus in memory extraction using the visual expertise model of car experts. In a resting-state study, Duan et al. reported a reduction of default-mode network activity, including the left precuneus, in professional chess players, which is closely related to episodic memory extraction. In this study, the bilateral precuneus exhibited a decreased level of activity after short-term visual training. Given that resting-state brain activity is involved in the coding of expected sensory stimuli (Jin et al., 2017), we propose that the tuning in these regions is likely to reflect an optimal internal coping mechanism that supports the redistribution of cognitive resources to more demanding brain processes (Fox et al., 2005). Decrement in the Level of Activity in Brain Regions Underlying Attention Control Visual attention is a critical component of visual recognition; it helps subjects focus on target objects more efficiently when dealing with complex visual scenes and gives priority to the target visual objects to ensure task completion (Cohen and Lefebvre, 2005). Therefore, differences in the brain representation underlying attention control may serve to distinguish the brain states of experts and novices (Memmert et al., 2009). In this study, the radiology interns had decreased ALFF in the superior parietal lobule after training (Figure 4 and Table 2). Reilhac et al. (2013) reported deactivation in the right superior parietal lobule, which was closely related to visual attention in radiologists. Ouellette et al. (2020) also found deactivation in the left SPL in radiologists, which was attributed to more efficient control of visual attention, as supported by eye-tracking data showing accelerated search. 
We propose that the decreased ALFF in the SPL reflects a similar trend. After visual training, attention control is more efficient, which gives the subjects more flexibility in manipulating attentional resources, so that resources allocated to attention before training might later be allocated to other brain regions supporting more demanding tasks. Enhanced Level of Activity in Brain Regions Supporting Visual Recognition In our study, the bilateral precentral gyri (PrG_L_6_4 and PrG_R_6_4) showed enhanced ALFF after short-term visual training (Figure 4 and Table 2). (FIGURE 4 | Brain regions with highest discriminative power pre- and post-training. The color bar indicates the weight of the feature. Note that positive weights refer to a higher level of ALFF after training, and negative weights refer to a lower level of ALFF after training. CG, cingulate cortex; SFG, superior frontal gyrus; PrG, precentral gyrus; SPL, superior parietal lobule; PCun, precuneus; L, left; R, right.) Activations have been found in the bilateral precentral gyrus when visual stimuli were shown to subjects (Marks et al., 2019), and the level of brain activity increased with the number of stimuli (Mechelli et al., 2014). Studies using car experts reported an increase in gray matter volume in this region (Gilaie-Dotan et al., 2012) and an increased level of evoked brain response to expertise-related visual stimuli in this region (Bentin, 2010). We suggest that our finding also reflects similar changes, but the exact nature of the alteration remains to be elucidated. LIMITATIONS Several issues should be mentioned when the findings from this study are considered. First, the sample size is not optimal. Given the longitudinal design and the COVID pandemic, the current size was the best that could be achieved. Second, given the ratio between the number of discriminative features and the number of samples, this study faced an overfitting issue, which is quite common in MRI studies using a machine learning analytical framework. It should be noted, however, that three steps were taken to minimize the possibility of overfitting in our study. In particular, a region-wise feature extraction strategy was used, which decreased the number of features from tens of thousands to 246. Then, a two-step feature selection was conducted, which decreased the number of features from 246 to 25. Finally, an RFE-SVM analytical framework was employed to reduce the number of features to an optimal level, resulting in 10 features, i.e., 10 brain regions. Taken together, we do recommend further studies to replicate the current findings using larger samples. Third, for the behavioral tasks, only visual tasks were used; tasks for WM, visual attention, and memory should be taken into consideration in future studies. CONCLUSION Our results suggest that real-world visual experience alters the resting-state brain representation in multidimensional neurobehavioral components, which are closely interrelated with high-order cognitive and low-order visual factors, i.e., attention control, WM, memory, and visual processing. We propose that our findings are likely to help foster new insights into the neural mechanisms of visual expertise. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of the First Affiliated Hospital of Xi'an Jiaotong University. 
The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS GS, CJ, and MD: conception and study design. HW, GS, and MD: data collection and acquisition. XZ and ZZ: statistical analysis. JS and GS: interpretation of results. JS, JW, and MD: drafting the manuscript and revising it critically for important intellectual content. All authors approved the final version to be published and agreed to be accountable for the integrity and accuracy of all aspects of the work.
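As a supplement to the analysis pipeline summarized in the Limitations section (246 region-wise ALFF features, a two-step selection down to 25, then RFE-SVM down to 10 discriminative regions), a minimal sketch of that chain is given below. The ALFF definition follows the conventional low-frequency amplitude measure; the TR, band edges, sample size, paired t-test pre-filter, and linear-SVM settings are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

def alff(ts, tr=2.0, band=(0.01, 0.08)):
    """Mean low-frequency spectral amplitude of one de-meaned regional BOLD series
    (conventional ALFF definition; TR and band edges are assumed values)."""
    x = np.asarray(ts, float) - np.mean(ts)
    freqs = np.fft.rfftfreq(x.size, d=tr)
    amp = np.abs(np.fft.rfft(x)) / x.size
    return amp[(freqs >= band[0]) & (freqs <= band[1])].mean()

# Placeholder data: 24 subjects x 240 volumes x 246 atlas regions, pre- and post-training.
rng = np.random.default_rng(0)
bold_pre = rng.normal(size=(24, 240, 246))
bold_post = bold_pre + 0.05 * rng.normal(size=bold_pre.shape)

feat_pre = np.apply_along_axis(alff, 1, bold_pre)     # (subjects, 246) region-wise ALFF
feat_post = np.apply_along_axis(alff, 1, bold_post)

# Step 1 (illustrative pre-filter): keep the 25 regions with the strongest paired differences.
_, p = ttest_rel(feat_pre, feat_post, axis=0)
keep = np.argsort(p)[:25]

X = np.vstack([feat_pre[:, keep], feat_post[:, keep]])
y = np.array([0] * feat_pre.shape[0] + [1] * feat_post.shape[0])   # 0 = pre, 1 = post

# Step 2: recursive feature elimination wrapped around a linear SVM, down to 10 regions.
svm = SVC(kernel="linear")
rfe = RFE(estimator=svm, n_features_to_select=10).fit(X, y)
selected_regions = keep[rfe.support_]
print("selected regions:", selected_regions)
print("CV accuracy:", cross_val_score(svm, X[:, rfe.support_], y, cv=5).mean())
```

The sketch treats pre- and post-training scans as separate samples for simplicity; the published analysis may have handled the paired structure and nested cross-validation differently.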
6,552
2022-05-25T00:00:00.000
[ "Psychology", "Biology" ]
Restricted Maximin surfaces and HRT in generic black hole spacetimes The AdS/CFT understanding of CFT entanglement is based on HRT surfaces in the dual bulk spacetime. While such surfaces need not exist in sufficiently general spacetimes, the maximin construction demonstrates that they can be found in any smooth asymptotically locally AdS spacetime without horizons or with only Kasner-like singularities. In this work, we introduce restricted maximin surfaces anchored to a particular boundary Cauchy slice $C_\partial$. We show that the result agrees with the original unrestricted maximin prescription when the restricted maximin surface lies in a smooth region of spacetime. We then use this construction to extend the existence theorem for HRT surfaces to generic charged or spinning AdS black holes whose mass-inflation singularities are not Kasner-like. We also discuss related issues in time-independent charged wormholes. Introduction As is by now well established [1,2], in AdS/CFT the Ryu-Takayangi [3,4] and Hubeny-Rangamani-Takayanagi (HRT) [5] prescriptions generally describe the von Neumann entropy of CFT regions A in terms of the area of an appropriate bulk surface. In particular, where ext(A) is the smallest extremal surface satisfying ∂(ext(A)) = ∂A and with ext(A) homologous to A. When there is more than one such surface with minimal area, the HRT surface is ambiguous. Such situations arise at HRT phase transitions, when the HRT surface jumps discontinuously as one varies the region A. Now, there are spacetimes in which HRT surfaces fail to exist or where those that do exist do not correctly compute the von Neumann entropy [6]. However, known spacetimes M 0 with the latter issue are λ → 0 limits of spacetimes M λ in which the HRT prescription succeeds, but where the correct (smallest) extremal surface recedes to the future or past singularity as λ → 0. Similarly, known spacetimes M 0 where extremal surfaces fail to exist are again λ → 0 limits of spacetimes M λ where HRT succeeds but in which all extremal surfaces recede in this way. One thus expects that HRT surfaces do in fact correctly compute the entropy in contexts such recessions are forbidden; i.e., where extremal surfaces are guaranteed to exist as surfaces in smooth regions of the bulk. The maximin construction of [7] shows this to be the case in asymptotically locally-AdS (AlAdS) spacetimes without horizons or where the future and past boundaries consist only of Kasner-like singularities 1 . Ref. [7] also shows in this context that HRT surfaces satisfy strong subadditivity. However, the full array of possible spacetimes have not yet been explored. Of particular interest are charged or rotating black holes. As is well known, stationary such black holes generally contain Cauchy horizons (see figure 1 for the AdS-Reissner-Nordström [AdS-RN] case). But this structure is unstable, and perturbations transform the Cauchy horizons into null mass-inflation singularities which are not Kasner-like [8][9][10][11][12][13][14]; see figure 2. As discussed in the above references, generic black holes are believed to contain singularities of this type. We show below that HRT surfaces exist in such spacetimes as well. Our method of proof extends the maximin arguments of [7]. As defined in [7], a maximin surface is a codimension-2 surface anchored to ∂A and satisfying the homology constraint, and minimizing area within some bulk Cauchy surface Σ ⊃ A, but which Figure 2: Perturbed one-sided (left) and two-sided (right) AdS-RN black holes. 
The null parts are mass-inflation singularities. A spacelike piece of the singularity forms whenever caustics arise on a null singularity. Such caustics always arise in the one-sided case, and also occur for strong enough perturbations (as shown here) in the two-sided case. The resulting spacelike singularities should be Kasner-like, as can be seen from the fact that the region between the inner-and outer-horizons in figure 1 admits a foliation by spatially homogenous slices that, when subjected to correspondingly homogeneous perturbations, becomes precisely an AdS-Kasner solution. Sufficiently close to a curvature singularity, one should be able to treat any solution as approximately homogeneous, so the spacelike part of the singularity should again be Kasner-like. In the left panel, the black hole is formed by a collapsing shell (in blue). is also maximal among such minimal surfaces with respect to variations of Σ. In particular, the intersection of Σ with the AlAdS boundary is allowed to vary so long as it still contains ∂A. Below, we consider restricted maximin surfaces -defined by bulk Cauchy surfaces Σ that intersect the AlAdS boundary on a fixed boundary Cauchy surface C ∂ -and show that they must agree with with HRT surfaces (and thus with unrestricted maximin surfaces) when they lie in a smooth region of the spacetime. In particular, since any Cauchy surface Σ is achronal, restricted maximin surfaces must be achronally related to some C ∂ . They are thus forbidden from reaching the null singularities in figure 2 and must lie in the smooth interior of the bulk spacetime as desired. We begin by introducing restricted maximin surfaces in section 2 and showing their equivalence to HRT surfaces when they lie in a smooth region of spacetime. Existence of HRT surfaces in (perturbed) AdS-RN-like spacetimes then follows immediately, and more generally in spacetimes where boundary-anchored bulk Cauchy surfaces can reach a future boundary only at Kasner-like singularities. Section 3 concludes with a brief discussion of possible extensions to spacetimes with more complicated null singularities. Restricted maximin surfaces This section will discuss restricted maximin surfaces. In a different context, a maximin construction that fixes the entire boundary of a (in that case partial) Cauchy surface was also used in [15]. Here and below we assume i) the null curvature condition (NCC): R ab k a k b ≥ 0 at each point for every null vector k a , ii) the generic condition [16], which requires at least some positive focusing along each segment of any null geodesic 2 , and iii) AdS-hyperbolicity in the sense of [7]. We choose an achronal codimension-1 surface A in the AlAdS boundary ∂M to define the boundary region whose entropy we wish to study. The boundary of A is denoted ∂A. Our restricted maximin surfaces are then defined via the following two-step procedure. Definition 1: For a chosen Cauchy surface C ∂ of ∂M with satisfies A ⊂ C ∂ , on any complete bulk Cauchy surface Σ with Σ ∩ ∂M = C ∂ let min(A, Σ, C ∂ ) denote the minimal-area codimension-2 surface anchored to ∂A and homologous to A within Σ (i.e., such that there is a region R of Σ for which ∂R = A ∪ min(A, Σ, C ∂ )). If there are multiple minimal area surfaces on Σ, then min(A, Σ, C ∂ ) can refer to any of them. Definition 2: The restricted maximin surface M R (A, C ∂ ) is defined as the min(A, Σ, C ∂ ) whose area is maximal with respect to variations of Σ that preserve C ∂ . 
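To make the two-step definition easier to parse, the restricted maximin prescription can be written compactly as a max–min of area over bulk Cauchy slices anchored on the fixed boundary slice C_∂, with the standard HRT entropy formula included for reference. This is only a notational restatement of Definitions 1 and 2 and of the usual HRT prescription (Newton constant G_N, units with ħ = c = 1), not new material from the paper.

```latex
% Requires amsmath. Restricted maximin surface (Definitions 1 and 2, compact form):
% minimize area over codimension-2 surfaces m in a fixed Cauchy slice Sigma,
% then maximize over Cauchy slices Sigma that end on the boundary slice C_partial.
\begin{equation}
  \mathrm{Area}\big[M_R(A, C_\partial)\big]
  \;=\;
  \max_{\Sigma \,:\, \Sigma \cap \partial M = C_\partial}
  \;\min_{\substack{m \subset \Sigma \\ \partial m = \partial A,\; m \sim A}}
  \mathrm{Area}[m] .
\end{equation}

% Standard HRT prescription for the entropy of the boundary region A:
\begin{equation}
  S(A) \;=\; \frac{\mathrm{Area}\big[\mathrm{ext}(A)\big]}{4 G_N},
  \qquad
  \partial\big(\mathrm{ext}(A)\big) = \partial A, \quad \mathrm{ext}(A) \sim A .
\end{equation}
```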
We use Σ M R (A,C ∂ ) to denote a Cauchy surface on which M R (A, C ∂ ) is minimal. In the case where there are multiple such surfaces, let M R (A, C ∂ ) denote any such surface that is stable in the following sense: When Σ is deformed infinitesimally to any nearby slice Σ (still containing C ∂ ), the new Σ still contains a locally-minimal surface M R (A, C ∂ ) on Σ close to M R (A, C ∂ ) which has no greater area, i.e. Area[M R (A, Below, we follow [7] in assuming that the stability criterion can be satisfied. When Σ M R (A,C ∂ ) is both spacelike and smooth, this follows by the technical argument in section 3.5 of [7]. But it remains an assumption more generally. Existence of M R (A, C ∂ ) then follows as in section 3.4 of [7] so long as boundary-anchored Cauchy surfaces can future or past boundaries only at Kasner-like singularities. In particular, the space of boundary-anchored achronal slices is compact in the same sense as the space of achronal slices anchored only to ∂A. Equivalence of HRT surfaces and restricted maximin surfaces in smooth regions of spacetime We now show that the restricted maximin surface M R (A, C ∂ ) is an HRT surface for every choice of C ∂ that contains A so long as M R (A, C ∂ ) lies in a smooth region of the bulk spacetime. The argument follows that given in [7] for the original unrestricted maximin surfaces. We first show that M R (A, C ∂ ) extremizes the area with respect to all variations that preserve ∂A. We begin with the case where Σ M R (A,C ∂ ) has continuous first derivative. For every point on a restricted maximin surface M R (A, C ∂ ), there are two independent directions that are normal to M R (A, C ∂ ). The area is minimal with respect to variations on Σ M R (A,C ∂ ) , and maximal with respect to variations normal to this surface. The corresponding first order variations of the area vanish. Linearity of first order variations then implies the area to be stationary under all deformations that preserve ∂A; i.e., the surface is extremal as desired. If instead the first derivative of Σ M R (A,C ∂ ) jumps discontinuously, the surface M R (A, C ∂ ) must still be extremal. The argument is identical to that of Theorem 15(b) in [7]. We now show that M R (A, C ∂ ) is the (properly anchored) extremal surface with least area, and thus an HRT surface. The argument uses the notion introduced in [7] of the 'representative' of any extremal surface x(A) on a Cauchy surface Σ. The representativex Σ (A) is defined by observing that x(A) splits some Cauchy surface into two pieces, which we arbitrarily label as Σ 1 and Σ 2 . When the new Cauchy surface Σ lies to the future of Σ 1 , this representative may be taken to be the intersection of Σ with the boundary of the future of Σ 1 (one may alternatively use Σ 2 ). As noted in [7] (theorem 3), since the bulk satisfies NCC and the boundary of the future contains only null geodesics without conjugate points, the focusing theorem [17] guarantees the representative to have no more area than x(A). And since ∂Σ is fixed to be C ∂ , the representative must have the same anchor set as x(A). If Σ is not entirely to the future of Σ 1 , one may similarly use e.g. the union of the boundary of the future of Σ 1 and the boundary of the past of Σ 2 (or alternatively other combinations of the futures and pasts of Σ 1,2 ). And the representative on Σ M R (A,C ∂ ) must have area at least as great as M R (A, C ∂ ) since the latter surface is minimal on Σ M R (A,C ∂ ) . Thus , and M R (A, C ∂ ) is a least-area extremal surface. 
Existence of HRT surfaces in standard charged and rotating black holes The above result will show that HRT surfaces exist in charged or rotating black hole spacetimes. Let us begin with the AdS-Reissner-Nordstrom (AdS-RN) solution. The maximal analytic extension is shown in figure 1. However, since the Cauchy horizons are unstable to forming mass-inflation singularities, it is natural to truncate the solution to the unshaded region between the past and future Cauchy horizons 4 . Given any boundary region A ⊂ ∂M , we may then choose a boundary Cauchy surface C ∂ ⊂ ∂M with C ∂ ⊃ A and construct the restricted maximin surface M R (A, C ∂ ) and the associated bulk Cauchy surface Σ M R (A,C ∂ ) . For C ∂ to be a full Cauchy surface it must include pieces on both boundaries even if A is contained in a single boundary. Now, by definition, M includes only finite boundary times. Since Σ M R (A,C ∂ ) is achronal (i.e., no two of its points can be connected by a timelike curve) and ends on C ∂ , it cannot reach the Cauchy horizon. Thus M R (A, C ∂ ) lies in the (smooth) interior of the spacetime and the argument of section 2.1 shows that M R (A, C ∂ ) is also an HRT surface for AdS-RN (truncated at the Cauchy horizons). Furthermore, it is clear that the same conclusion holds for any AdS-hyperbolic spacetime satisfying i) NCC, ii) the generic condition, and for which iii) all bulk Cauchy surfaces Σ anchored on boundary Cauchy surfaces C ∂ meet future or past boundaries only at Kasner-like singularities. We may then use the analysis of Kasner-like singularities in [7] to argue as above. In particular, this is true of the perturbed AdS-RN spacetimes with mass-inflation singularities shown in figure 2. And it continues to hold when rotation is added to the black holes, again truncating the spacetime at Cauchy horizons and/or mass-inflation singularities. Furthermore, strong subadditivity follows precisely as in [7]. Discussion We have used restricted maximin surfaces to show the existence of HRT surfaces in a broad class of spacetimes including standard black holes with mass-inflation singularities. In such cases, it also follows that HRT areas satisfy strong subadditivity. The above class of solutions is believed to be generic in the class of charged and rotating black holes [8][9][10][11][12][13][14]. As explained in the introduction, our result forbids such spacetimes from displaying the HRT-pathologies found in the examples of [6]. Taken together with the Lewkowycz- Maldacena [1] and Dong-Lewkowycz-Rangamani [2] derivations, this strongly suggests that these HRT surfaces correctly compute the associated entropies of the dual CFT state. 5 While our requirements are expected to be satisfied generically, one can nevertheless imagine spacetimes where they fail. Indeed, generalizing the time-independent wormholes of [26] to include electric charge immediately yields solutions of the sort shown in figure 3 in which (limits of) boundary-anchored bulk Cauchy surfaces can reach the bulk Cauchy horizons. For this particular spacetime one may nevertheless use the fact that the right-most and left-most wedges are identical to those of AdS-RN to show that, for any A, there is a (perhaps disconnected) extremal surface anchored to ∂A that is entirely contained in the union of these wedges. Thus HRT surfaces again exist for this spacetime, but it remains to argue that smaller such surfaces have not been lost to the future and past boundaries. Other interesting spacetimes may remain to be investigated as well.
3,420.2
2019-01-12T00:00:00.000
[ "Physics" ]
Identification of Potential Diagnostic Gene Targets for Pediatric Sepsis Based on Bioinformatics and Machine Learning Purpose: To develop a comprehensive differential expression gene profile as well as a prediction model based on the expression analysis of pediatric sepsis specimens. Methods: In this study, compared with control specimens, a total of 708 differentially expressed genes in pediatric sepsis (case–control at a ratio of 1:3) were identified, including 507 up-regulated and 201 down-regulated ones. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis of differentially expressed genes indicated the close interaction between neutrophil activation, neutrophil degranulation, hematopoietic cell lineage, Staphylococcus aureus infection, and periodontitis. Meanwhile, the results also suggested a significant difference for 16 kinds of immune cell compositions between two sample sets. The two potential selected biomarkers (MMP and MPO) had been validated in septic children patients by the ELISA method. Conclusion: This study identified two potential hub gene biomarkers and established a differentially expressed genes-based prediction model for pediatric sepsis, which provided a valuable reference for future clinical research. INTRODUCTION Sepsis is a life-threatening organ dysfunction initiated by an imbalance in the systemic inflammatory response to infection (1). Over the past few decades, numerous medical studies have proposed several definitions for sepsis, such as septicemia, sepsis, toxemia, bacteremia, endotoxemia, and so on (2). Sepsis is characterized by a general pro-inflammatory cascade that causes extensive tissue damages, which includes severe clinical spectrums, such as septic shock as well as multiple organ failures (3). Sepsis can be initiated by bacteria, fungi, as well as viruses, without specific treatment (4). In this case, the diagnosis of sepsis is particularly difficult because these patients have multiple comorbidities and underlying diseases (5). Sepsis is the leading cause of child mortality worldwide, which is estimated as 60% for children under 5 (6). A US cohort study indicated a significant increase for the annual incidence of severe pediatric sepsis (7). Over the last decades, the management for pediatric sepsis has improved gradually (8). The current therapies in clinic include resuscitation, prompt and appropriate antimicrobial therapy, accurate fluid balance, blood glucose, as well as source control (9). However, we still lack a specific molecular therapy for this condition, except for antimicrobial therapy. Numerous trials of potential biological agents targeting different mediators of sepsis have failed (2). Despite advances in intensive care and supportive technology, the mortality rate of sepsis in children still stay in a high position without going down (10). The current recommendation for identifying sepsis is the SOFA score, which refers to Sequential (Sepsis-Related) Organ Failure Assessment. SOFA is a simple system, which uses accessible parameters in daily clinical practice to identify dysfunction or failure of the key organs as a result of sepsis (11,12). The European Medicines Agency has accepted that a change in the SOFA score of 2 or more is an acceptable surrogate marker for sepsis (13). Unfortunately, the criteria still cannot confirm or refute the diagnosis of sepsis completely given the complexity of the sepsis response. 
Moreover, sepsis is a time critical emergency, as the disease may progress rapidly to organ failure, shock, and death, which require a prompt recognition. Based on inflated misdiagnosis rate and poor accuracy of diagnosis, pediatric sepsis has brought great difficulty to clinical treatment. To this end, a more comprehensive approach to predict pediatric sepsis based on the specific target gene differential expression is required. To address these issues, in this study, we used a combination of bioinformatics and machine learning to screen out the potential biomarkers in the pathogenesis of pediatric sepsis specimens and then constructed a diagnosis model. All of these promising outcomes enriched the diagnosis of the disease, which provide tremendous help for pediatric sepsis study. Data Source This study utilized the mRNA chip data in the GEO database, and the samples were from data sets numbered GSE26440, GSE26378, and GSE66099. GSE26440 included 98 whole blood samples of septic children and 32 whole blood samples of healthy children; GSE26378 included 82 whole blood samples of septic children and 21 whole blood samples of healthy children; GSE66099 included 229 whole blood samples of septic children and 47 whole blood samples of healthy children. The whole genome expression profiles of the above samples were detected by Affymetrix Human Genome U133 Plus 2.0 Array chip platform. The age of the sample ranged from 0.1 to 9.8 years (see Supplementary Table 1 for detail). There was no significant clinical difference between the specimen of septic children and healthy children (gender, age, etc.). The third International Consensus Definitions for Sepsis and Septic Shock (Sepsis 3) suggested the Sequential Organ Failure Assessment (SOFA) score to grade organ dysfunction in adult patients with suspected infection, which was not suitable for children illness. As reported previously, we used a pediatric version of the SOFA score (pSOFA) in this study (14). The septic children in the study had a pSOFA score ≥2. Differential Gene Analysis We firstly used the Robust Multi-Array Average (RMA) method to normalize the original data measured by the chip and then took the normalized value and log2 logarithm to generate the data after normalization, subsequently for differential expression analysis. We screened differentially expressed genes based on the limma function package of R language (version 3.5.2, the same below) (15). The absolute values of log-transformed differential expression multiple (Log2FC) >1 and FDR < 0.05 were used as a criteria. Functional Enrichment Analysis For the obtained differentially expressed genes, we used the "clusterProfiler" function package in R language for enrichment analysis of GO (Gene Ontology, including Biological Process, Molecular Function, and Cellular Component) and KEGG (Kyoto Encyclopedia of Genes and Genomes, including key related pathways) analysis. When p < 0.05, we considered the corresponding entries to be significantly enriched (16). Protein-Protein Interaction Networks and Identification of Hub Genes The STRING database is the one for analyzing and predicting protein functional associations and protein interactions. In this study, we utilized STRING (https://STRING-db.org/, version 11.0) to analyze protein functional associations and protein interactions (17). 
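Before continuing with the network analysis: the differential-expression screen described above was implemented with the R limma package (source code in Supplementary Table 2). Purely to make the thresholding step concrete, a Python sketch using ordinary t-tests with Benjamini–Hochberg correction is shown below; it reproduces the |log2FC| > 1 and FDR < 0.05 filter but not limma's moderated statistics. Sample counts follow GSE26440 (98 sepsis, 32 control); the expression values are random placeholders.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

# expr: genes x samples matrix of log2(RMA-normalized) intensities (placeholder here)
genes = [f"gene_{i}" for i in range(1000)]
expr = pd.DataFrame(np.random.randn(1000, 130), index=genes)
is_sepsis = np.array([True] * 98 + [False] * 32)          # GSE26440: 98 sepsis, 32 control

log2fc = expr.loc[:, is_sepsis].mean(axis=1) - expr.loc[:, ~is_sepsis].mean(axis=1)
_, pvals = ttest_ind(expr.loc[:, is_sepsis], expr.loc[:, ~is_sepsis], axis=1)
fdr = multipletests(pvals, method="fdr_bh")[1]            # Benjamini-Hochberg FDR

deg = pd.DataFrame({"log2FC": log2fc, "FDR": fdr}, index=genes)
deg = deg[(deg["log2FC"].abs() > 1) & (deg["FDR"] < 0.05)]
up, down = (deg["log2FC"] > 0).sum(), (deg["log2FC"] < 0).sum()
print(f"{len(deg)} DEGs: {up} up-regulated, {down} down-regulated")
```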
The Cytoscape (version 3.7.2) was used to visualize PPI network, and the cytoHubba plug-in in Cytoscape was used to screen the key genes (hub genes) in the PPI network based on the Algorithm of Maximum neighborhood component (MNC) (18). Calculation of Immune Infiltrating Cells We used the software CIBERSORT (https://cibersort.stanford. edu) to calculate the relative proportions and p-values of 22 immune infiltrating cells in each sample. This software provided a pre-convolution algorithm to characterize the composition of immune invading cells based on the gene expression matrix using a deconvolution algorithm. CIBERSORT calculated the relative proportion of 22 immune infiltrating cells in each sample based on the expression of these 547 barcode genes as well as the pvalue. The smaller the p-value, the higher the content of immune infiltrating cells in the sample. Logistic Regression Model Construction Here, we used GSE26440 as a training set to construct the Logistic model, and GSE26378 and GSE66099 as two independent verification sets to verify the model. The part of the data set GSE66099 that coincided with GSE26440 and GSE26378 had been removed. The remaining 138 specimen were processed samples as independent verification sets in this study. Firstly, the samples were divided into two groups: normal control group and pediatric sepsis group. The GLM function in R language was used as a continuous variable, and the sample type was used as a binary response. A multifactor logistic regression model was constructed, and then the variables were further screened by stepwise regression. The model was then reconstructed using the screened variables and the p-value of each variable was calculated by the model. Finally, the candidate gene reconstruction model with p < 0.05 was selected as the final model for followup analysis. The source code (in reproducible format) in this study has been shown in Supplementary Table 2. Construction of Random Forest Classification Model In this study, the sample type was considered as a dependent variable, and the selected gene expression value was considered as an independent variable. The method of bootstrap sampling and the method of Bagging were utilized to generate multiple decision tree classifiers (implemented by the "randomForest" function package in R) and the final random forest model. ELISA Analysis of Hub Genes A total of 63 (septic) children diagnosed by pathology and evaluated by SOFA score system in Tianjin People's Hospital from January 2018 to December 2020 were randomly selected and collected. Due to the limitation of sample size as well as in order to generate statistically convincing results, the patients were divided into three groups (control group: pSOFA = 0; mild pediatric sepsis group: pSOFA = 2-4; severe pediatric sepsis group: pSOFA score ≥ 5). There were 21 cases in each group. This study is in line with the medical ethics standards and approved by the hospital ethics committee. All treatment and testing were carried out after obtaining informed consent from patients or their families. The concentrations of hub genes were determined by ELISA double antibody sandwich method from patients' peripheral blood. The specific operation was carried out in strict accordance with the instructions of the kit (Abcam company, US). The experimental results were repeated 3 times independently and were tested by statistical methods. 
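The two models described above (a multifactor logistic regression pruned by per-variable p-values, and a bagged random-forest classifier) were built in R with glm/stepwise selection and the randomForest package. A rough Python analogue is sketched below; the gene list follows the five candidates named later in the Results, while the expression values, labels, and hyperparameters are placeholders rather than the study's data or settings.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier

# Placeholder expression matrix for the five candidate genes; labels follow GSE26440.
genes = ["TLR2", "MMP9", "TLR8", "MPO", "CCL5"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(130, 5)), columns=genes)
y = pd.Series([1] * 98 + [0] * 32)                        # 1 = pediatric sepsis, 0 = control

def fit_logit(cols):
    return sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)

model2 = fit_logit(genes)                                 # multifactor logistic regression
print(np.exp(model2.params))                              # odds ratios: OR > 1 = positive association
print(model2.pvalues)

# Keep only predictors with p < 0.05 (MMP9 and MPO in the paper) and refit the final model.
final_genes = [g for g in genes if model2.pvalues[g] < 0.05]
model3 = fit_logit(final_genes) if final_genes else model2

# Bagged decision trees, analogous to the R randomForest model; impurity-based
# importances correspond roughly to MeanDecreaseGini.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
gini_importance = pd.Series(rf.feature_importances_, index=genes).sort_values(ascending=False)
print(gini_importance)
```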
Analysis Results of Differentially Expressed Genes We first standardized the microarray data of 3 sets of GEOs to remove batch effects. Using GSE26440 normalized data for differential gene analysis, we obtained a total of 708 differentially expressed genes in the pediatric sepsis group relative to the control group, including 507 up-regulated genes and 201 downregulated genes ( Figure 1A and Supplementary Table 3), and the expression of differentially expressed genes was significantly different between the disease group and the control group ( Figure 1B). GO and KEGG Enrichment Analysis Results By performing GO and KEGG enrichment analysis on these 708 differentially expressed genes, we found that these 708 differentially expressed genes were enriched in GO terms related to immune cells such as neutrophil activation and neutrophil degranulation (Figure 2A). At the same time, the hematopoietic cell lineage and Staphylococcus aureus infection were significantly enriched in KEGG pathway analysis (Figure 2B). Immune Cell Calculation Results Since GO and KEGG results showed the close correlation to immune cells, we next analyzed the immune cell composition in the samples to study the immunity in different groups of samples. Using the CIBERSORT algorithm, we explored the differences in immune infiltration among 22 immune cell subgroups for the 130 samples in GSE26440. We found that the relative proportions of 16 immune cells in 22 types of immune cells were significantly different between the two groups ( Figure 3A), which included resting and activated memory CD4+ T-cells, follicular helper T-cells, T regulatory cells (Tregs), gamma/delta T-cells, activated NK cells, monocytes, macrophages, resting dendritic cells, resting, and activated mast cells, as well as neutrophils. Meanwhile, the proportion of neutrophils in the pediatric sepsis group was significantly higher than the normal group ( Figure 3B). PPI Network Construction and Screening of Key Genes We established a PPI network of 708 genes by STRING database, as 577 genes with a confidence score ≥0.4. We used Cytoscape software and the MNC algorithm to score the importance of each node in the network. With these together, we developed the top 50 genes according to the score from large to small. The darker the color, the more important the node was (Figure 4). The Construction of the Logistic Model and the Random Forest Classification Model With the selected 50 genes, we generated logistic regression model 1 from the training set GSE26440. In order to use as few variables as possible to build a strong interpretation model, we performed a stepwise regression method to further identify 5 primary genes from these 50 genes, which were TLR2, MMP9, TLR8, MPO, and CCL5. Logistic regression model 2 was constructed by incorporating these 5 genes into the model as variables. An OR value >1 indicated that the expression of this factor was positively correlated with the onset of disease, while <1 was negatively correlated. At the same time, we found that the p-values of these 2 genes, MMP9 and MPO, were <0.05, indicating that they contribute greatly to the model while others (TLR2, TLR8, as well as CCL5 with p-value more than 0.05) were not used for model 3 construction and subsequent analysis. Furthermore, we reconstructed logistic regression model 3 with these 2 genes as the final model and found that this logistic regression model had no extreme point affecting the accuracy of the model. The red dashed line in the figure indicated the COOK distance. 
Generally, the point where the COOK distance >0.5 was a very "influential" point, which affected the reliability of the model. It could be seen in the figure that our model did not show such a point ( Figure 5A). The AUC value in the training set GSE26440 was 0.9907, while the AUC values in GSE26378 and GSE66099 were 0.9477 and 0.9562, respectively ( Figure 5B). The AUC as a numerical value can directly evaluate the quality of the model. The larger the value, the better the model. To further evaluate the importance of the 2 hub genes, we also constructed a random forest classification model. The GSE26440 was used as training set. The sample type was used as a dependent variable. The expression of 50 genes selected in the previous step was used as an independent variable. See Figure 5C for details. The figure demonstrated the top 30 genes in the importance ranking of these 50 genes in the random variable model. The MeanDecreaseAccuracy indicated the decrease of model accuracy after variable replacement, while the MeanDecreaseGini indicated the decrease of model GINI coefficient after variable replacement. The larger the 2 values were, the more important the variable was. From the figure, it was clear that MMP9 and MPO were the top 2 genes in the MeanDecreaseAccuracy as well as MeanDecreaseGini's scores, indicating that these 2 genes were more crucial variables in the random forest model (Figure 5D). The above results demonstrated that the model based on these 2 genes could be used as the primary criteria for pediatric sepsis staging. The Functional Validation of the Selected Biomarkers To functionally verify the possibility of biomarkers for pediatric sepsis in clinic, we tested the concentrations of 2 target hub genes in pediatric sepsis patients' peripheral blood by ELISA method. As pediatric sepsis severity increases, the levels of MMP9 and MPO decreased significantly (P < 0.05, shown in Table 1). DISCUSSION Sepsis can lead to death of children with a dramatic increase in the incidence of disease. The prominent problem for pediatric sepsis is the inflated misdiagnosis as well as the lack of a gold standard. Epidemiologic research has provided evidence that sepsis is more or less similar to over 50 systemic diseases in children (19)(20)(21). For instance, in the case of fever: immunosuppressed children do not always develop fever, so the infection is hard to detect. In contrast, critically ill children have a certain degree of hyperthermia but may not present infection (22). All of these may lead to the disease being ignored or misdiagnosed in the first place. Whenever symptoms are present, sepsis in children becomes more severe. All of these are all in dire need of effective forecasting targets or biomarkers for pediatric sepsis. To this end, it is beneficial to develop a comprehensive specific expression profile of genes in pediatric sepsis patients for potential candidates. Here, compared with control specimens, a total of 708 differentially expressed genes in pediatric sepsis were screened out, including 507 up-regulated genes and 201 down-regulated genes (see Figure 1A for details). We further studied the biological process related to these genes using GO and KEGG enrichment analysis. We found that these target genes were significantly enriched in biological processes (BP) related to immune cells such as neutrophil activation, neutrophil degranulation, hematopoietic cell lineage, and S. aureus infection (see Figure 2 for details). Previously, a multicohort analysis by Sweeney et al. 
suggested that neutrophil activation, neutrophil degranulation, monocytes, as well as T cell-associated process had been involved in sepsis development and formation (23). They utilized data sets containing cohorts of children and adults, men and women, with a mix of community-and hospital-acquired sepsis, while we focused on pediatric sepsis; this may support the fact that these processes are common for all-age sepsis. This host response in septic progression involves many defense mechanisms with strong cellular activation, including neutrophil activation. In this process, neutrophil cells are key to innate immunity through their complex interactions with vascular cells, and Their activation also leads to the release of neutrophil traps, which are involved in pathogen containment and phagocytosis, as well as coagulation activation (24). Several reports have demonstrated that neutrophils generally have a relatively high expression in sepsis patients (25,26). Previously, a study using high-throughput technologies has been able to identify differentially expressed pediatric septic shock biomarkers using gene expression data to predict long-term outcomes (27). In this study, highly expressed genes in pediatric sepsis are enriched in multiple KEGG pathways and GO terms, which are related to neutrophils. Therefore, high expression of related genes may be one of the potential causes of increased neutrophil content in pediatric sepsis. On other hand, sepsis can be induced by viruses, bacteria, fungi, etc. The enrichment of neutrophils in the study might be due to the high infection rate of bacteria in children's specimen. Previously, it was suggested that sepsis was closely associated with hematopoietic stem cell exhaustion and hematopoietic cell lineage, which was processed through a Toll-like receptor 4 (TLR4)-related mechanism (28,29). S. aureus is now the most common cause of bacteremia and infective endocarditis in industrialized nations worldwide and is associated with excess mortality when compared to other pathogens. It has been suggested that S. aureus is the primary cause of pediatric sepsis (30). Overall, these pathways are related to pediatric sepsis on some levels, and they provide a significant research starting point. Due to complexity of pathogenesis in pediatric sepsis, it is impossible to study each potential single gene individually. To this end, an alternative method combining bioinformatics and machine learning is required for our study. Here, we used Cytoscape software and the MNC algorithm to identify the top 50 genes according to their scores in the model (see Figure 4B for details). Using the 50 genes, we generated a logistic regression prediction model. With further reconstruction, two primary genes including MMP9 and MPO had been screened out. Matrix metalloproteinases 9 (MMP-9) is a zinc-dependent gelatinase, which could decrease the expression of extracellular matrix proteins and influence the metastatic behavior of immune cells (31). MMPs are secreted as pro-MMPs, which are regulated by tissue inhibitors of metalloproteinase (TIMPs) as well as by α-macroglobulins (31). Previously, MMP-9, TIMP-1 levels, and MMP-9/TIMP-1 ratio have been suggested as biomarkers in adult severe sepsis and septic shock (32,33). A study by Alqahtani et al. demonstrated that the MMP-9/TIMP-1 ratios can also serve as a biomarker for the identification of sepsis in pediatric patients (34). This is consistent with our integrated study. 
However, we did not find a significant difference for TIMP-1. This may be due to the fact that they used febrile controls in addition to a healthy control group as was often used in studies of biomarkers in sepsis, or the constant comparison in our report did not fluctuate timedependent analysis as they performed. All of these deserve further investigation. On the other hand, it is interesting to further explore the function of MPO in the pediatric sepsis study. The neutrophil myeloperoxidase (MPO) is mainly shown to promote oxidative stress by the production of active chlorinated molecules (35). It was rarely reported to be associated with pediatric sepsis, yet a small sample size study suggested a lower MPO level in pediatric sepsis compared with the control group (91.24 vs. 116.55 U/L; p value = 0.023) (36). These evidences highlighted the importance of understanding the relation between the MPO gene family pathway and pediatric sepsis, which initiated a tremendous starting point for the following study. Importantly, a recent study using two feature selection methods including Random Forest Feature Importance (RFFI) and Minimum Redundancy and Maximum Relevance (MRMR) also provided multiple differentially expressed genes and enriched pathways for pediatric sepsis. Within these, MPO was also a primary candidate (37). Using two potential target genes (MMP9 and MPO), we established a logistic regression model aiming for pediatric sepsis prediction. The accuracy of the model prediction was evaluated and approved by clinical data outcomes ( Table 1), which demonstrated the tendency of two biomarkers' change for different levels of pediatric sepsis patients. To conclude, in light of the fact that there remains no gold standard diagnosis and no reliable disease-specific prediction for pediatric sepsis, we summarized the differential expression profile of genes in the disease. Several target genes established a specific expression manner, which initiated new insights into the management of pediatric sepsis therapeutic biomarkers discovery and provided a very valuable data reference for future clinical research. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Tianjin Union Medical Center, Tianjin, China. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
4,968.6
2021-03-04T00:00:00.000
[ "Biology", "Medicine" ]
Memory Impedance in TiO2 based Metal-Insulator-Metal Devices Large attention has recently been given to a novel technology named memristor, for having the potential of becoming the new electronic device standard. Yet, its manifestation as the fourth missing element is rather controversial among scientists. Here we demonstrate that TiO2-based metal-insulator-metal devices are more than just a memory-resistor. They possess resistive, capacitive and inductive components that can concurrently be programmed; essentially exhibiting a convolution of memristive, memcapacitive and meminductive effects. We show how non-zero crossing current-voltage hysteresis loops can appear and we experimentally demonstrate their frequency response as memcapacitive and meminductive effects become dominant. C lassical circuit theory is founded on the axiomatic definition of three fundamental circuit elements: the resistor by Ohm 1 , the inductor by Faraday 2 , and the capacitor by Volta 3 . These definitions however are only static descriptions of the instantaneous values of the corresponding variables, despite the fact that dynamic responses have been observed well before the establishment of these definitions 4 . About forty years ago, Chua envisioned the concept of memory-resistors (memristors) based-upon a symmetry argument 5 that imposed the existence of a missing fundamental circuit element that provided linkage between charge and flux ( Figure S1a). Whilst this argument is considered to be fair, the later generalised definition by Chua and Kang 6 is of more fundamental importance as it provides a state-dependent relationship between current and voltage that broadly captures dynamic resistive elements 7 . Originally the broad generalization of ''memristors'' was conceived as a new class of dynamical systems and as such it was referred as ''memristive systems''. To avoid confusion, we follow the latest notation imposed by Chua, the lead author of both reports, and we refer to this class of devices as ''memristors''. This is in tandem with the denomination of capacitors and inductors; practical implementations are not explicit facsimile to the corresponding ideal definitions yet these are not called capacitive or inductive systems. Similarly, two new auxiliary classes of dynamical elements were coined up 8 as memory-capacitors (memcapacitors) and memory-inductors (meminductors) that establish state-dependent relationships among chargevoltage ( Figure S1b) and current-flux (Figure S1c) respectively. The common property of these distinct subsets is memory, which is attributed to inertia between the causal stimulus and the diverse range of physical mechanisms that support the various state modalities. The signature of this inertia is a pinched hysteresis loop in the i-v, q-v and w-i domain respectively for memristors, memcapacitors and meminductors (Figures S1d,e,f); notwithstanding the classical definitions of resistors, capacitors and inductors that are described by single-valued functions and should thus be considered as special cases of these broad subsets (Figure 1a). To date, the research community has shown great interest on demonstrating exclusive solid-state implementations of memristors 9-11 , memcapacitors 12-14 and meminductors 15,16 ; some of which are highlighted in a review by Pershin and Di Ventra 17 . 
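Since the argument that follows contrasts the ideal definitions with practical device responses, it may help to see how a pinched i-v loop arises from the simplest state-dependent resistance. The sketch below uses the textbook linear-drift memristor model with made-up parameters; it is not the authors' TiO2 device model and the values are purely illustrative.

```python
import numpy as np

# Textbook linear-drift memristor (illustrative only, not the device model of this paper):
#   R(w) = Ron * w + Roff * (1 - w),   dw/dt = k * i(t),   0 <= w <= 1
Ron, Roff, k = 100.0, 16e3, 1e4
f, dt = 1.0, 1e-5
t = np.arange(0, 2 / f, dt)
v = 1.0 * np.sin(2 * np.pi * f * t)

w, i = 0.1, np.zeros_like(t)
for n, vn in enumerate(v):
    R = Ron * w + Roff * (1.0 - w)
    i[n] = vn / R
    w = np.clip(w + k * i[n] * dt, 0.0, 1.0)   # state (memory) variable update

# Because i = v / R(w), the current vanishes whenever v = 0: the (v, i) trajectory
# traces a hysteresis loop pinched at the origin, the signature described in the text.
```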
Deliberate attempts to fabricate practical cells, past HP's work on memristors 11 and the supplementary theoretical definitions for memcapacitors and meminductors 8 , have strived to match the characteristics dictated by the ideal definitions, while reports on related phenomena that were incidentally observed were more relaxed. There is serious contention among scientists particularly for memristors, the most exploited subset of devices so far is the non-zero crossing i-v characteristics of solid-state implementations which is considered to be contradicting the original theoretical conception 18 , arguing the need for revamping the existing memristor theory; a plausible extension is to incorporate a nano-battery effect 19 . In our opinion, the global definition of memristors 6 is well-defined. Such phenomena simply manifest the coexistence of distinct memory modalities 20 , which collectively facilitate a response that inevitably differs from the classical theoretical definition; one should be able to disentangle the individual contributions within single devices. The coexistence of parasitic effects has been observed on practical devices for more than a century 21,22 , with this effect being more apparent when the devices' characteristics are exploited in broad frequency spectrums where the static parasitic contributions are more notable. We argue that practical solid-state devices would experience dynamic parasitic contributions and it is thus more appropriate to investigate the devices' response utterly as memory impedance. Figure 1a demonstrates conceptually how such a complex interaction emerges by the mingling of distinct memory effects, namely memristive, memcapacitive and meminductive. Particularly in case I (III) the corresponding i-v crossing point would occur within the first (third) quadrant as illustrated in Figure 1b (1c), due to an additive memcapacitive (meminductive) contribution. In case IV however, all three fundamental memory effects are expressed in a device and the i-v crossing point could either occur within the first or third quadrant as shown in Figure 1d, depending upon the device's impedance constituent memory elements (Figure 1d inset), whose dominance is determined via the stimulus frequency. Non-zero-crossing behaviour of TiO 2 based metalinsulator-metal devices Here, we experimentally demonstrate the coexistence of dynamic parasitic effects in the prototypical Metal-Insulator-Metal (MIM) single crossbar architecture based on a TiO 21x /TiO 2 functional core 23 ; one of our prototypes is illustrated in the inset of Figure 2a. Multiple devices were fabricated, with the fabrication flowchart outlined in the Methodology section. For all devices, we initially employ a quasi-static 63 V voltage sweep for acquiring the characteristic pinched-hysteresis i-v loop that is the well-known memristor signature 24 . Figure 2a shows the i-v signature of a Pt/TiO 21x /TiO 2 /Pt cell with an active area of 5 3 5 mm 2 . In this case, we observed that the crossing point occurs at 1 V, indicating the presence of some parasitic capacitance. The influence on this parasitic capacitance was explored while programming the device at bipolar states (see Figures S2a, b for programming/evaluating procedure). Figures 2b and 2c demonstrate the concurrent resistive/capacitive switching of our prototype, toggling from a high-resistive state (HRS) to a lowresistive state (LRS) and correspondingly from a high-capacitive state (HCS) to a low-capacitive state (LCS). 
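To illustrate case I numerically, the same toy memristor can be placed in parallel with a small fixed capacitance. A linear C is used only for simplicity, whereas the devices studied here exhibit a state-dependent (mem)capacitance; all component values are illustrative assumptions.

```python
import numpy as np

# Toy memristor (as in the previous sketch) with a fixed parallel capacitance added.
Ron, Roff, k, C = 100.0, 16e3, 1e4, 50e-9
f, dt = 1e3, 1e-8
t = np.arange(0, 2 / f, dt)
v = 1.0 * np.sin(2 * np.pi * f * t)
dvdt = np.gradient(v, dt)

w = 0.1
i_total = np.zeros_like(t)
for n, vn in enumerate(v):
    R = Ron * w + Roff * (1.0 - w)
    i_mem = vn / R
    i_total[n] = i_mem + C * dvdt[n]          # memristive + capacitive branch currents
    w = np.clip(w + k * i_mem * dt, 0.0, 1.0)

# At v = 0 the capacitive branch still carries current (C * dv/dt != 0), so the loop
# no longer pinches at the origin; per Figure 1b of the text, an additive capacitive
# contribution displaces the crossing point into the first quadrant.
```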
These experimental results confirm that the memristive behavior is indeed accompanied by a memcapacitive behavior. It is worthy to point out that the capacitance switching ratio is frequency dependent 25 . As demonstrated in Figure 2c, the difference between HCS and LCS is significantly larger at the lower C-V test frequency (100 KHz). Unless otherwise stated, all C-V tests were implemented at 1 MHz, a frequency that is near to the RC pole of the device. Thus, the capacitive switching ratio is lower than that of resistance extrapolated from DC pulses. Similar experimental results have been previously demonstrated in other ReRAM devices 20,25 . In this manuscript, all tested devices were electrically characterized without employing an electroforming step. As a result, the activation energy supplied by a single set or reset pulse is not sufficient to generate formation and rupture of continuous filaments. Resistive switching events are thus not available at each programming pulse, rather at multiple pulses that facilitate a collective behavior, as demonstrated in Figures 2b and 2c. In this particular case, the memristive/memcapacitive switching are correlated; a phenomenon that has previously been observed in perovskite 25 . It is worth mentioning though that the coexistence of memristive and memcapacitive behaviors have also been observed on other MIM cells from the same wafer with 2 3 2 mm 2 and 10 3 10 mm 2 active areas. Interestingly, the programming memristance and memcapacitance of devices of 2 3 2 mm 2 active area are anticorrelated, as shown in Figure S2c, while devices with larger active areas, i.e. 5 3 5 mm 2 and 10 3 10 mm 2 , follow alike switching trends. Similar opposing memristor -memcapacitor programming trends have been reported recently 26 , with the programming and evaluation of the two memory properties though being executed independently one from another. The reason for the area dependence of the relationship between capacitance and resistance is argued to be due to distinct dominant conducting mechanisms. It should be noted that the measured data in devices with active area of 2 3 2 mm 2 was attained under bipolar switching mode, where schottky barriers at both the top and bottom interfaces are anticipated to play a dominant role. In this case, positive programming pulses could decrease the top interface resistance and its related capacitance, but it would not necessarily increase the bottom interface resistance and its related capacitance due to the pulse's saturation limit 25,27 . As a result, the device's effective resistance will decrease due to the shunting of the top space charge region. In turn, the total capacitance of the whole device will increase because its value is now dominated mainly by the bottom interface 25 . Similar explanations apply for negative programming pulses. To further explore any possible influence of the device's electrodes on the measured capacitance, we implemented C-V tests on pristine devices of varying electrode areas (2 3 2 mm 2 , 5 3 5 mm 2 , and 10 3 10 mm 2 ). As expected, the measured initial capacitance is in proportion to the electrode areas and the capacitance per unit electrode area is ascertained to be C unit 5 18 (fF/mm 2 ). Detailed arguments could be found in Supplementary materials. The experimental results shown in Figure S3 in the supplementary materials serve as evidence that the functional mechanism of resistance modulation in our MIM devices is indeed filamentary in nature. 
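The area scaling quoted above lends itself to a quick back-of-the-envelope check: with C_unit ≈ 18 fF per unit electrode area, the pristine capacitances of the three electrode sizes and the corresponding RC poles follow directly. The resistance value below is an assumption inserted only to show the arithmetic; it is not a measured device value.

```python
import math

c_unit = 18e-15                      # F per unit electrode area, as quoted in the text
areas = {"2x2": 4, "5x5": 25, "10x10": 100}
r_assumed = 1e6                      # illustrative HRS resistance (assumed, not measured)

for name, a in areas.items():
    c = c_unit * a                   # pristine device capacitance scales with area
    f_pole = 1.0 / (2 * math.pi * r_assumed * c)
    print(f"{name}: C = {c * 1e15:.0f} fF, RC pole ~ {f_pole / 1e6:.2f} MHz (for R = 1 MOhm)")
```

For the assumed 1 MΩ, the pole frequencies fall in the hundreds of kHz to low-MHz range, consistent with the statement that the 1 MHz C-V test frequency sits near the RC pole of the device.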
We thus argue that for the device to assume a LRS, the TiO 2 core will locally undergo substantial reduction (TiO 2-x ) that will support one or multiple continuous current percolation filaments that would Figure 2d. It should be noted that the filament in TiO 2 based resistive devices has a bulky conical shape, which has been experimentally demonstrated in recent studies 28,29 . In the case however that the device is programmed at HRS, this filament would be annihilated (or partially formed), resulting in a barrier region (L 1 ) among any existing percolation branch and the BE. Yu et al. 30 pointed out that such a barrier would render a poor DC conduction that can be modelled as a capacitance, nonetheless they overlooked the fact that this is essentially an auxiliary memcapacitance, as theoretically denoted by Mouttet 31 . Depending on the polarity of the applied potentials, this barrier would decrease or increase that in turn would set the corresponding static resistive and capacitive states. While the device is programmed at a HRS, measured results acquired by impedance spectroscopy would denote that the device could be statically modelled as a parallel combination of a resistor (R 1 ) and a capacitor (C) in series with a small resistor (R 0 ), as shown in Figure 2e. R 0 represents all other Ohmic resistances including that of the initial conducting filaments and measurement connections, which is usually no more than 20 V overall 32 . One would argue that when a continuous filament is formed, resulting into LRS, this barrier would diminish, rendering minimum static values for both resistance and capacitance. This is indeed illustrated in Figure 2f, where the measured results on the presented Nyquist plot cluster together on the Re (Z) axis. A series of repeated impedance measurement cycles was implemented on the 10 3 10 mm 2 active cell with the acquired results shown in Figures 3a-3b (the corresponding programming-evaluating procedures are shown in Figures S2a, b). The 10 3 10 mm 2 cells demonstrate a rather interesting response, as shown in Figure 3b, that directly contradicts the unipolar impedance measurements acquired from 5 3 5 mm 2 active cells. In this case, the applied stimuli cause the device's reactance to toggle between both negative and positive values, indicating that capacitive and inductive behaviours are alternately dominating. The measured reactance values in this case are treated as the 'net' contribution of a concurrent capacitive and inductive response. Our measured results thus indicate that a functional-oxide based MIM capacitor of relatively large active area (in this case $ 10 3 10 mm 2 ) can effectively support concurrently all three memory states. The origin of this triple-state coexistence and the distinct programming trends can possibly be explained via the filamentary formation/annihilation that occurs within the active core of our prototypes due to a redox mechanism of TiO 2 . In HRS, large resistive states would ascertain large tunnelling gaps (L 1 ) between the device's electrodes, and thus should introduce a significant capacitive effect. A larger active area, as in the case of our 10 3 10 mm 2 prototype, in principle allows percolation channels to occur over larger volumes, essentially facilitating the formation of winding conductive paths that will inevitably introduce a notable effective inductance. But in HRS, the filaments density and path tortuosity are limited, and thus the capacitive effects are dominant. 
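The static HRS model just described (R1 in parallel with C, in series with a small Ohmic R0) has a simple closed-form impedance, sketched below over the measured sweep range. R0 follows the ~20 Ω figure quoted in the text; R1 and C are placeholders rather than fitted device values. On a Nyquist plot this traces a semicircle of diameter ≈ R1, which collapses onto the Re(Z) axis as R1 → 0, matching the LRS behaviour of Figure 2f.

```python
import numpy as np

def z_hrs(freq_hz, r0=20.0, r1=1e6, c=0.45e-12):
    """Impedance of R0 in series with (R1 || C) -- the static HRS model of Figure 2e.
    r0 follows the ~20 Ohm figure in the text; r1 and c are illustrative placeholders."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return r0 + r1 / (1.0 + 1j * w * r1 * c)

freqs = np.logspace(4, 7, 200)                  # 10 kHz - 10 MHz, the spectroscopy window used
z = z_hrs(freqs)
nyquist = np.column_stack([z.real, -z.imag])    # points tracing a semicircle of diameter ~r1
```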
In contrast, the CF is fully shorting TE and BE in LRS, which would minimise the capacitive effect, while introduce the dominant inductive effect from a number of winding filaments as shown in Figure 3d. This is also verified in Figures S2c-S2e, which clearly demonstrated that the conductivity of LRS is correlated with the cell size. Frequency response of i-v characteristics So far, all non-zero-crossing behaviours have been observed by employing sweeping potentials 19 of static frequency. In order to prove our hypothesis that ReRAM cells concurrently support memristive, memcapacitive and meminductive components, we To ensure that we do not encounter any erroneous parasitic effects while evaluating the characteristics of single devices, we optimised our instrumentation setup and limited the measuring frequency spectrum up to 1 MHz. This setup was also benchmarked while measuring known SMD (Surface-mount device) components ( Figure S4) that up to 1 MHz showed no significant parasitic effects. It should also be noted that to preserve devices from any hard-breakdown as well as minimising the effect of any switching thresholds asymmetry, at any single frequency point, only one sinusoidal period was applied from a Tektronix arbitrary function generator (AFG-3102). Figure 4 depicts measured i-v characteristics of our 10 3 10 mm 2 prototypes as the stimulus frequency ranged from 1 Hz to 1 MHz. Specifically, in the inset of Figure 4a, a small offset (40 mV) is observed at f 5 1 Hz indicating the existence of a nanobattery 19 (V emf ), as the influence of parasitic effects could be neglected at such low frequency. By increasing the stimulus frequency to 100 Hz, we observe the shrinking of the area encountered within the right i-v lobe along with a slight increase in the crossing point offset, respectively captured in Figures 4g and 4h. The influence of a memcapacitive response starts to appear in Figure 4c, as the stimulus frequency is further increased at f 5 20 KHz. The crossing i-v point is now clearly displaced to the first quadrant, while the hysteresis loop opens up from a lissajous towards a circular form, a characteristic of memcapacitance ( Figure S1g) that becomes even more apparent at f 5 100 KHz (Figure 4d). Interestingly, further increasing the stimulus frequency causes the crossing i-v point to drift towards the third quadrant, as predicted theoretically (Figure 1d) and shown here experimentally in Figures 4e and 4f. It is worthy to point out that there is a step in the i-v curves in Figures 4d and 4e. We argue this behaviour being similar to the steps observed on the i-v characteristics of some classes of diodes that include high internal field regions, and are subject to local breakdown, such as tunnel diodes and silicon controlled rectifiers. In the vicinity of the breakdown such devices can exhibit a local negative differential resistance without violating passivity. When the i-v characteristics of this class of devices are traced in conductance mode (source v from a small source impedance, measure i) or in impedance mode (source i from a small source admittance, measure v) without any special precaution to overcompensate the device's negative resistance, steps could be observed as illustrated in Figures 4d and 4e. A plausible form of i-v characteristic that can give rise to the observed step has been annotated in green on Figure 4d. 
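By analogy with the capacitive case, a series inductive contribution can be added to the toy memristor to mimic the high-frequency regime. At the drive frequency chosen below the state variable barely moves, so the inductive phase lag dominates the response; per Figure 1c/d of the text, this is the regime in which the crossing point is displaced toward the third quadrant. The inductance, frequency, and all other values are illustrative assumptions only.

```python
import numpy as np

# Toy memristive element in series with a fixed inductance (illustrative values only).
Ron, Roff, k, L = 100.0, 16e3, 1e6, 20e-3
f, dt = 1e5, 1e-8
t = np.arange(0, 2 / f, dt)
v = 1.0 * np.sin(2 * np.pi * f * t)

w, i = 0.1, 0.0
i_trace = np.zeros_like(t)
for n, vn in enumerate(v):
    R = Ron * w + Roff * (1.0 - w)
    i += dt * (vn - R * i) / L                 # series RL dynamics: L di/dt = v - R(w) i
    w = np.clip(w + k * i * dt, 0.0, 1.0)
    i_trace[n] = i

# The current now lags the voltage, i.e. the reactance is positive (inductive); the
# memristive hysteresis itself shrinks at this frequency, as reported in Figure 4.
```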
Overall, the exhibited hysteresis of our prototypes is large for low frequencies where the memristive component is dominant, and as frequency increases it reduces (as anticipated by the memristor theory), increases (as memcapacitive effects come into play) and then again reduces; as depicted in Figure 4i. At the same time, the crossing i-v point origins at 0 V denotes an almost purely passive component at low frequencies, with capacitive (inductive) components adding a positive (negative) offset at frequencies where the corresponding effects are more dominant as depicted in Figure 4g. Our experimental results were closely fitted with an equivalent circuit model (inset of Figure 4g) comprising a parallel combination of a memristor and memcapacitor that are serially connected to a meminductor and a nanobattery (detailed simulated methods and parameters can be found in Supplementary Materials). The areas bordered by the pinched hysteresis i-v loops were calculated as reported previously 33,34 . Figure 4h depicts the changes of left and right lobes' areas. It is clear that at frequencies below 100 Hz, the memristive effect was dominant, thus area of right lobes dropped; Considering the migration of crossing point to the first quadrant, the area of left lobe kept almost the same. Then at frequencies between 100 Hz to 200 KHz, Figure 4i depicts the sum and normalised difference of the lobes' areas at distinct frequencies. It can be observed that at frequencies below 10 KHz, the right lobe hysteresis outweighs the left one. In contrast, an opposite ratio polarity could be obtained when memcapacitive effect is dominant (10 KHz to 700 KHz), while meminductive would invert the polarity again at even higher frequencies (above 1 MHz). Summary In this work we presented experimental evidence that TiO 2 -based MIM devices, commonly known as memristors, exhibit concurrently memristance, memcapacitance and meminductance. We showed that these components are concurrently modulated under voltage biasing and we have identified that meminductance is more apparent for devices of large active areas. We also demonstrated that the frequency response of the devices' pinched-hysteresis i-v does not follow the classical signature of memristors, and it is a manifestation of all three memory components. We believe that these features can be particularly useful in developing adaptive circuits that operate in radio-frequencies 35 , while they open up the possibility of establishing self-resonating nanoscale components that could find applications in cellular neural networks and neuromorphic implementations 36 . Methods summary Fabrication of TiO 2 based active cells. In this process flow, we used thermal oxidation to grow 200 nm thick SiO 2 on 40 silicon wafer and we employed optical lithography method to pattern the Bottom Electrodes (BE), Top Electrodes (TE) and the TiO 2 active material. E-beam evaporation was employed to deposit 5 nm adhesive layer and 30 nm Pt layer as BE and 30 nm thick Pt as TE, followed by lift-off process to define the patterns. For TiO 2 , sputtering at 300 Watts and 2 min wet etching, in 1550/HF: H 2 O solution, were performed. The design allows having Pt/TiO 21x /TiO 2 / Pt ReRAM structures in cross-bars and standalone configurations. Electrical measurements. Electrical measurements for active cells on wafer were performed utilising a low-noise Keithley 4200 semiconductor characteristics system combined with a probe station (Wentworth AVT 702). 
The i-v characteristics were first obtained via sweeping voltages, following distinct sequences for bipolar and unipolar resistive switching. Impedance spectroscopy was then performed by applying a small 10 mV AC signal (0 V DC bias) with frequencies swept from 10 kHz to 10 MHz. To measure the changing trends of impedance, a series of programming pulses (5 V for SET and -5 V for RESET) was applied across the active cells, each followed by a 0.5 V pulse to read the resistance value. In all measurements, pulse widths were set to 10 ms. All reactance measurements were implemented via C-V tests employing 30 mV, 1 MHz AC signals (0.5 V DC bias). Specifically, the measuring option for devices with active areas of 2 × 2 μm2 and 5 × 5 μm2 was set to parallel capacitance and conductance (C p -G p ), while for devices with an active area of 10 × 10 μm2 the measuring option was set to complex impedance (Z-Theta; see Table S1).
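The smaller cells were read directly in the parallel C p -G p model, while the larger cells were recorded as complex impedance (Z-Theta). Converting a (|Z|, θ) reading into the equivalent parallel capacitance and conductance is a standard step; a minimal sketch is given below, with purely illustrative example numbers.

```python
import numpy as np

def z_theta_to_cp_gp(z_mag, theta_deg, freq_hz):
    """Convert a complex-impedance reading (|Z|, theta) into the equivalent
    parallel conductance G_p and parallel capacitance C_p at frequency f.

    For a parallel G-C model the admittance is Y = 1/Z = G_p + j*omega*C_p,
    so G_p is the real part of Y and C_p = Im(Y) / omega.
    """
    theta = np.deg2rad(theta_deg)
    z = z_mag * np.exp(1j * theta)        # complex impedance
    y = 1.0 / z                           # complex admittance
    omega = 2.0 * np.pi * freq_hz
    return y.real, y.imag / omega         # (G_p, C_p)

# Example: a purely illustrative reading at the 1 MHz test frequency
g_p, c_p = z_theta_to_cp_gp(z_mag=5.0e4, theta_deg=-60.0, freq_hz=1.0e6)
```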
4,762.8
2014-03-31T00:00:00.000
[ "Physics" ]
Evaluation of the Anti-inflammatory, Antimicrobial, Antioxidant, and Cytotoxic Effects of Chitosan Thiocolchicoside-Lauric Acid Nanogel

Aim: The present study explored the anti-inflammatory, antimicrobial, antioxidant, and cytotoxic effects of a combination of chitosan thiocolchicoside and lauric acid (CTLA) nanogel. Materials and methods: A nanogel formulation of thiocolchicoside and lauric acid was developed and tested for potential applications. The antimicrobial activity was assessed using the well diffusion method, while the antioxidant activity was evaluated using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging assay and the hydrogen peroxide (H2O2) antioxidant assay. The anti-inflammatory activity was determined through the egg albumin denaturation method, the bovine serum albumin denaturation method, and the membrane stabilization assay. A brine shrimp lethality assay was used to study the cytotoxic effect of the nanogel. Results: We identified significant positive outcomes for the CTLA nanogel. The results showed a percentage of inhibition of 81% at 50 μg/mL, demonstrating the nanogel's significant anti-inflammatory activity through inhibition of bovine serum albumin denaturation. The anti-inflammatory properties of the nanogel were comparable to the standard diclofenac sodium at all tested concentrations. The egg albumin denaturation assay results revealed a percentage inhibition of 76% at 50 μg/mL. In the membrane stabilization assay, a percentage inhibition of 86% was obtained at a concentration of 50 μg/mL against 89% for the standard drug. The nanogel exhibited a zone of inhibition of 20 mm against Streptococcus mutans and of 22 mm against Staphylococcus aureus with a dilution of 100 µg/mL of CTLA nanogel. The antioxidant activity was studied using the DPPH method: at 50 μg/mL the nanogel showed 89% inhibition, which was similar to the standard. The inhibitory activity of CTLA nanogel at 50 μg/mL was 81.6% in the hydroxyl free radical scavenging assay, which was comparable to the standard drug. At a 5 μg/mL concentration of CTLA nanogel, approximately 90% of the nauplii remained alive after 48 hours. Conclusion: The CTLA nanogel showed excellent anti-inflammatory and antioxidant properties, suggesting its potential for managing inflammatory conditions and oxidative stress-related disorders.

Introduction

Nanogels are hybrid materials that blend the properties of nanomaterials with hydrogels [1]. Hydrogels, known for their high water content, offer tunability of their physical and chemical structures, and excellent mechanical and nontoxic properties [2]. In nanomedicine, nanogel-based formulations have shown great potential in various applications, including imaging, anticancer therapy, and drug delivery [3]. Chitosan, a deacetylated derivative of chitin, is a biopolymer that is a significant component of the cell wall of fungi, the exoskeleton of insects, and the shells of crustaceans. It is a linear copolymer containing β-(1→4)-2-amino-D-glucose units and β-(1→4)-2-acetamido-D-glucose units, known to have excellent properties such as biodegradability and biocompatibility with minimal immune response even after implantation or application, owing to its nontoxic nature [4].
Thiocolchicoside is a semi-synthetic derivative of colchicoside, which is obtained from plants such as Gloriosa superba and Colchicum autumnale [5]. It is an anti-inflammatory drug used as a muscle relaxant in the treatment of musculoskeletal disorders [6]. The muscle relaxant activity of thiocolchicoside is attributed to its inhibition of the glycine receptor in the brain stem and spinal cord [7]. In previous studies, thiocolchicoside has been formulated in various semi-solid forms such as ointments, gels, and creams, with ointments showing higher drug release compared to other formulations [8]. Lauric acid, on the other hand, is a saturated fatty acid commonly used in nutritional and cosmetic applications. It is a 12-carbon-chain fatty acid found in certain plants, particularly coconut oil and palm kernel oil [9]. Lauric acid has been recognized for its broad spectrum of antimicrobial activity against bacteria and viruses. Although the exact mechanism of its action against bacteria is not fully understood, it is believed to disrupt the cell membrane and is thus helpful in protecting against microbial infection and in controlling the human microbiota balance in our body [10,11]. Due to its biological activity and strong antiviral properties, lauric acid is considered one of the most active components in coconut oil [12]. Inflammaging is a term used to denote a systemically developing, low-grade chronic inflammation in the absence of infection in elderly people. As most age-related disorders are linked with inflammation, inflammaging is a significant risk factor for morbidity and mortality in old age [13]. Developing new anti-inflammatory drugs is always of interest to limit chronic inflammatory processes in the human body, which include arthritis, colitis, dermatitis, neurodegenerative disorders, and malignancy. The anti-inflammatory drugs used to relieve inflammation are usually non-steroidal anti-inflammatory drugs (NSAIDs) or corticosteroids, which cause many adverse drug reactions such as gastric irritation and liver and renal disorders on long-term usage [14]. Mitochondrial metabolism plays a powerful role in the induction of carcinogenesis by increasing the levels of reactive oxygen species (ROS) production, from oncogene transformation to cancer progression. An increase in ROS production results in structural damage to cellular components, which leads to cancer, inflammation, and different disorders. There is a recent trend to study natural compounds with significant antioxidant activity that could affect the redox reactions taking place in a cell and prevent and control free radical-mediated reactions [4]. The objective of the present study was to assess the antimicrobial activity, antioxidant activity, anti-inflammatory activity, and cytotoxic effects of a chitosan nanogel formulation containing thiocolchicoside and lauric acid (CTLA nanogel). By combining these two active components, the researchers aimed to explore the potential synergistic effects and broaden the biomedical applications of the nanogel.

Materials And Methods

The preparation of the chitosan thiocolchicoside and lauric acid (CTLA) nanogel was reported by Ameena et al. [15]. This method allowed for the integration of the lauric acid and thiocolchicoside into a nanogel formulation. The chitosan served as a medium to facilitate the incorporation of the active components and stabilize the nanogel structure.
Bovine Serum Albumin (BSA) Denaturation Assay

The anti-inflammatory activity of the CTLA nanogel was evaluated as described by Das et al. [16]. To assess the anti-inflammatory activity, 0.05 mL of the CTLA nanogel at various concentrations (10 µg/mL, 20 µg/mL, 30 µg/mL, 40 µg/mL, and 50 µg/mL) was added to 0.45 mL of a 1% aqueous solution of bovine serum albumin. The pH of the solution was corrected to 6.3 using a small amount of 1 N hydrochloric acid. These samples were then incubated at room temperature for 20 minutes, followed by heating at 55°C for 30 minutes in a water bath. After the heating process, the samples were allowed to cool down, and the absorbance was measured using a spectrophotometer at 660 nm. Diclofenac sodium was the standard drug for comparison. Dimethyl sulfoxide (DMSO) was used as a control in this experiment. The percentage of protein denaturation was determined using the following equation: % Inhibition = ((Absorbance of control - Absorbance of sample) / Absorbance of control) x 100.

Egg Albumin Denaturation Assay

The anti-inflammatory activity of the CTLA nanogel was also determined using egg albumin. The samples used for this assay included 0.2 mL of fresh egg albumin, 2.8 mL of phosphate-buffered saline (PBS) at pH 6.4, and 0.6 mL of the nanogel at various concentrations dissolved in 0.2% DMSO. The concentrations of the nanogel in the total reaction solution ranged from 10 to 50 µg/mL. The samples were incubated for 10 minutes at 37°C and then heated at 70°C in a water bath for an additional 20 minutes to induce denaturation of the egg albumin. After cooling the mixture, the absorbance was measured at 660 nm. Negative controls consisting of 0.2 mL of fresh egg albumin, 0.6 mL of 0.2% DMSO, and 2.8 mL of PBS were included in the experiment. Diclofenac sodium was used as a positive control for the study. The percentage of protein denaturation inhibition, which indicates the anti-inflammatory activity of the compound, was calculated by the following equation: % Inhibition = (As/Ac - 1) × 100 (As = absorbance of sample, Ac = absorbance of control).

Membrane Stabilization Assay

The in vitro membrane stabilization assay is a commonly employed technique for evaluating the membrane-stabilizing properties of natural and synthetic drugs. This assay measures the ability of a drug to stabilize the cell membrane by preventing its disruption and the subsequent release of intracellular contents. The materials included human red blood cells (RBCs), Tris-hydrochloride (Tris-HCl) buffer (50 mM, pH 7.4), and PBS. Different concentrations of CTLA nanogel (10-50 µg/mL) were prepared. Saline solution and distilled water were used as controls in this study. Fresh human blood was collected in a sterile tube containing anticoagulants. The blood was centrifuged for 10 minutes at 1000 g at room temperature to separate the RBCs from the other blood components. The supernatant was slowly removed, and the RBCs left behind were washed three times with PBS. The RBCs were then resuspended in Tris-HCl buffer to obtain a 10% (v/v) RBC suspension. 1 mL of the RBC suspension was pipetted into each centrifuge tube, and different concentrations of CTLA nanogel were added to each tube, gently mixed, and incubated for 30 minutes at 37°C. The centrifuge tubes were then centrifuged at 1000 g for 10 minutes at room temperature to pellet the RBCs. The absorbance of the supernatant obtained was measured at 540 nm using an ultraviolet spectrophotometer.
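Both denaturation assays above reduce to a percent-inhibition calculation on the measured absorbances. A minimal sketch of the two stated formulas follows; the absorbance values in the example calls are illustrative placeholders, not measurements from the study.

```python
def bsa_inhibition(a_control, a_sample):
    """Percent inhibition of BSA denaturation:
    ((A_control - A_sample) / A_control) * 100."""
    return (a_control - a_sample) / a_control * 100.0

def egg_albumin_inhibition(a_sample, a_control):
    """Percent inhibition for the egg albumin assay, as stated in the text:
    (A_sample / A_control - 1) * 100."""
    return (a_sample / a_control - 1.0) * 100.0

# Illustrative absorbance readings (not measured values from the study)
print(bsa_inhibition(a_control=0.80, a_sample=0.15))        # ~81 % inhibition
print(egg_albumin_inhibition(a_sample=0.70, a_control=0.40))
```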
The percentage inhibition of hemolysis was calculated using the following formula: % Inhibition = ((OD control - OD sample) / OD control) x 100, where OD control is the absorbance of the RBC suspension without the test compound and OD sample is the absorbance of the RBC suspension with the test compound.

Anti-microbial activity

The antibacterial activity of the CTLA nanogel was investigated against bacterial strains including Streptococcus mutans, Staphylococcus aureus, Pseudomonas aeruginosa, and Lactobacillus, as well as Candida albicans. For this experiment, a 24-hour freshly prepared culture of the bacteria was used. To determine the zone of inhibition, Mueller-Hinton agar (MHA) was prepared and sterilized by autoclaving at 121°C for 30 minutes. The sterilized MHA was poured into sterile Petri plates and allowed to solidify. Wells were then created using a well cutter, and the fresh cultures of Streptococcus mutans, Staphylococcus aureus, Pseudomonas aeruginosa, Lactobacillus and Candida albicans were evenly spread on the Petri plates. Different concentrations of the CTLA nanogel (25 µg, 50 µg, 100 µg) were loaded into separate wells on the agar plate in triplicate. Additionally, the antibiotic Amoxyrite was used as a standard for bacteria, and fluconazole was used as a standard for Candida; these were placed in the fourth well for comparison. The plates were incubated at 37°C for 24 hours, and for 48 hours for fungal cultures. The antimicrobial activity of the compound was assessed by measuring the diameter of the zone of inhibition around the wells. The zone of inhibition was measured using a ruler and recorded in millimeters (mm). This measurement indicated the antibacterial effectiveness of the nanogel.

Anti-oxidant activity

2,2-Diphenyl-1-Picryl Hydrazyl (DPPH) Free Radical Scavenging Assay

The antioxidant activity of the CTLA nanogel was analyzed using the DPPH assay. Various concentrations (10 μg/mL, 20 μg/mL, 30 μg/mL, 40 μg/mL, and 50 μg/mL) of the nanogel were mixed with 1 mL of DPPH (0.1 mM) in methanol and 450 μL of 50 mM Tris-HCl buffer at pH 7.4. The mixture was then incubated in a dark room for 30 minutes. The reduction in the quantity of the DPPH free radical was assessed by measuring the absorbance at 517 nm. This measurement indicated the antioxidant capacity of the nanogel. Ascorbic acid was used as the standard in this assay. By evaluating the absorbance at 517 nm, this assay determined the antioxidant activity of the CTLA nanogel. The percentage of inhibition was determined from the following equation: % inhibition of sample = ((Absorbance of control - Absorbance of sample) / Absorbance of control) x 100.
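Returning to the agar well diffusion test above: because the wells were loaded in triplicate, each organism-concentration pair would normally be summarised as a mean zone of inhibition with its spread. A small illustrative sketch is shown below; the millimetre readings are hypothetical, since the study reports single representative diameters.

```python
from statistics import mean, stdev

# Hypothetical triplicate readings (mm) for one organism at one concentration
readings_mm = [20, 21, 19]
print(f"zone of inhibition: {mean(readings_mm):.1f} ± {stdev(readings_mm):.1f} mm")
```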
Hydroxyl Free Radical Scavenging Assay

The hydroxyl free radical scavenging assay was conducted using freshly prepared solutions. In a reaction mixture of 1.0 mL, the following components were added: 100 µL of a 28 mM solution of 2-deoxy-2-ribose dissolved in phosphate buffer at pH 7.4; 500 µL of a solution containing different concentrations of the CTLA nanogel (ranging from 10 to 50 µg); 200 µL of a 200 µM ferric chloride (FeCl3) and 1.04 mM ethylenediaminetetraacetic acid (EDTA) mixture in a 1:1 volume ratio; 100 µL of H2O2 (1.0 mM); and 100 µL of ascorbic acid (1.0 mM). The reaction mixture was incubated for 1 hour at 37°C. The extent of deoxyribose degradation after the incubation period was determined by the thiobarbituric acid (TBA) reaction. The mixture was further incubated for 1 hour at 37°C, and the optical density at 532 nm was measured against a blank solution. For comparison, vitamin E served as the positive control, and ascorbic acid was used as the standard. By measuring the optical density at 532 nm, the hydroxyl radical scavenging activity of the CTLA nanogel was assessed.

Cytotoxic effect

Brine Shrimp Lethality Assay

To prepare the solution, 2 grams of iodine-free salt was weighed and dissolved in 200 mL of distilled water. Six enzyme-linked immunosorbent assay (ELISA) well plates were used for the experiment, and each well was filled with 10-12 mL of the prepared saline water. Subsequently, 10 nauplii were added slowly to each well, with different concentrations of the CTLA nanogel (5 µg/mL, 10 µg/mL, 20 µg/mL, 40 µg/mL, and 80 µg/mL) added to the respective wells. The sixth well served as a control and did not receive the nanogel. The plates were then incubated at room temperature for 24 hours, allowing the desired effects of the nanogel on the nauplii to take place. After 24 hours, the ELISA plates were carefully observed, the number of live nauplii was counted, and the lethality was calculated using the following formula: (Number of dead nauplii / (Number of dead nauplii + Number of live nauplii)) × 100.
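The lethality formula above gives the percentage of dead nauplii; the percentage of live nauplii reported in the Results is simply its complement. A minimal sketch with hypothetical counts for a single well of ten nauplii:

```python
def mortality_percent(dead, alive):
    """Brine shrimp lethality, as given in the text:
    dead / (dead + alive) * 100."""
    return dead / (dead + alive) * 100.0

# Hypothetical counts for one well of 10 nauplii after incubation
pct_dead = mortality_percent(dead=1, alive=9)
pct_live = 100.0 - pct_dead   # ~90 % survival, the quantity reported in the Results
```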
Results

The anti-inflammatory activity of CTLA nanogel was evaluated using the bovine serum albumin denaturation assay. The nanogel was tested at different concentrations, and its inhibitory effects were compared to standard values. The results showed a percentage of inhibition of 47% at a concentration of 10 μg/mL, 53% at 20 μg/mL, 69% at 30 μg/mL, 72% at 40 μg/mL, and 81% at 50 μg/mL. These values indicate that the nanogel exhibits significant anti-inflammatory activity by inhibiting bovine serum albumin denaturation. Moreover, the anti-inflammatory properties of the nanogel were comparable to the standard diclofenac sodium at all tested concentrations (Table 1). The anti-inflammatory activity of CTLA nanogel was also assessed using the egg albumin denaturation assay. The nanogel was tested at various concentrations and compared to standard values. The results revealed a percentage of inhibition of 53% at a concentration of 10 μg/mL, 58% at 20 μg/mL, 61% at 30 μg/mL, 69% at 40 μg/mL, and 76% at 50 μg/mL. These findings demonstrate that the nanogel exhibits significant anti-inflammatory activity in the egg albumin denaturation assay. Furthermore, the anti-inflammatory properties of the nanogel were found to be comparable to the standard diclofenac sodium at all tested concentrations (10 μg/mL, 20 μg/mL, 30 μg/mL, 40 μg/mL, and 50 μg/mL). Therefore, the CTLA nanogel shows promising anti-inflammatory activity in the egg albumin denaturation assay (Table 2). The anti-inflammatory activity of CTLA nanogel was further assessed using the human red blood cell membrane stabilization assay. The nanogel was tested at different concentrations and compared to standard values. The results revealed a percentage of inhibition of 56% at a concentration of 10 μg/mL, 67% at 20 μg/mL, 75% at 30 μg/mL, 80% at 40 μg/mL, and 86% at 50 μg/mL, against 58%, 70%, 77%, 82%, and 89% for the standard, diclofenac sodium, at the same concentrations. These findings show a significant anti-inflammatory activity of CTLA nanogel in the membrane stabilization assay (Table 3). The antimicrobial activity of CTLA nanogel was evaluated using the agar well diffusion method. The nanogel exhibited a zone of inhibition of 20 mm at a dilution of 100 µg/mL against Streptococcus mutans, while 13 mm was the zone of inhibition for the standard. The nanogel developed a zone of inhibition of 22 mm at a dilution of 100 µg/mL against Staphylococcus aureus, while 11 mm was the zone of inhibition for the standard. The zone of inhibition of CTLA nanogel at 100 µg/mL against Candida albicans was 16 mm, and that of fluconazole was 20 mm. The nanogel exhibited a zone of inhibition of 9 mm against both Lactobacillus species and Pseudomonas aeruginosa at three different concentrations, indicating similar levels of inhibition for both bacterial strains. In comparison, the fourth well, which contained the commercial antibiotic Amoxyrite, showed a higher zone of inhibition of 14 mm against Lactobacillus species and a lower zone of inhibition of 9 mm against Pseudomonas aeruginosa at the same diluted concentration (Figure 1). The antioxidant activity was determined using the DPPH method, comparing lower to higher concentrations of the CTLA nanogel: 10 μg/mL showed 68% inhibition, 20 μg/mL 76%, 30 μg/mL 79%, 40 μg/mL 85%, and 50 μg/mL 89%. The antioxidant activity of CTLA nanogel was broadly similar to that of the standard (Figure 2).
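For the membrane stabilization assay the text reports paired values for the nanogel and the diclofenac sodium standard at each concentration; collecting them in one structure makes the close agreement easy to see. The sketch below simply encodes the percentages reported above and prints the gap at each concentration.

```python
# Reported membrane-stabilisation results (percent inhibition) for the CTLA
# nanogel and the diclofenac sodium standard at each tested concentration.
membrane_stabilisation = {   # µg/mL: (nanogel %, diclofenac %)
    10: (56, 58),
    20: (67, 70),
    30: (75, 77),
    40: (80, 82),
    50: (86, 89),
}

for conc, (gel, std) in membrane_stabilisation.items():
    print(f"{conc:>3} µg/mL: nanogel {gel} %, standard {std} %, gap {std - gel} %")
```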
The antioxidant activity of CTLA nanogel was also evaluated using the H2O2 method. Different nanogel concentrations were tested, ranging from lower to higher concentrations. The results showed that the nanogel exhibited antioxidant activity, with increasing levels of inhibition as the concentration of the nanogel increased. At 10 μg/mL, the nanogel demonstrated 50.6% inhibition, while at 20 μg/mL it showed 54.7% inhibition. The inhibitory activity further increased to 65.12% at 30 μg/mL, 74.3% at 40 μg/mL, and 81.6% at 50 μg/mL. The CTLA nanogel displayed antioxidant activity similar to the standard (Figure 3).

FIGURE 3: Graph of H2O2 assay. Graphical representation of the activity of chitosan thiocolchicoside lauric acid (CTLA) nanogel estimated with the hydrogen peroxide (H2O2) assay.

The cytotoxic effects of CTLA nanogel were evaluated using the brine shrimp lethality assay. This assay is commonly employed to assess the cytotoxicity of substances by measuring their impact on the survival of brine shrimp nauplii. In the present study, a control group without any drug was maintained to establish a baseline for calculating the percentage of live nauplii. The results of the cytotoxicity assessment indicated that different concentrations of the nanogel exhibited varying effects on nauplii survival. At a concentration of 5 μg/mL of thiocolchicoside-lauric acid nanogel, approximately 90% of the nauplii remained alive. Similarly, at concentrations of 20 μg/mL and 40 μg/mL, the nanogel resulted in the preservation of approximately 70% of live nauplii. However, at a higher concentration of 80 μg/mL, only 60% of the nauplii survived (Figure 4).

FIGURE 4: Graph showing the cytotoxic effect of CTLA nanogel. Graphical representation of the cytotoxic effect of chitosan thiocolchicoside lauric acid (CTLA) nanogel estimated by the brine shrimp lethality assay.

Discussion

Anti-inflammatory properties of various plant extracts are available in the literature [17]. Overall, our study highlights the ability of the CTLA nanogel and the herbal formulation extract to effectively inhibit inflammation by preventing protein denaturation. The lysosomal membrane stabilization in activated neutrophils prevents the leakage of lysosomal contents such as proteases and other bactericidal enzymes, thereby playing a pivotal role in the anti-inflammatory response in the human body. Red blood cells and the lysosomal membranes of neutrophils have a similar structure, so stabilization of one membrane may limit the destruction of the other. This principle is the basis for the human red blood cell membrane stabilization assay, which uses hypotonicity-induced lysis to evaluate the anti-inflammatory activity of drugs [14]. In the present study, it was found that the CTLA nanogel had excellent anti-inflammatory action in the human red blood cell membrane stabilization assay, and the process of membrane stabilization could be directly related to its anti-inflammatory properties. Kyene et al., using the agar well diffusion method, studied the antimicrobial activity of zinc oxide nanoparticles against Staphylococcus aureus, Escherichia coli, Salmonella typhi, and Candida albicans [18]. Shanmugam et al. reported that silver nanoparticles with the addition of a curcumin-assisted chitosan nanocomposite had remarkable antibacterial activity against Gram-positive bacteria compared to Gram-negative bacteria in their study [19].
Furthermore, the individual component of the nanogel, lauric acid, demonstrated significant inhibitory effects against various clinical isolates. In previous research, the inhibitory effect varied based on the concentration of the acid, with the highest zone of inhibition observed against Staphylococcus aureus (10 mm), Streptococcus species (10 mm), and Lactobacillus species (10 mm). On the other hand, the lowest inhibitory effect was found against Escherichia coli (4 mm) at the same concentration of dilution [12]. Previous studies have also reported the antimicrobial effects of lauric acid, specifically against Gram-positive streptococci, but not as effective against Gram-negative bacilli such as Escherichia coli, Klebsiella oxytoca, Klebsiella pneumoniae, and Serratia marcescens [11]. Nagase et al. compared the bactericidal activity of virgin coconut oil and lauric acid using an antibacterial disk diffusion test and reported that the bacteria-inhibiting zone of 0.17 μmol/40 μL or 0.085 μmol/40 μL of lauric acid on plates inoculated with Streptococcus pyogenes, Streptococcus agalactiae, Streptococcus mutans, and Streptococcus sanguinis was greater than that of the paper disk containing 0.17 μmol/40 μL of virgin coconut oil [20]. They found that lauric acid has significant antimicrobial activity against Staphylococcus aureus and Streptococcus salivarius, and that organic virgin coconut oil had weaker antimicrobial activities against several Streptococcus species than lauric acid. Bhardwaj et al. studied the antibacterial activity of coconut oil by the agar well diffusion method using ciprofloxacin as a standard antibiotic [21]. They found that Streptococcus species were susceptible to coconut oil while Escherichia coli was not. They confirmed that the presence of lauric acid is responsible for the bactericidal action of coconut oil. These findings highlight the antimicrobial potential of lauric acid, particularly against certain bacterial strains, as demonstrated by various studies. Chitosan nanoparticles exhibited redox-regulatory activity due to inhibition of free radical production, decreasing serum free fatty acids and malondialdehyde, and increasing intracellular antioxidant enzymes in in vitro as well as in vivo studies [4]. The antioxidant activity of CTLA nanogel was evaluated using the DPPH method. Different nanogel concentrations were tested, ranging from lower to higher concentrations. The results indicated that the nanogel exhibited antioxidant activity, with varying degrees of inhibition at each concentration. Comparatively, the CTLA nanogel showed a slight similarity to the standard antioxidant. Virgin coconut oil reduced the DPPH free radical concentration by 50%, and the EC50 was found to be 5.07 ± 0.19 mg/L in a study by Ahmad et al. [22]. The hydrogen-donating capacity of virgin coconut oil makes it a good antioxidant, but the level of free-radical scavenging activity depends on the processing conditions of the virgin coconut oil [22]. The free radical scavenging activity (%) of green tea-loaded chitosan nanoparticles, green tea, and ascorbic acid reported by Piran et al. showed the high antioxidant activity of green tea and green tea-loaded chitosan nanoparticles [23]. The scavenging range of green tea and green tea-loaded chitosan nanoparticles was found to be 32.07-91.0 μg/mL and 46.75-96.1 μg/mL, respectively [23]. Wen et al.
observed the effects of chitosan nanoparticles on H2O2-induced oxidative damage in murine macrophage cells and found that the viability loss induced in cells by H2O2 was significantly reversed by chitosan nanoparticles [24]. They suppressed the production of malondialdehyde, restored superoxide dismutase and glutathione peroxidase, and increased total antioxidant capacity. Marina et al. reported that virgin coconut oil, due to its antioxidant properties, reduced the initial concentration of DPPH radicals by 50%, with an EC50 value of approximately 5.07 ± 0.19 mg/L [25]. Virgin coconut oil can donate hydrogen ions and thus can act as an antioxidant. The processing conditions of virgin coconut oil may influence the free radical scavenging activity of its phenolic compounds [25]. Thyme essential oil encapsulated in chitosan nanoparticles proved to have greater antioxidant activity than free thyme essential oil [26]. Similarly, green tea-loaded chitosan nanoparticles and green tea itself demonstrated significant scavenging activity, indicating high antioxidant capacity. The scavenging activity ranged from 32.07% to 91.034% for green tea and from 46.75% to 96.12% for green tea-loaded chitosan nanoparticles at various concentrations [27]. Overall, the CTLA nanogel exhibited promising antioxidant properties, as demonstrated by its ability to scavenge free radicals and inhibit oxidative processes. Overall, the cytotoxic activity results revealed a relatively low toxicity rate of the CTLA nanogel, which aligns with the findings of the current study. This indicates that the nanogel formulation exhibited minimal cytotoxic effects on brine shrimp nauplii, suggesting its potential safety for future applications.

Limitations

We performed various in vitro analyses in the present study to assess the activity of the CTLA nanogel. Testing with Fourier-transform infrared spectroscopy (FTIR) or nuclear magnetic resonance (NMR) would help to decipher the active ingredients of our nanogel formulation. Further in vivo studies, including animal studies and clinical trials, will help in a better understanding of its effects.

Conclusions

In conclusion, combining thiocolchicoside and lauric acid as a chitosan nanogel showcases a versatile therapeutic agent with multifaceted properties. Its antimicrobial activity, eco-friendly nature, and biocompatibility highlight its promising role in combating infections. Moreover, its anti-inflammatory and antioxidant properties suggest its potential for managing inflammatory conditions and oxidative stress-related disorders. These characteristics position thiocolchicoside-lauric acid as a promising candidate for future biomedical applications, paving the way for further exploration and development in the field of medicine.

TABLE 2: Anti-inflammatory activity with egg albumin assay. Table showing the anti-inflammatory activity of chitosan thiocolchicoside lauric acid (CTLA) nanogel with the egg albumin denaturation assay.

TABLE 3: Red blood cell membrane stabilization assay. Table showing the human red blood cell membrane stabilization assay of chitosan thiocolchicoside lauric acid (CTLA) nanogel.
5,780.6
2023-09-01T00:00:00.000
[ "Materials Science", "Medicine" ]
REGIONAL AIRPORTS' POTENTIAL AS A DRIVING FORCE FOR ECONOMIC AND ENTREPRENEURSHIP DEVELOPMENT – CASE STUDY FROM BALTIC SEA REGION

It is generally acknowledged that accessibility is among the major factors of economic attractiveness of metropolitan areas, other territories and peripheral regions. The aviation industry in general, and airports' activities in particular, contribute considerably to the improvement of regional accessibility. For some remote regions, airports are the only gateway to bigger hubs. However, due to increasing competition in the aviation sector, airports, and especially regional airports in Europe, face structural and operational challenges nowadays. According to the EU Commission report "The Future of the Transport Industry", the number of loss-making small and regional airports in Europe is constantly growing. On the other hand, regional airports might play a crucial role in boosting economic development and entrepreneurship growth in their regions. In this context, it is very urgent for regional airports themselves, as well as for regional policy makers, business and other relevant stakeholders, to recognize the role of regional airports in the economic growth of their regions. As a response, this paper addresses the evaluation and assessment of the potential effects of regional airports on economic and entrepreneurship growth in their regions.

Introduction

The transport sector, in a direct and indirect sense, is one of the main driving forces of the European and global economies (EC, 2015a). The White Paper on Transport, the main policy document on transport policy in the EU, states: "Transport is fundamental to our economy and society. Mobility is vital for the internal market (…) enables economic growth and job creation". In the overall transport sector, air transport is considered to be one of the dominant modes for passenger traffic over long and middle distances in Europe and worldwide. Air transport also plays a vital role in the transport of air cargo with high value added or time-sensitive goods (EC, 2014). European airports are responsible for the employment of over a million people, working directly or indirectly in the aviation business, employed by airlines or by airports' operational environment, i.e. technical aircraft maintenance, logistics or catering services, retailing, traffic control, etc. The aviation business in total contributes more than 140 billion euro to the European GDP (EC, 2015a, 2015b). Air transport is also considered one of the main driving forces for the trade of innovative manufactures worldwide (IATA, 2015) and an enhancer of the economic potential of a region (Goetz, 1992; Alkaabi & Debbage, 2007; Debbage & Delk, 2001). However, the number of loss-making European airports (especially small and regional airports) is constantly growing (EC, 2014). In spite of growing losses, in order to secure accessibility to remote or peripheral regions, the regional or national public authorities keep on supporting the regional airports (Breidenbach, 2015).
European regional policy makers have invested millions of euros in airports' infrastructure development, yet today almost all regional airports depend on public subsidies. However, the new state aid rules for a competitive aviation industry, issued by the EU Commission in February 2014, mandate substantial cuts of public financial subsidies of any kind at the EU, national or regional level for regional airports (EC, 2014). The main objective here is not to "recycle" the regional airports, but to stimulate them to operate on a cost-efficient and profitable basis. On the other hand, a number of experts argue that it is rather a false approach to focus only on the monetary losses of regional airports without recognising their importance for regional development, and they emphasize the positive effects that the development of regional industry (especially the service and high-tech industries) derives from an airport's operation (cf. Sheard, 2014; Brueckner, 2003; Button & Taylor, 2000; Beifert, 2015; Rezk et al., 2015). Bråthen (2003) argued that a merely economic point of view on airport closures is not enough; he stressed the importance of regional development issues while conducting such an analysis. However, the provision and growth of transport services alone would not automatically lead to economic and regional development (Green, 2007). In fact, it is economic and regional development that might lead to growing demand for transportation services, and although a direct linkage between air transportation and economic growth does exist, the causation is not completely clear (Button et al., 2010). Halpern & Bråthen (2011) also pointed out that, on the one hand, airports might act as primary facilitators of economic and regional growth, providing accessibility and improving supply-side components; on the other hand, it might be economic development (here: the demand side) that determines the demand for and growth of transport services. Halpern & Bråthen further argue that the question of whether demand or supply has the stronger effect in this context remains open. In the framework of the EU funded project "Baltic.AirCargo.Net" (BACN, 2014), a number of regional airports in the Baltic Sea Region (BSR) have been analysed, aiming, among other things, at assessing the potential of regional airports and their role in the regional economic environment. The main findings of the BACN project demonstrated that, while playing an essential role in improving regional accessibility and being an indispensable part of the European aviation system, regional airports in particular face growing challenges; their relevance for regional development is now being questioned. This paper explores the potential of regional airports as economic and entrepreneurship driving forces for their regions. The paper is organised as follows: the theoretical framework showcases theoretical approaches to regional development, regional airports and their possible interdependencies. The following sections present the methodology, the main findings of the case studies investigated in the framework of the BACN project, and conceptual implications for regional airports that might improve their efficient participation in regional economic growth, thus making the airports and their regions more profitable and attractive to invest in.
Theory and concepts

The roots of the location and regional theories related to transportation may be traced back to the works of Weber (1929), who primarily focused on transportation costs, arguing that companies, while delivering raw materials and goods to the market, try to minimize transportation costs (cf. Dawkins, 2003). The works of Hoover (1937), discussing advantages from local agglomeration, such as large-scale economies, localization economies (i.e. businesses of the same industry collocate and cooperate in the same area) and urbanization economies (i.e. colocation of companies from different industries), gave further impulses to the development of regional cluster theories (Porter, 1985). The works of Greenhut (1956) and Isard (1956), although focusing mainly on mathematical optimisation modelling of industry given the costs of transporting raw materials and final goods, argued that business companies tend to locate near primary input sources, as the monetary weight of raw materials can be larger than the weight of the final goods. Dedicated research studies focusing on the relation between air transport services and regional development may be traced to Ndoh and Caves (1995), who investigated the influence of the supply side of air transport on demand, arguing that attractive accessibility may directly influence location decision-making and stimulate further economic activity. Percoco (2010) considers the role of infrastructure, and especially of airports, as one of the crucial factors of regional growth due to the increasing importance of air transport in connecting territories. The linkage between airports and regional development, as well as the impact of accessibility on regional economic development by means of air transport, has also been investigated in a number of other studies (Graham, 1995; Rietveld & Bruinsma, 1998; Shin and Timberlake, 2000; Horst, 2006; Hakfoort et al., 2001; Niemeier, 2001; Cherry, 2014). The scientific studies of Bogai and Wesling (2010), Baum et al. (2005), Hujer et al. (2008) and Brueckner (2003) note the considerable effects of airports on regional employment structure, the regional labour market and general regional economic growth. Boon et al. (2008) and Hart and McCann (2000) in their works also underline the economic effects and benefits of airports' operation for regional development. According to supply-side theory, the availability of adequate transport infrastructure and the provision of transport services will lead to economic development, and therefore airports may be seen as catalysts for regional economic development; on the other hand, according to demand-side theory, economic growth will increase the demand for transportation services (cf. Rodrigue & Notteboom, 2013). While the relationship and interdependence between airports and regional economic development is considered to be very strong, the availability of the supply side, or the provision of air transportation, would not automatically lead to regional economic development (Halpern & Bråthen, 2011). However, the causality discussion still remains open, i.e.
is it an airport that stimulates growth and economic development, or is it economic development in a region that boosts the demand for air transportation services (Ndoh and Caves, 1995; Green, 2007; Button et al., 2010). Mukkala and Tervo (2013) also stressed the existing causality of airports to regional development in peripheral regions, pointing out that in core regions this causality is less clear; however, they clearly underlined that air transportation is a very significant factor for boosting economic development in remote regions. Generally, researchers analysing airports' impact on regional development differentiate between the following impact factors: direct, indirect, induced and purchasing power effects of an airport's activities on regional economic growth (cf. Malina et al., 2007 and 2008), and so-called catalytic impacts relating to the wider role of the airport in regional development (cf. York Aviation, 2004). According to Malina et al., (1) direct impact relates to the operation of the airport itself, direct economic activities of firms located in the airport's operational environment, employment and investments; (2) indirect effects arise from the value chain of suppliers of goods and services related to the airport and the airport's region; (3) induced effects are caused by the consumption demand of direct and indirect airport employees; and (4) purchasing power effects arise due to an inflow or outflow of demand for goods by the passenger flows. Baum et al. (2004) explained direct, indirect, and induced effects of air transport on a region in economic metric terms, such as employment or production value. Beyond this, airports have a so-called (5) catalytic or multiplier impact by improving location attractiveness for businesses and tourism. Although a number of studies focus on the first four types of airports' impact factors, since they are relatively easy to measure (cf. Hakfoort et al., 2001), Halpern & Brathen (2011) argue that catalytic impacts are the most essential function of an airport for regional development (cf. York Aviation, 2004). The catalytic impact of airports and the air transport sector on regional development has been studied by several researchers (Robertson, 1995; Cezanne & Mayer, 2003; Cooper and Smith, 2005; Gloersen, 2005; Bandstein et al., 2009). The previous studies basically note that, because it is not easy to differentiate catalytic impacts of an airport from other factors and due to their complex character, the identification and measurement of catalytic effects is seen as rather problematic. Halpern & Brathen (2011) identify two main types of catalytic impact of airports on regional growth: (1) catalytic impacts that relate to regional economic competitiveness, resulting from airports' export activities, business operations and productivity; (2) catalytic impacts that relate to regional accessibility and social development, arising from airports' potential to improve regional accessibility. Braun et al. (2010) differentiate catalytic impacts of airports on a region between (1) consumer surplus; (2) environmental and social effects; and (3) economic spin-offs, whereas positive economic spin-offs may stimulate inbound investments, inbound leisure or business tourism and improved productivity, and negative spin-offs relate to outbound tourism and outbound investment. Wittmer et al.
(2009) also noted the importance of intangible economic catalytic effects of regional airports on economic growth, such as network capacity, skills and competences, structural and image effects, etc. Although the intangible impacts cannot be clearly measured, they also have a strategic economic and social effect on regional development (Wittmer et al., 2009). Technically, it is not an airport, but rather airlines or logistics service providers, that execute passenger or airfreight services. An airport provides the required hard (e.g. runways, terminals, warehouses, catering, etc.) and soft (e.g. security regulations, air cargo screening, sky-guiding, etc.) infrastructure. In this perspective an airport might also be seen as a logistics cluster (Juchelka & Brenienek, 2016). The concept of industrial clusters is well recognized in academic research (Marshall, 1890; Porter, 2000). "A Cluster is a proximate group of interconnected companies and associated institutions in a particular field, linked by commonalities and complementarities." (Porter, 2000, p. 254). The definition of logistics clusters, however, is still disputable due to differences between spatial and economic approaches (Elsner et al., 2005). Researchers generally identify global, national or regional logistics clusters (Rivera & Sheffi, 2012). Wang (2015) views logistics clusters as "geographically concentrated sets of logistics-related business activities, which have already become one of the most important regional development strategies." Along with the classical advantages of the logistics cluster, such as know-how and expertise sharing, service and cost benefits, etc., the logistics cluster participants might utilise or develop common approaches in terms of (a) providing systematic services and acquiring adequate benefits from other (regional, inter-regional, international) markets; (b) benefiting from a positive feedback circle through cluster cooperation; (c) enhancing core regional and firms' competences; and (d) acquiring sustainable driving forces for companies' competitive advantages (cf. Wang, 2015). Furthermore, regional airports shall not be seen as simple locations that provide air transport services, but rather as an essential subject of regional development activities and regional planning policies, whereas their operational success might be one of the most important influencing factors (cf.
Feldhoff, 2012; Beifert, 2013 and 2015). A number of researchers argued that a firm's (here: an airport's) impact on regional development also depends on strategic and operational success, which mainly derives from the following three elements: diversity, differentiation and innovation of the airport business (Prahalad & Hamel, 1990). In this context, the following theoretical concepts, which pinpoint diversification, differentiation and innovation potential internally (i.e. within the regional airport) and externally (in the market), are of special importance for regional airports: the Resource-Based View (RBV) (Wernerfelt, 1984; Barney, 1991) and Porter's (2000) competitive advantage and cluster theory, including the innovation management process. The resource-based view approach examines the competitive environment from an "inside-out" perspective, dealing with the internal environment of a company (Prahalad & Hamel, 1990). In order to increase their impact on regional development, regional airports need to optimise their performance strategy internally (organisation-based) and externally (market-driven), thus also enhancing their diversification and differentiation potential. As one of the bottlenecks for the economic prosperity of an airport is often not accessibility, but rather a deficit of qualified manpower or resources in the airport's operational environment (EC, 2014), airports shall not rely on a single or traditional revenue source, but rather on the airport's wider potential and performance, which depends on efficient utilisation of the available resources in the form of human or financial capital and intangible, valuable or unique assets (Barney, 1991). In the framework of the opportunity-based entrepreneurship theory, Drucker (1985) argues that entrepreneurs do not cause change, but use the opportunities that changes bring: "the entrepreneur always searches for change, responds to it, and exploits it as an opportunity". Stevenson (1990) further extends Drucker's opportunity-based model by including so-called resourcefulness, identifying the hub of entrepreneurial management as the "pursuit of opportunity without regard to resources currently controlled" (Stevenson, 1990, p. 2). The resource-based entrepreneurship theory states that access to resources is an essential factor for entrepreneurship growth (Alvarez & Busenitz, 2001). This theory underlines the important role of social, human and financial resources, arguing that access to resources stimulates the entrepreneurial ability to utilise discovered opportunities more efficiently (Davidson & Honing, 2003). Financial, social and human capital represent three classes of theories under the resource-based entrepreneurship theories. However, some regional airports often view new market opportunities as not promising, or underestimate their strategic value due to their disruptive innovation character in the aviation and airport business (Beifert, 2015). But if those innovative concepts (e.g. a Logistics Bonded Park or an Airport Industrial Zone) are already utilised or offered by the nearest regional competitors, it might often be inefficient just to reduplicate them (Downes & Nunes, 2013). Osterwalder & Pigneur (2010) developed a comprehensive business model that includes nine elements: customer segments, value propositions, channels, customer relationships, revenue sources, key resources, key activities, key partnerships and cost structure, which might be considered a baseline assessment tool for a successful business operation. In this context it might be recommended
that regional airports shall learn to identify these market opportunities and deploy them considering innovative business models and the better bargaining potential of entrepreneurs, e.g. by utilising so-called "air trucking services" (Beifert, 2013).

Methodology

Although a number of scientific research studies and empirical evidences are available nowadays relating to such subjects as logistics clusters (e.g. Rivera & Sheffi, 2012; Wang, 2015; Juchelka & Brenienek, 2016) and airports' operational environment and their impact on regional development (e.g. Malina et al., 2007; Braun et al., 2010; Halpern & Brathen, 2011), it might be stated that much less attention has been paid to regional airports so far: the earlier studies have focused mostly on airport hubs or metropolitan areas, whereas the perspective of regional airports and their potential impact on their regions in terms of economic and entrepreneurship development has been studied less thoroughly (Mukkala & Tervo, 2012). Halpern and Bråthen (2011) also noted that the catalytic impact of regional airports on regional development calls for deeper and wider research. Based on the above-mentioned theoretical concepts and earlier empirical evidence, it might be assumed that regional airports have a strong potential to enhance economic growth and entrepreneurship activities in their regions. In the framework of this study the following research questions are investigated: Question 1: What are the possible conceptual approaches to optimise or enhance regional airports' impact on economic growth and entrepreneurship development in their region? Question 2: What is the appropriate approach to evaluate the potential of regional airports to boost economic growth and entrepreneurship development in their regions? With regard to the above-presented concepts, it is argued here that regional airports, acting as a gravity force for logistics cluster-building in a region and as multi-layer business systems, may be analysed by applying various assessment criteria found in the theoretical framework discussed above. The assessment matrix presented below is based on the theoretical concepts of direct, indirect or induced effects of airports on regional economic development (Malina et al., 2008; Baum et al., 2004), catalytic impact (Bandstein et al., 2009; Halpern & Brathen, 2011), airports' clustering effect (Rivera & Sheffi, 2012; Wang, 2015) and airports' internal success factors, i.e. the RBV of Prahalad & Hamel (1990), as well as innovation and entrepreneurship growth of Osterwalder & Pigneur (2010). As mentioned before, because the causality discussion of the impact relationship between airports' operation and regional growth still remains open, the author identifies here two main groups of growth enhancers: (1) the perspective of regional development (here: demand-side), by which regional airports may be considered as an object of regional economic growth, where economic development in a region will boost the demand for air transportation services and stimulate an airport's growth; and (2) the regional airport's perspective (here: supply-side), whereby regional airports act as a subject of regional development, e.g. an airport's activities may stimulate economic and entrepreneurship growth in its region. As a response to the first research question, the following assessment matrix evaluating a regional airport's potential to influence economic and entrepreneurship growth might be suggested (Table 1).
Regional Development Perspective (demand-side enhancers for an airport's growth, i.e. the regional airport as an object of regional development): regional accessibility; regional economic competitiveness; regional business concentration; regional density of high-growth and innovative enterprises; regional level of entrepreneurial and innovation activities; regional density of population; regional labour market; regional prosperity and purchasing power; regional level of skills and competences; regional network capacity, governance and coordination level; linkages of airports with other public & private R&D; linkage of airports with innovation policies; regional marketing activities; regional awareness of the airport's capacities and value proposition.

Regional Airport's Perspective (supply-side enhancers for economic and entrepreneurship growth, i.e. the regional airport as a subject of regional development): Airport

The author of this paper argues that the above-presented assessment matrix, consolidating the theoretical frameworks of airports' impact factors (Malina), catalytic impacts (Halpern and Brathen), the RBV (Prahalad and Hamel), the innovation business canvas (Osterwalder) and airports' clustering effects (Wang), enables a comprehensive assessment of regional airports' potential effect on economic and entrepreneurship growth. The following assessment results and main findings presented in this paper are based on secondary and primary data, including qualitative expert interviews and surveys, collected and produced in the framework of the EU funded research project Baltic.AirCargo.Net (BACN, 2014), financed by the EU Programme "INTERREG IVB, Baltic Sea Region", ERDF Funds. The empirical data was collected from diverse sources of evidence during the project life 2011-2014, i.e. primary empirical data sources in the form of quantitative and qualitative observations of the involved project experts, researchers and relevant stakeholders. The evaluations, project documentation and observations were gathered from respective project activities such as workshops and conferences, as well as from field notes from project meetings. The following target groups and relevant stakeholders participated in the surveys and expert interviews: a) representatives from Transport Ministries and Airport Management; b) representatives from transport and logistics companies from the participating regions; c) representatives from the academic side; and d) experts from the aviation sector, air cargo security and air cargo freight sector. In terms of the presented case studies, 67 qualitative interviews were conducted and evaluated. The above-presented assessment matrix for regional airports' potential impact on regional development (cf. Table 2) has been chosen as the basis for a compliant evaluation analysis of the selected airport. In the framework of the BACN project, in total nine regional airports from eight BSR countries have been analysed and evaluated. Parchim Airport (Germany) has been selected here as a demonstration case, using an evidence-based method, in order to assess the airport's potential as a driver for economic and entrepreneurship development in the Mecklenburg-Vorpommern region (Germany). A case study approach generally draws essential attention to contemporary study issues by addressing the strategic question of "know-why?" (Yin, 2009).
Although the qualitative methods applied here may make it difficult to validate the presented events, they enable highlighting the particularity and complexity of single case evidence (Stake, 1995).

Main findings and implications

Parchim Airport is located in the county of Ludwigslust-Parchim; a Chinese company is the current owner of the airport. The airport has a direct connection to the highway A24, linking Hamburg and Berlin, and beyond to the German and European long-distance transport network. Rail connections are limited to regional traffic, since no direct access to long-distance train lines exists in Parchim. No regular flights are offered at Parchim Airport at the moment. The new owners have planned an internationalisation business model for Parchim Airport. The objective was to extend the site into an air cargo hub for transportation between Europe, Africa and Asia. Three flights a week were planned, with an option for extension up to 30 flights a day. Furthermore, a sufficient logistics infrastructure was intended. These investments should be made in cooperation with the Goodman Group. In 2007 two airfreight connections were established, one to Zhengzhou (CGO) in the province of Henan and another to Urumqi (URC), the capital of the Xinjiang Uyghur Autonomous Region of China. The targeted frequency of service on these flight connections has not been achieved so far. In 2010 only 8,000 tons of air cargo were handled, a volume that has to be considered completely insufficient to guarantee cost-effective operation. For this reason, more and more capacity utilization problems arise due to the fact that only a low activity rate can be achieved for the personnel and also for the technical equipment (aircraft tugs, fire-fighting vehicles, etc.)
needed for airport operations as well as for the offered logistics services. The current and expected volumes of air cargo are insufficient to generate the revenues necessary for maintaining operations at the airport, and relevant revenues from other business areas cannot compensate for the operational costs of Parchim Airport at the moment. Regional Development Perspective Evaluation of Parchim Airport In the evaluation of the regional development perspective, i.e. the demand-side enhancers for the airport's growth, the following assessment scale was applied to the given criteria: very well developed / provided: 5; adequately developed / provided: 4; average: 3; insufficiently developed / provided: 2; very poorly developed / provided: 1. In the framework of the BACN project, external experts (i.e. representatives of relevant regional business and policy structures, entrepreneurs and academia) participated in the analysis of Parchim Airport. The assessment of Parchim Airport's growth potential from the demand-side perspective has shown the following results. The applied weighting of the assessment criteria is based on the overall compilation of the experts' evaluations and the results of the expert interviews carried out in the framework of the BACN project. The BACN experts also pointed out that, although this weighting scale may be subjective, it needs to be integrated in one form or another in the evaluation process, since the assessment criteria are not of equal importance.
Assessment criterion - weight - score - weighted mean value:
Regional accessibility - 10% - 3 - 0.30
Regional economic competitiveness - 10% - 3 - 0.30
Regional business concentration - 10% - 2 - 0.20
Regional density of high-growth and innovative enterprises - 10% - 2 - 0.20
Regional level of entrepreneurial and innovation activities - 10% - 3 - 0.30
Regional density of population - 10% - 2 - 0.20
Regional labour market - 5% - 3 - 0.15
Regional prosperity and purchasing power - 6% - 4 - 0.24
Regional level of skills and competences - 10% - 4 - 0.40
Regional network capacity, governance and coordination level - 5% - 3 - 0.15
Linkages of airports with other public & private R&D - 2% - 2 - 0.04
Linkage of airports with regional innovation policies - 2% - 1 - 0.02
Regional marketing activities - 5% - 4 - 0.20
Regional awareness of airport's capacities and value proposition - 5% - 1 - 0.05
TOTAL - 100% - - 2.75
None of the given criteria was evaluated as "very well developed / provided". The overall mean value of the evaluation of the demand-side enhancers of the airport's operation is slightly below the average value. Only three criteria (regional prosperity and purchasing power, regional level of skills and competences, and regional marketing activities) were evaluated as "adequately developed / provided" in the region. Two criteria, linkage of airports with regional innovation policies and regional awareness of the airport's capacities and value proposition, received the lowest rating, "very poorly developed / provided". Although the criterion "regional network capacity, governance and coordination level" was in general evaluated as "average", a number of BACN experts saw considerable potential for improvement here. In fact, the BACN experts noted that there are clear gaps in networking and communication between the Chinese owner and relevant regional stakeholders such as the public administration of the County of Ludwigslust-Parchim (a co-owner of Parchim Airport), the German customs authorities, and the Ministry of Transport of Mecklenburg-Vorpommern.
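To make the aggregation behind the total mean value of 2.75 explicit, the following minimal sketch (in Python, not part of the original study) recomputes the weighted demand-side score from the published weights and expert scores; the function and variable names are illustrative only. The same routine can be reused for the supply-side criteria by swapping in the corresponding weights and scores.

```python
# Weighted aggregation of the demand-side assessment of Parchim Airport.
# Weights are fractions of 1; scores use the 1 (very poor) to 5 (very good) scale.
criteria = [
    ("Regional accessibility",                                      0.10, 3),
    ("Regional economic competitiveness",                           0.10, 3),
    ("Regional business concentration",                             0.10, 2),
    ("Regional density of high-growth and innovative enterprises",  0.10, 2),
    ("Regional level of entrepreneurial and innovation activities", 0.10, 3),
    ("Regional density of population",                              0.10, 2),
    ("Regional labour market",                                      0.05, 3),
    ("Regional prosperity and purchasing power",                    0.06, 4),
    ("Regional level of skills and competences",                    0.10, 4),
    ("Regional network capacity, governance and coordination",      0.05, 3),
    ("Linkages of airports with other public & private R&D",        0.02, 2),
    ("Linkage of airports with regional innovation policies",       0.02, 1),
    ("Regional marketing activities",                               0.05, 4),
    ("Regional awareness of airport's capacities and value proposition", 0.05, 1),
]

def total_mean_value(rows):
    """Return the weighted mean score (weights are assumed to sum to 1)."""
    assert abs(sum(w for _, w, _ in rows) - 1.0) < 1e-9
    return sum(w * s for _, w, s in rows)

for name, weight, score in criteria:
    print(f"{name:65s} {weight:4.0%} {score}  {weight * score:.2f}")
print(f"TOTAL {total_mean_value(criteria):.2f}")   # 2.75, slightly below the scale midpoint of 3
```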
Regional Airport's Perspective Evaluation In the evaluation of the regional airport's perspective, i.e. the supply-side enhancers for economic and entrepreneurship growth in the region, the same assessment scale was applied as for the demand-side enhancers (cf. Regional Development Perspective Evaluation of Parchim Airport). The assessment of Parchim Airport's potential impact on economic and entrepreneurial growth in the region, i.e. from the supply-side perspective, has shown the following results. None of the given criteria was evaluated as "very well developed / provided". Three criteria (the airport's infrastructure, including tangible and intangible resources; the airport's extension potential for future clustering activities; and the level of value proposition) were evaluated as "adequately developed / provided". In fact, the airport's infrastructure is one of the main tangible resources of Parchim Airport: a new tower was built in May 2015, the runway is 3,000 metres long, and the airport has appropriate passenger and cargo terminals, including the required security screening technologies. The experts mentioned the following attributes as the airport's distinctive intangible resources: low-cost airport operation; 24/7 operation, i.e. aircraft are allowed to land and depart 24 hours a day, 7 days a week, with no restrictions on night flight operations; the ability to accommodate and handle all types of aircraft (including the AN-124 and A380), with oversize cargo operations possible; and efficient customs services, which effectively turn Parchim Airport's cargo terminal into a dry port. The so-called "24/7" operation was mentioned as a valuable, non-substitutable intangible resource of Parchim Airport. By comparison, a number of official and civil discussions have been started about introducing a night-flight ban at state-owned German airports. Since Parchim Airport is in private hands, the owners and the airport management claim that the 24/7 operation will remain valid for Parchim Airport in the medium and long term and should not be called into question. Although an expansion of passenger traffic is under consideration, Parchim Airport is clearly positioning itself as an international gateway to China with a strong focus on air cargo. According to the current development plan, the airport will be upgraded to an ICAO 4F class airport. The experts evaluated the level of value proposition as "adequate", considering the air-cargo development model and the cost-performance ratio.
It has to be mentioned that a number of value-added services already exist or are being developed and implemented at Parchim Airport: Bond Logistics Park (partly realized) - a customs-free zone where cargo may be stored in the Customs Bond Warehouse without time limitation and is treated as being outside the boundary of the EU or Germany; taxes or duties are not applied if the cargo is intended for transit to other countries or bond zones, and air cargo transiting to other countries or bond zones via the Customs Bond Warehouse may be exempted from import procedures. Customs Bonded Industrial Park (in planning) - commodities could be assembled under various value-added models under customs bond; they could be considered "Assembled in Germany" or "Made in Germany", with the value-added share determined in the bond zone according to EU regulations and policies, and goods with EU "preferential origin" can enjoy reduced or zero tariffs in some countries (mutual agreements). Bond Trade & Procurement Centre (in planning) - commodities can be exhibited for trading or auction purposes; import procedures are required and tax and duty apply only when cargo needs to enter EU markets, while cargo transiting to other countries or bond zones via the Customs Bond Warehouse at Parchim International Airport is exempted from import procedures. However, in spite of the above-mentioned plans and already realized activities, the level of clustering activities at Parchim Airport has been ranked as "poor". The experts underline the deficit of attracting factors for potential investors, which may be connected to a lack of targeted or direct communication as well as to a rather weak regional economic structure and the absence of a critical mass of local industries and companies. It has further been noted that the level of operational effectiveness and the quality of the micro-economic business environment need to be improved. In spite of the appropriate infrastructure, such as the runway and the newly built tower, the institutional and infrastructural framework in which the airport operates has been considered "poor". Furthermore, it has been stated that the current competitive advantage of Parchim Airport is nowadays based mostly on a low-cost model rather than on unique or innovative products and services. Discussion Due to the growing social and political responsibility in terms of environmental issues, such aspects, i.e. the impact of an airport's operation on the environment, might also be discussed. A number of EU-funded projects have already started to examine airports as so-called environmental sensors. The possible implications for an airport in this direction might be the mitigation of environmental impacts and risks and the implementation of strategic plans to minimize noise and air pollution effects on the environment. At the moment, the relevant regulations and standards imposed by current EU and national legislation should motivate airports to pay more attention to, in other words to invest in, innovative and "green" technologies, e.g. technologies for the production of renewable energy on an airport's territory, or the creation of so-called ecological corridors that reconnect parts of the territory through linear environmental infrastructure. This entire legislative framework, acting as a demand-side enhancer, may stimulate new entrepreneurship and innovation business activities within the airport's immediate operational environment.
Furthermore, according to the guidelines of the European infrastructure development plan for 2014-2020, airport connectivity (especially in some remote regions) will be improved, aiming at better territorial synergy and networking between nearby airports as well as better integration of smaller and regional airports into a common organizational logistics network through extensive integration of airports with local transport systems, e.g. railways and local buses. All these initiatives may also give an impulse to an airport's growth, thus increasing complementarities and improving the value proposition, diversification and specialisation of airports. The BACN experts also underlined the importance for every region of being accessible. In our innovation-driven economies, regional accessibility is very important both for people (tourists and business travellers alike) and for companies. Regional airports can contribute positively to improving regional accessibility and thereby to economic and entrepreneurship growth in a region. The BACN experts mentioned that this might also be achieved through improved horizontal or networking cooperation between regional airports in the Baltic Sea Region. The distribution of the weights among the assessment criteria in the presented assessment model may be a subject of future disputes and discussions. The BACN experts underlined that although this weighting scale may be subjective, it needs to be integrated in one form or another in the evaluation process, since the assessment criteria are not of equal importance. It has further been noted by the BACN experts that this weighting scale is not universal for every regional airport; on the contrary, the evaluation approach and the correspondingly applied weighting scale must be individual, respecting regional peculiarities, economic perspectives and regional stakeholders' interests. Specifically for Parchim Airport, it should be mentioned that one of the basic prerequisites for the successful realization of the current strategies is the assumption that innovative Chinese companies and entrepreneurs will start to settle in the airport area and build up a critical mass of interconnected companies, thus creating a cluster. However, the creation of such exterritorial low-wage areas will hardly be enforceable at the political level today, as adopting such a procedure would call into question fundamental structures of German labour and social law. The second problem is the use of the established "Made in Germany" brand, whose image would be permanently damaged, causing a severe and long-lasting loss of reputation for German industry going far beyond the Parchim location if the quality standards are not properly met. Whether and to what extent the presented concepts can be realized remains open. A key issue is the question of for which companies Parchim International Airport can be an attractive alternative to other airport locations in the Northern and Central German region. The visions for the possible development of the site that have been propagated for a number of years are presented in the next paragraph.
LinkGlobal presents visions of the future of a Parchim Bond Business Park, with the aim of finding users for the airport and the local logistics facilities. Advantages of this location are the favourable geographic situation in Europe, the technical equipment for all aircraft types, the cost-effective structures with a 24-hour operating time, and the status of a customs-free zone. The Bond Logistics Park, the Bond Industrial Park and the Bond Trade & Procurement Centre essentially constitute the foundations of the Parchim Bond Business Park, serving as its economic core. The Parchim Business Park will be complemented by an Asia Centre and a Business Cooperation Zone, with whose help the attractiveness of the Parchim location is to be increased. Conclusions Although the role of European airports in socio-economic development can hardly be overestimated today, the number of loss-making small and regional airports in Europe is growing, and regional airports face structural and economic challenges. Since the causality discussion about the interdependent relationship between airports and regional economic growth still remains open, the author argues that the approach to evaluating an airport's potential influence on economic and entrepreneurial activities in a region should be balanced, i.e. an assessment of both perspectives may be necessary. Regional airports should not be viewed merely as transport infrastructure that provides air transport services, but rather as both a subject and an object of regional development activities and regional planning policies, whereby an airport's operational success may be one of the most important factors influencing regional economic and entrepreneurship growth. The main findings indicate that regional accessibility is very important nowadays, and that it is normally not an absent or inadequate airport infrastructure or a lack of extension capacity (e.g. for industrial bonded parks, warehouses, etc.) that makes an airport's impact on regional development insignificant, but rather soft factors that can be improved, such as the level of customer experience creation, the level of the value chain of suppliers of goods and services related to the airport and the airport's region, or the level of competitive sophistication (operational effectiveness and quality of the micro-economic business environment). Special attention should be paid to enhancing clustering activities, e.g. through structuring and combining regional logistics services, creating an efficient network of regional and inter-regional logistics service providers, and coordinating the airport's own service structure with relevant regional political and business stakeholders. The above-presented results demonstrate that regional airports should better recognize their important role in the economic and entrepreneurship growth of their regions, accept their own dependence on regional prosperity, and improve their operational activities through better coordination with the relevant stakeholders of their region. Table 1.
Regional airports' impact assessment and sustainable business model development. Parchim Airport is located in the county of Ludwigslust-Parchim (area: ca. 4,752 square kilometres; population density: ca. 45 inhabitants per square kilometre) near the regional town of Parchim in the State of Mecklenburg-Vorpommern, Germany. There are two main cities in the catchment area of Parchim Airport: Schwerin, ca. 44 km or 40 minutes by road, which is the capital of the Mecklenburg-Vorpommern region with about 91 thousand inhabitants; and Rostock, ca. 111 km or 1.5 hours by road. The distances and travel times by road from Parchim to the nearest hub airports are: Berlin Tegel, 172 km, ca. 2 hours; Hamburg, 131 km, ca. 1.5 hours. The distance from Parchim to the other operating regional airport, Rostock-Laage, is ca. 70 km or ca. 1 hour by road. The airport had been used for more than 70 years exclusively for military purposes. In 2007 the airport was sold to the private investor LinkGlobal International Logistics Group Ltd.
Table 2. Assessment of the demand-side enhancers for the airport's growth.
Table 3. Assessment of the airport's impact on regional development.
8,901.2
2016-03-28T00:00:00.000
[ "Economics" ]
When will the Covid-19 epidemic fade out? A discrete-time deterministic epidemic model is proposed with the aim of reproducing the behaviour observed in the incidence of real infectious diseases. For this purpose, we analyse a SIRS model under the framework of a small-world network formulation. Using this model, we make predictions about the peak of the Covid-19 epidemic in Italy. A Gaussian fit is also performed, to make a similar prediction. Introduction Humanity has always been afflicted by the onset of epidemics. Owing to the absence of vaccines, the slowness of connections between people and the isolation of the infectious from the susceptible were the only remedies to their devastating effects. Over the last two decades, there have been three major epidemics due to human-transmissible viruses, namely avian influenza, Ebola and SARS, but fortunately the advanced capabilities of the scientific community were able to contain their effects. A dangerous impact of infectious diseases on populations can arise from the emergence and spread of novel pathogens in a population, or from a sudden change in the epidemiology of an existing pathogen. Today, due to the absence of a vaccine and to a highly globalized society, the Covid-19 epidemic is frightening the world, raising a series of important questions. Among these, the most common is: when will the epidemic die down? During the spreading phase, this is a difficult question to answer: in addition to understanding the early transmission dynamics of the infection, control measures should also be accounted for, as they may significantly affect the trend of infection. Generally, the transmission dynamics of an infectious disease is described by modelling the population movements among epidemiological compartments, assuming random-mixing interactions [1]. When the population mixes at random, each individual has a small and equal chance of coming into contact with any other individual [2]. A more realistic approach describes spatially extended populations, such as elements in a network. Here, we propose an epidemiological model that describes the population as elements in a network, whose nodes represent individuals while links stand for interactions among them [3]. This model includes the fundamental parameters that characterize a disease, namely the infection probability, the incubation period of the pathogen, the social structure and so on. Using this model and the available statistical data, we attempt predictions on the Covid-19 epidemic trend. In this short report, we limit ourselves to analysing the epidemiological situation in Italy, and compare it to the observed data from France. Numerical calculations Each individual of a population is represented by a node of a small-world network [4]. The host population is partitioned into categories containing susceptible S, infectious I, and recovered individuals R (SIRS model). Each node is characterized by a time counter τ_i. A susceptible individual S (τ_i = 0) can pass into the infected state through contagion by infected ones. An infected individual I (τ_i = 1) passes to the refractory state R after an infection time τ_I, and a refractory individual returns to the susceptible state after an immunity duration τ_R [5]. The cycle is completed after these τ_I + τ_R time steps, when the node returns to the susceptible state. The contagion of susceptible elements occurs stochastically at a local level.
If a susceptible element i has k_i neighbours, of which k_inf are infected, then it will become infected with probability p_i = k_inf / k_i. Details are given in [3]. As can be seen from the plot of the reported Italian data, the trend of the infected is exponential, growing as exp(αt) with α = 0.13 days⁻¹. Most likely, this value will decrease in the coming days, given the very restrictive conditions on the movements of the population enforced by the Italian Government. In Figure 2 we present a realization of our model, in which the fraction of infected nodes n_inf(t) is shown as a function of time, calculated for N = 5000 nodes and p = 0.1. In the small-world model [4] the parameter p controls the transition between a regular lattice (p = 0) and the random graph (p = 1). The initial fraction of infected is ρ_0 = 0.001 (that is, 5 individuals). The parameter τ_I = 14 was chosen on the basis of the median incubation period for COVID-19, estimated at 11.5 days of infection [7]: a larger value was chosen because there can be a delay in symptom appearance resulting from the incubation period. The refractory period was chosen arbitrarily as τ_R = 150, for calculation purposes only. The network is assumed static, that is, no individuals can change their links. Self-sustained oscillations with a period of nearly 200 days characterize the time-series pattern. The system has phase-synchronized oscillations, as is typical of influenza viruses. As the number of nodes is finite, the infection could end at a fixed time, with no further evolution. Therefore, we added sources of infection to the model, i.e. a limited number of infective individuals remain in this state all the time [8]. We are interested in the initial growing part of the figure, that is, the part relating to the first 53 days of the epidemic. In Figure 3 the plot of infected individuals in the first 53 days of the disease is compared with the model time series of infected nodes n_inf(t). Also in this case, the trend of the infected nodes is exponential, growing as exp(α₁ t), but with α₁ = 0.11 days⁻¹. The curve of the infected in the model is less steep than that of the real data and has its first maximum at t = 62. Thus, according to the model, the peak of the disease will occur 62 days after the epidemic outbreak, that is, about the 2nd of April 2020. Due to the aforementioned actions taken by the Italian Government, it will probably happen before that day. The shape of each epidemic wave is similar to a Gaussian. Therefore, in Figure 4 a Gaussian curve fit with expected value 73 and variance 16 is added to the epidemic data. Unlike the previous case, the prediction is that a maximum of the order of 10^5 infected will be reached 73 days from the beginning of the epidemic, that is, about 13 April 2020. We have performed additional calculations allowing the network to be rewired.
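The following is a minimal, illustrative sketch of the discrete-time SIRS dynamics on a small-world network described above; it is not the authors' code. It assumes a Kuperman-Abramson-style contagion rule (a susceptible node with k_i neighbours, k_inf of them infected, becomes infected with probability k_inf/k_i), uses networkx to build the Watts-Strogatz graph, and fixes the mean degree at k = 4, a parameter not stated in the text.

```python
# Minimal discrete-time SIRS model on a Watts-Strogatz small-world network.
# A susceptible node with k_i neighbours, k_inf of them infected, becomes
# infected with probability k_inf / k_i; infection lasts TAU_I steps and
# immunity lasts TAU_R steps, after which the node is susceptible again.
import random
import networkx as nx

N, P_REWIRE = 5000, 0.1          # network size and small-world rewiring parameter
TAU_I, TAU_R = 14, 150           # infection and refractory durations (time steps)
RHO_0 = 0.001                    # initial fraction of infected nodes

def simulate(steps=400, seed=0):
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(N, k=4, p=P_REWIRE, seed=seed)
    # tau[i] = 0: susceptible; 1..TAU_I: infected; TAU_I+1..TAU_I+TAU_R: refractory
    tau = [0] * N
    for i in rng.sample(range(N), max(1, int(RHO_0 * N))):
        tau[i] = 1
    n_inf = []
    for _ in range(steps):
        new_tau = tau[:]
        for i in range(N):
            if tau[i] == 0:                       # susceptible: local stochastic contagion
                nbrs = list(g.neighbors(i))
                k_inf = sum(1 <= tau[j] <= TAU_I for j in nbrs)
                if nbrs and rng.random() < k_inf / len(nbrs):
                    new_tau[i] = 1
            else:                                 # infected or refractory: advance the counter
                new_tau[i] = (tau[i] + 1) % (TAU_I + TAU_R + 1)
        tau = new_tau
        n_inf.append(sum(1 <= t <= TAU_I for t in tau) / N)
    return n_inf

if __name__ == "__main__":
    series = simulate()
    peak = max(range(len(series)), key=series.__getitem__)
    print(f"first peak at t = {peak} with infected fraction {series[peak]:.3f}")
```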
Random rewiring produces larger oscillations, enhancing the maximum number of infected people reached in each period of infection, whereas adaptive rewiring (as humans tend to respond to the emergence of an epidemic by avoiding contacts with infected individuals) reduces the amplitude of the oscillations, accounting for the hoped-for reduction of infected people [3]. Thus, the effect of the Italian Government's social distancing orders, aimed at limiting contacts between individuals, should reduce both the value of the maximum in the curve of infected people and the time at which it is reached. We can therefore say, with an acceptable level of confidence, that the peak in the evolution of the epidemic, with the consequent decrease in infections, should take place between April 2 and April 13, 2020. The results regarding Italy are also important because similar epidemic spreads can occur in other countries in Europe. As can be seen in Figure 5, the curves of the infected from Italy and France [9] overlap almost perfectly. Preventive measures taken by other nations could help significantly reduce the spread of the epidemic if taken early. Conclusions We have attempted to describe the pattern of infections of coronavirus disease 2019 (COVID-19) in Italy from January 31 to today. Our aim was to make a reasonable prediction of the time at which the epidemic will reach its peak, with a consequent decrease of the disease. We used a SIRS model under the framework of a small-world network formulation. The incubation period is a fundamental parameter in the epidemic evolution and can be included directly in the model. We used the model's pattern of infected nodes to fit the real data in the increasing part of the initial wave, and estimated the peak as the maximum value in the model curve of infected people. Alternatively, we used a Gaussian curve and obtained an additional peak estimate. We estimate that the duration of the COVID-19 disease is around 5 months. This is a rough estimate, as the epidemic spread can to some extent be controlled via contact tracing for suspected cases and isolation for confirmed cases.
2,161.6
2020-03-30T00:00:00.000
[ "Computer Science" ]
Two decades of statistical language modeling: where do we go from here? Statistical language models estimate the distribution of various natural language phenomena for the purpose of speech recognition and other language technologies. Since the first significant model was proposed in 1980, many attempts have been made to improve the state of the art. We review them, point to a few promising directions, and argue for a Bayesian approach to integration of linguistic theories with data. OUTLINE Statistical language modeling (SLM) is the attempt to capture regularities of natural language for the purpose of improving the performance of various natural language applications. By and large, statistical language modeling amounts to estimating the probability distribution of various linguistic units, such as words, sentences, and whole documents. Statistical language modeling is crucial for a large variety of language technology applications. These include speech recognition (where SLM got its start), machine translation, document classification and routing, optical character recognition, information retrieval, handwriting recognition, spelling correction, and many more. In machine translation, for example, purely statistical approaches have been introduced in [1]. But even researchers using rule-based approaches have found it beneficial to introduce some elements of SLM and statistical estimation [2]. In information retrieval, a language modeling approach was recently proposed by [3], and a statistical/information-theoretical approach was developed by [4]. SLM employs statistical estimation techniques using language training data, that is, text. Because of the categorical nature of language, and the large vocabularies people naturally use, statistical techniques must estimate a large number of parameters, and consequently depend critically on the availability of large amounts of training data. Over the past twenty years, successively larger amounts of text of various types have become available online. As a result, in domains where such data became available, the quality of language models has increased dramatically. However, this improvement is now beginning to asymptote. Even if online text continues to accumulate at an exponential rate (which it no doubt will, given the growth rate of the web), the quality of currently used statistical language models is not likely to improve by a significant factor. One informal estimate from IBM shows that bigram models effectively saturate within several hundred million words, and trigram models are likely to saturate within a few billion words. In several domains we already have this much data. Ironically, the most successful SLM techniques use very little knowledge of what language really is. The most popular language models (n-grams) take no advantage of the fact that what is being modeled is language: it may as well be a sequence of arbitrary symbols, with no deep structure, intention or thought behind them. A possible reason for this situation is that the knowledge-impoverished but data-optimal techniques of n-grams succeeded too well, and thus stymied work on knowledge-based approaches.
But one can only go so far without knowledge. In the words of the premier proponent of the statistical approach to language modeling, Fred Jelinek, we must 'put language back into language modeling' [5]. Unfortunately, only a handful of attempts have been made to date to incorporate linguistic structure, theories or knowledge into statistical language models, and most such attempts have been only modestly successful. In what follows, section 2 introduces statistical language modeling in more detail, and discusses the potential for improvement in this area. Section 3 overviews major established SLM techniques. Section 4 lists promising current research directions. Finally, section 5 suggests both an interactive approach and a Bayesian approach to the integration of linguistic knowledge into the model, and points to the encoding of such knowledge as a main challenge facing the field. Definition and use A statistical language model is simply a probability distribution P(s) over all possible sentences s (or spoken utterances, documents, or any other linguistic unit). It is instructive to compare statistical language modeling to computational linguistics. Admittedly, the two fields (and communities) have fuzzy boundaries, and a great deal of overlap. Nonetheless, one way to characterize the difference is as follows. Let W be the word sequence of a given sentence, i.e. its surface form, and let T be some hidden structure associated with it (i.e. its parse tree, word senses, etc.). Statistical language modeling is mostly about estimating Pr(W), whereas computational linguistics is mostly about estimating Pr(T | W). Of course, if one could estimate the joint Pr(W, T) well, both Pr(W) and Pr(T | W) could be derived from it. In practice, this is usually not feasible. Statistical language models are usually used in the context of a Bayes classifier, where they can play the role of either the prior or the likelihood function. For example, in automatic speech recognition, given an acoustic signal A, the goal is to find the sentence s that is most likely to have been spoken. Using a Bayesian framework, the solution is:
s* = arg max_s Pr(s | A) = arg max_s Pr(A | s) · P(s)   (1)
where the language model P(s) plays the role of the prior. In contrast, in document classification, given a document D, the goal is to find the class C to which it belongs. Typically, examples of documents from each of the (say) K classes are given, from which K different language models {P_1(D), P_2(D), ..., P_K(D)} are constructed. Using a Bayes classifier, the solution C* is:
C* = arg max_j Pr(C_j | D) = arg max_j Pr(C_j) · P_j(D)   (2)
where the language model P_j(D) plays the role of the likelihood. In a similar fashion, one can derive the role of language models in Bayesian classifiers for the other language technologies listed above.
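As a concrete illustration of the classifier in equation (2), the sketch below builds one language model per class and picks the class maximizing log Pr(C_j) + log P_j(D). A smoothed unigram model stands in for P_j(D) purely to keep the example self-contained; the corpus, class names and function names are invented for illustration.

```python
# Document classification with per-class language models and a Bayes classifier:
#   class* = argmax_j  log P(C_j) + log P_j(D)
# A smoothed unigram model stands in for P_j(D); any LM could be plugged in.
import math
from collections import Counter

class UnigramLM:
    def __init__(self, docs, vocab, alpha=1.0):
        counts = Counter(w for d in docs for w in d.split())
        total = sum(counts.values()) + alpha * len(vocab)
        self.logp = {w: math.log((counts[w] + alpha) / total) for w in vocab}

    def log_prob(self, doc):
        return sum(self.logp[w] for w in doc.split() if w in self.logp)

def train(labelled_docs):
    vocab = {w for d, _ in labelled_docs for w in d.split()}
    classes = {c for _, c in labelled_docs}
    prior = {c: math.log(sum(1 for _, y in labelled_docs if y == c) / len(labelled_docs))
             for c in classes}
    lms = {c: UnigramLM([d for d, y in labelled_docs if y == c], vocab) for c in classes}
    return prior, lms

def classify(doc, prior, lms):
    return max(lms, key=lambda c: prior[c] + lms[c].log_prob(doc))

data = [("the team won the match", "sports"), ("stocks fell on weak earnings", "finance"),
        ("the coach praised the players", "sports"), ("the market rallied today", "finance")]
prior, lms = train(data)
print(classify("the players won the match", prior, lms))
```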
Measures of progress To assess the quality of a given language modeling technique, the likelihood of new data is most commonly used. The average log likelihood of a new random sample is given by:
Average-Log-Likelihood = (1/n) Σ_{i=1}^{n} log P_M(x_i)   (3)
where {x_1, x_2, ..., x_n} is the new data sample, and M is the given language model. This latter quantity can also be viewed as an empirical estimate of the cross entropy of the true (but unknown) data distribution P_T with regard to the model distribution P_M:
H(P_T; P_M) ≈ -(1/n) Σ_{i=1}^{n} log P_M(x_i)   (4)
Actual performance of language models is often reported in terms of perplexity [6]:
PP(P_T; P_M) = 2^{H(P_T; P_M)}   (5)
Perplexity can be interpreted as the (geometric) average branching factor of the language according to the model. It is a function of both the language and the model. When considered a function of the model, it measures how good the model is (the better the model, the lower the perplexity). When considered a function of the language, it estimates the entropy, or complexity, of that language. Ultimately, the quality of a language model must be measured by its effect on the specific application for which it was designed, namely by its effect on the error rate of that application. However, error rates are typically non-linear and poorly understood functions of the language model. Lower perplexity usually results in lower error rates, but there are plenty of counterexamples in the literature. As a rough rule of thumb, a reduction of 5% in perplexity is usually not practically significant; a 10%-20% reduction is noteworthy, and usually (but not always) translates into some improvement in application performance; a perplexity improvement of 30% or more over a good baseline is quite significant (and rare!). Several attempts have been made to devise metrics that are better correlated with application error rate than perplexity, yet are easier to optimize than the error rate itself. These attempts have met with limited success. For now, perplexity continues to be the preferred metric for practical language model construction. For more details, see [7].
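A small numerical sketch of the perplexity computation defined above; the probabilities are hypothetical and chosen purely for illustration.

```python
# Perplexity of a model on held-out text, computed from the average log-likelihood:
#   PP = 2 ** ( - (1/n) * sum_i log2 P_M(x_i) )
# P_M(x_i) is the model probability of the i-th held-out item given its history.
import math

def perplexity(probabilities):
    """probabilities: iterable of model probabilities of each held-out token."""
    logs = [math.log2(p) for p in probabilities]
    avg_log_likelihood = sum(logs) / len(logs)
    return 2 ** (-avg_log_likelihood)

# A uniform model over a 10,000-word vocabulary has perplexity exactly 10,000:
print(perplexity([1 / 10_000] * 500))         # 10000.0
# A model that is usually confident but occasionally very surprised:
print(perplexity([0.1] * 90 + [0.0001] * 10)) # well above 10 because of the rare misses
```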
Known weaknesses in current models Even the simplest language model has a drastic effect on the application in which it is used (this can be observed by, say, removing the language model from a speech recognition system). However, current language modeling techniques are far from optimal. Evidence for this comes from several sources: Brittleness across domains: Current language models are extremely sensitive to changes in the style, topic or genre of the text on which they are trained. For example, to model casual phone conversations, one is much better off using 2 million words of transcripts from such conversations than using 140 million words of transcripts from TV and radio news broadcasts. This effect is quite strong even for changes that seem trivial to a human: a language model trained on Dow-Jones newswire text will see its perplexity doubled when applied to the very similar Associated Press newswire text from the same time period ([8, p. 220]). False independence assumption: In order to remain tractable, virtually all existing language modeling techniques assume some form of independence among different portions of the same document. For example, the most commonly used model, the n-gram, assumes that the probability of the next word in a sentence depends only on the identity of the last n-1 words. Yet even a cursory look at any natural text proves this assumption patently false. False independence assumptions in statistical models usually lead to overly sharp distributions. This is precisely what is happening in language modeling, as can be seen for example in document classification: the posterior computed by equation (2) is usually extremely sharp, reaching virtually one for one of the classes and virtually zero for all others. This of course cannot be the true posterior, since the average classification error rate is typically much greater than zero. Shannon-style experiments: Claude Shannon pioneered the technique of eliciting human knowledge of language by asking human subjects to predict the next element of text [9,10]. Shannon used this technique to bound the entropy of English. [11] formulated a gambling setup and used it to derive its own estimate of the entropy of English. In the 1980s, the speech and language research group at IBM performed 'Shannon-style' experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. Since then, Shannon-style experiments have been performed by several other researchers. For example, [12] performed experiments aimed at establishing the potential for language modeling improvements in specific linguistic areas. A common observation during all these experiments is that people improve on the performance of a language model easily, routinely and substantially. They apparently do so by using reasoning at the linguistic, common sense, and domain levels. SURVEY OF MAJOR SLM TECHNIQUES This section briefly reviews major established SLM techniques. For a more detailed technical treatment, see [13]. Almost all language models to date decompose the probability of a sentence into a product of conditional probabilities:
P(s) = Π_{i=1}^{n} P(w_i | h_i)   (6)
where w_i is the i-th word in the sentence, and h_i = {w_1, w_2, ..., w_{i-1}} is called the history. n-grams n-grams are the staple of current speech recognition technology. Virtually all commercial speech recognition products use some form of an n-gram. An n-gram reduces the dimensionality of the estimation problem by modeling language as a Markov source of order N-1:
P(w_i | h_i) ≈ P(w_i | w_{i-N+1}, ..., w_{i-1})   (7)
The value of N trades off the stability of the estimate (i.e. its variance) against its appropriateness (i.e. bias). A trigram (N = 3) is a common choice with large training corpora (millions of words), whereas a bigram (N = 2) is often used with smaller ones. Deriving trigram and even bigram probabilities is still a sparse estimation problem, even with very large corpora. For example, after observing all trigrams (i.e., consecutive word triplets) in 38 million words' worth of newspaper articles, a full third of trigrams in new articles from the same source are novel [8, p. 8].
Furthermore, even among the observed trigrams, the vast majority occurred only once, and the majority of the rest had similarly low counts. Therefore, straightforward maximum likelihood (ML) estimation of n-gram probabilities from counts is not advisable. Instead, various smoothing techniques have been developed. These include discounting the ML estimates [14,15], recursively backing off to lower order n-grams [16,17,18], and linearly interpolating n-grams of different order [19]. Other approaches include variable-length n-grams [20,21,22,23,24] as well as a lattice approach [25]. Much work has been done to compare and perfect smoothing techniques under various conditions. A good recent analysis can be found in [26]. In addition, toolkits implementing the various techniques have been disseminated [27,28,29,30]. Yet another way to battle sparseness is via vocabulary clustering. Let c_i be the class that word w_i was assigned to. Then any of several model structures could be used. For example, for a trigram:
Pr(w_3 | w_1 w_2) ≈ Pr(w_3 | c_3) · Pr(c_3 | w_1 w_2)   (8)
Pr(w_3 | w_1 w_2) ≈ Pr(w_3 | c_3) · Pr(c_3 | w_1 c_2)   (9)
Pr(w_3 | w_1 w_2) ≈ Pr(w_3 | c_3) · Pr(c_3 | c_1 c_2)   (10)
Pr(w_3 | w_1 w_2) ≈ Pr(w_3 | c_1 c_2)   (11)
The quality of the resulting model depends of course on the clustering c(·). In narrow discourse domains (e.g. ATIS, [31]), good results are often achieved by manual clustering of semantic categories (e.g. [32]). But in less constrained domains, manual clustering by linguistic categories (e.g. parts of speech) does not usually improve on the word-based model. Automatic, iterative clustering using information theoretic criteria [33,34] applied to large corpora can sometimes reduce perplexity by 10% or so, but only after the model is interpolated with its word-based counterpart.
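As an illustration of the interpolation-based smoothing mentioned above, the sketch below implements a toy Jelinek-Mercer-style interpolated trigram; the fixed interpolation weights and the tiny corpus are invented for illustration, and real systems estimate the weights on held-out data.

```python
# A small interpolated trigram: the trigram ML estimate is linearly interpolated
# with bigram, unigram and uniform estimates to avoid zero probabilities.
from collections import Counter

class InterpolatedTrigram:
    def __init__(self, tokens, lambdas=(0.5, 0.3, 0.15, 0.05)):
        self.l3, self.l2, self.l1, self.l0 = lambdas
        self.uni = Counter(tokens)
        self.bi = Counter(zip(tokens, tokens[1:]))
        self.tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
        self.n = len(tokens)
        self.vocab = len(self.uni)

    def prob(self, w, w1, w2):
        """P(w | w1 w2) as an interpolation of ML estimates of decreasing order."""
        p3 = self.tri[(w1, w2, w)] / self.bi[(w1, w2)] if self.bi[(w1, w2)] else 0.0
        p2 = self.bi[(w2, w)] / self.uni[w2] if self.uni[w2] else 0.0
        p1 = self.uni[w] / self.n
        p0 = 1.0 / self.vocab
        return self.l3 * p3 + self.l2 * p2 + self.l1 * p1 + self.l0 * p0

corpus = "the cat sat on the mat the cat ate the fish".split()
lm = InterpolatedTrigram(corpus)
print(lm.prob("sat", "the", "cat"))   # seen trigram: dominated by the trigram term
print(lm.prob("ran", "the", "cat"))   # unseen word: only the uniform term contributes
```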
Decision tree models Decision trees and CART-style [35] algorithms were first applied to language modeling by [36]. A decision tree can arbitrarily partition the space of histories by asking arbitrary binary questions about the history h at each of the internal nodes. The training data at each leaf is then used to construct a probability distribution Pr(w | h) over the next word. To reduce the variance of the estimate, this leaf distribution is interpolated with internal-node distributions found along the path to the root. As usual, trees are grown by greedily selecting, at each node, the most informative question (as judged by reduction in entropy). Pruning and cross validation are also used. Applying CART technology to language modeling is quite a challenge: the space of histories is very large (10^100 for a 20-word sequence over a 100,000-word vocabulary), and the space of possible questions is even larger (2^(10^100)). Even if questions are restricted to individual words in the history, there are still 2·10^6 such questions. Very strong bias must be introduced, by restricting the class of questions to be considered and using greedy search algorithms. To support optimal single-word questions at a given node, algorithms were developed for rapid optimal binary partitioning of the vocabulary (e.g. [37]). The first attempt at CART-style LM [36] used a history window of 20 words and restricted questions to individual words, though it allowed more complicated questions consisting of composites of simple questions. It took many months to train, and the result fell short of expectations: a 4% reduction in perplexity over the baseline trigram, and a further 9% reduction when interpolated with the latter. In the second attempt [38], much stronger bias was introduced: first, the vocabulary was clustered into a binary hierarchy as in [33], and each word was assigned a bit-string representing the path leading to it from the root. Then, tree questions were restricted to the identity of the most significant as-yet-unknown bit in each word in the history. This reduced the candidate set to a handful of questions at each node. Unfortunately, results here were also disappointing, and the approach was largely abandoned. Theoretically, decision trees represent the ultimate in partition-based models. It is likely that trees exist which significantly outperform n-grams. But finding them seems difficult, for both computational and data sparseness reasons. Linguistically motivated models While all SLMs get some inspiration from an intuitive view of language, in most models actual linguistic content is quite negligible. Several SLM techniques, however, are directly derived from grammars commonly used by linguists. Context free grammar (CFG) is a crude yet well understood model of natural language. A CFG is defined by a vocabulary, a set of non-terminal symbols and a set of production or transition rules. Sentences are generated, starting with an initial non-terminal, by repeated application of the transition rules, each transforming a non-terminal into a sequence of terminals (i.e. words) and non-terminals, until a terminals-only sequence is achieved. Specific CFGs have been created based on parsed and annotated corpora such as [39], with good, though still incomplete, coverage of new data. A probabilistic (or stochastic) context free grammar puts a probability distribution on the transitions emanating from each non-terminal, thereby inducing a distribution over the set of all sentences. These transition probabilities can be estimated from annotated corpora using the Inside-Outside algorithm [40], an Estimation-Maximization (EM) algorithm (see [41]). However, the likelihood surfaces of these models tend to contain many local maxima, and the locally maximal likelihood points found by the algorithm usually fall short of the global maximum. Furthermore, even if global ML estimation were feasible, it is generally believed that context-sensitive transition probabilities are needed to adequately account for the actual behavior of language. Unfortunately, no efficient training algorithm is known for this situation. In spite of this, [42] successfully incorporated CFG knowledge sources into a SLM to achieve a 15% reduction in speech recognition error rate in the ATIS domain. They did so by parsing the utterances with a CFG to produce a sequence of grammatical fragments of various types, then constructing a trigram of fragment types to supplant the standard n-gram.
Link grammar is a lexicalized grammar proposed by [43]. Each word is associated with one or more ordered sets of typed links; each such link must be connected to a similarly typed link of another word in the sentence. A legal parse consists of satisfying all links in the sentence via a planar graph. Link grammar has the same expressive power as a CFG, but arguably conforms better to human linguistic intuition. A link grammar for English has been constructed manually with good coverage. Probabilistic forms of link grammar have also been attempted [44]. Link grammar is related to dependency grammar, which will be discussed in section 4. Exponential models All models discussed so far suffer from data fragmentation, in that more detailed modeling necessarily results in each new parameter being estimated with less and less data. This is very apparent in decision trees, where, as the tree grows, leaves contain fewer and fewer data points. Fragmentation can be avoided by using an exponential model of the form:
P(w | h) = (1 / Z(h)) · exp( Σ_i λ_i f_i(h, w) )   (12)
where the λ_i are the parameters, Z(h) is a normalizing term, and the features f_i(h, w) are arbitrary functions of the word-history pair. Given a training corpus, the ML estimate can be shown to satisfy the constraints:
E_P[ f_i ] = E_{P'}[ f_i ]   (13)
where P' is the empirical distribution of the training corpus, i.e. the model's expectation of each feature must match its empirical expectation. The ML estimate can also be shown to coincide with the Maximum Entropy (ME) distribution [45], namely the one with highest entropy among all distributions satisfying equation (13). This unique ML/ME solution can be found by an iterative procedure [46,47]. The ME paradigm, and the more general MDI framework, were first suggested for language modeling by [48], and have since seen considerable success (e.g. [49,50,8]). Its strength lies in principled incorporation of arbitrary knowledge sources while avoiding fragmentation. For example, in [8], conventional n-grams, distance-2 n-grams, and long distance word pairs ("triggers") were encoded as features, and resulted in up to 39% perplexity reduction and up to 14% speech recognition word error rate reduction over the trigram baseline. While ME modeling is elegant and general, it is not without its weaknesses. Training a ME model is computationally challenging, and sometimes altogether infeasible. Using a ME model is also CPU intensive, because of the need for explicit normalization. Unnormalized ME modeling is attempted in [51]. ME smoothing is analyzed in [52]. The relative success of ME modeling focused attention on the remaining problem of feature induction, namely, selection of useful features to be included in the model. An automatic iterative procedure for selecting features from a given candidate set is described in [47]. An interactive procedure for eliciting candidate sets is described in [53].
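A minimal sketch of the conditional exponential (ME) model of equation (12), with two invented binary features and explicit computation of the normalizer Z(h); it is meant only to make the parameterization concrete, not to reproduce any published system.

```python
# A toy conditional exponential model:
#   P(w | h) = exp( sum_i lambda_i * f_i(h, w) ) / Z(h)
# with a bigram-style feature and a long-distance "trigger" feature.
# Feature definitions and weights are invented; real models use thousands of
# features trained with iterative scaling or gradient methods.
import math

VOCAB = ["loan", "bank", "river", "water"]

def features(history, word):
    """f_i(h, w): a bigram-style feature and a long-distance trigger feature."""
    return [
        1.0 if history and history[-1] == "the" and word == "bank" else 0.0,
        1.0 if "river" in history and word == "water" else 0.0,
    ]

def prob(word, history, lambdas):
    scores = {w: math.exp(sum(l * f for l, f in zip(lambdas, features(history, w))))
              for w in VOCAB}
    z = sum(scores.values())            # Z(h): explicit normalization over the vocabulary
    return scores[word] / z

lambdas = [1.5, 0.8]
print(prob("bank", ["the"], lambdas))           # boosted by the bigram feature
print(prob("water", ["river", "the"], lambdas)) # boosted by the long-distance trigger feature
```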
Adaptive models So far we have treated language as a homogeneous source. But in fact natural language is highly heterogeneous, with varying topics, genres and styles. In cross-domain adaptation, test data comes from a source to which the language model has not been exposed during training. The only useful adaptation information is in the current document itself. A common and quite effective technique for exploiting this information is the cache: the (continuously developing) history is used to create, at runtime, a dynamic n-gram P_cache(w | h), which in turn is interpolated with the static model:
P(w | h) = λ · P_cache(w | h) + (1 - λ) · P_static(w | h)   (14)
with the weight λ optimized on held-out data. Cache LMs were first introduced by [59] and [60]. [61,62] report reduction in perplexity, and [63] also reports reduction in recognition error rate. [64] introduced yet another adaptation scheme. In within-domain adaptation, test data comes from the same source as the training data, but the latter is heterogeneous, consisting of many subsets with varying topics, styles, or both. Adaptation then proceeds in the following steps: 1. Clustering the training corpus along the dimension of variability, say, topic (e.g. [65]). 2. At runtime, identifying the topic or set of topics ([66,67]) of the test data. 3. Locating appropriate subsets of the training corpus, and using them to build a specific model. 4. Combining the specific model with a corpus-wide model (in statistical terminology, shrinking the specific model towards the general one, to trade off the former's variance against the latter's bias). This is usually done via linear interpolation, at either the word probability level or the sentence probability level [65]. A special (and very common) case is when one has only small amounts of data in the target domain and large amounts in other domains. In this case, the only relevant step is the last one: combining models from the two domains. The outcome here is often disappointing, though: training data outside the domain has surprisingly little benefit. For example, when modeling the Switchboard domain (conversational speech, [68]), the 40 million words of the WSJ corpus (newspaper articles, [69]) and even the 140 million words of the BN corpus (broadcast news transcriptions, [70]) improve the application performance of the in-domain model, trained on a paltry 2.5 million words, by only a few percentage points. Although this is a significant improvement on such a difficult corpus, it is nonetheless disappointing considering the amount of data involved. By some estimates [71], another 1 million words of Switchboard data would help the model more than 30 million words of out-of-domain data. This suggests that our adaptation techniques are too crude.
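The cache interpolation of equation (14) is simple enough to sketch directly; the static unigram table, the interpolation weight and the example document below are invented for illustration.

```python
# Cache-based adaptation: the adapted probability is a linear interpolation of a
# dynamic cache estimate built from the recent history and a static model.
from collections import Counter

STATIC = {"the": 0.06, "of": 0.03, "unicorn": 0.00001}   # illustrative static unigram probs

def adapted_prob(word, recent_words, lam=0.2):
    cache = Counter(recent_words)
    p_cache = cache[word] / len(recent_words) if recent_words else 0.0
    p_static = STATIC.get(word, 1e-6)
    return lam * p_cache + (1.0 - lam) * p_static

document_so_far = "the unicorn startup hired the unicorn trainer".split()
print(adapted_prob("unicorn", document_so_far))  # rare word, boosted because it recurs in this document
print(adapted_prob("of", document_so_far))       # words absent from the cache fall back to the static estimate
```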
PROMISING CURRENT DIRECTIONS This section discusses current research directions that, in this author's subjective opinion, show significant promise. Dependency models Dependency grammars (DG) describe sentences in terms of asymmetric pairwise relationships among words. With a single exception, each word in the sentence is dependent upon one other word, called its head or parent. The single exception is the root, which serves as the head of the entire sentence. For more about DGs, see [72]. Probabilistic DGs have also been developed, together with algorithms for learning them from corpora (e.g. [73]). Probabilistic dependency grammars are particularly suited to n-gram style modeling, where each word is predicted based on a small number of other words. The main difference is that in a conventional n-gram, the structure of the model is predetermined: each word is predicted from a few words that immediately preceded it. In DG, which words serve as predictors depends on the dependency graph, which is a hidden variable. A typical implementation will parse a sentence S to generate the most likely dependency graphs G_i (with attendant probabilities P(G_i)), compute for each of them a generation probability P(S | G_i) (either n-gram style or perhaps as an ME model), and finally estimate the complete sentence probability as P(S) ≈ Σ_i P(G_i) · P(S | G_i) (this is only approximate because the P(G_i) themselves were derived from the sentence S). Sometimes P(S) is further approximated as P(S | G*), where G* is the single best scoring parse. An example of such a model is [74], which uses the parser of [75] to generate the candidate parses, and trains the parameters using maximum entropy. The probabilistic link grammar [44] mentioned in section 3.3 also falls roughly in this category. Most recently, [76] employed a parser with a probabilistic parameterization of a pushdown automaton, and used an EM-type algorithm for training, with encouraging results (1% recognition word error rate reduction on the notoriously difficult Switchboard corpus). In all, this method of combining hidden linguistic structure with chain-rule parameterization can yield a linguistically grounded yet computationally tractable model. Dimensionality reduction One of the reasons language is so hard to model statistically is that it is ostensibly categorical, with an extremely large number of categories, or dimensions. A prime example is the vocabulary. To most language models, the vocabulary is but a very large set of unrelated entries. BANK is no closer to LOAN or to BANKS than it is to, say, BRAZIL. This results in a large number of parameters. Yet our linguistic intuition is that there is a great deal of structure in the relationships among words. We feel that the "true" dimension of the vocabulary is actually much lower.
Similarly, for other phenomena in language, the underlying space may be of moderate or even low dimensionality. Consider topic adaptation. As the topic changes, the probabilities of almost all words in the vocabulary change. Since no two documents are exactly about the same thing, a straightforward approach would require an inordinate number of parameters. Yet the underlying topic space can be reasonably modeled in far fewer dimensions. This is the motivation behind [77], which uses the technique of Latent Semantic Analysis ([78]) to simultaneously reduce the dimensionality of the vocabulary and that of the topic space. First, the occurrence of each vocabulary word in each document is tabulated. This very large matrix is then reduced via Singular Value Decomposition to a much lower dimension (typically 100-150). The new, smaller matrix captures the most salient correlations between specific combinations of words on one hand and clusters of documents on the other. The decomposition also yields matrices that project from document-space and word-space into the new, combined space. Consequently, any new document can be projected into the combined space, effectively being classified as a combination of the fundamental underlying topics, and adapted to accordingly. In [77], this type of adaptation is combined with an n-gram, and a perplexity reduction of 30% over a trigram baseline is reported. In [79], the technique is further developed and is found to also reduce recognition errors by 16% over a trigram baseline.
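A toy sketch of the LSA-style dimensionality reduction described above, using numpy's SVD; the corpus, the choice of k = 2 dimensions and the folding-in projection shown are illustrative assumptions rather than the exact setup of [77].

```python
# LSA sketch: factor a word-by-document count matrix with a truncated SVD and
# project both words and new documents into the same low-dimensional space.
import numpy as np

docs = ["bank approves loan", "bank loan rates rise", "river bank floods", "river water rises"]
vocab = sorted({w for d in docs for w in d.split()})
word_index = {w: i for i, w in enumerate(vocab)}

counts = np.zeros((len(vocab), len(docs)))       # word-by-document matrix
for j, d in enumerate(docs):
    for w in d.split():
        counts[word_index[w], j] += 1

U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                                            # keep the k most salient dimensions
U_k, s_k = U[:, :k], s[:k]

def project(doc):
    """Fold a new document into the k-dimensional latent space."""
    v = np.zeros(len(vocab))
    for w in doc.split():
        if w in word_index:
            v[word_index[w]] += 1
    return v @ U_k / s_k                         # standard LSA folding-in projection

finance = project("loan rates")
nature = project("river water")
query = project("bank floods")
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
print(cos(query, finance), cos(query, nature))   # the query should sit closer to the river/nature sense
```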
Parse-based features have been tried in [81], and semantic features are discussed in [53].An interactive methodology for feature induction was also proposed in [53].This methodology leads to a formulation of the training problem as logistic regression, with significant practical benefits over ML training. CHALLENGES Perhaps the most frustrating aspect of statistical language modeling is the contrast between our intuition as speakers of natural language and the over-simplistic nature of our most successful models. As native speakers, we feel strongly that language has a deep structure.Yet we are not sure how to articulate that structure, let alone encode it, in a probabilistic framework.Established linguistic theories have been of surprisingly little help here, probably because their goal is to draw a line between what is properly in the language and what isn't, whereas SLM's goals are quite different. As an example, consider the problem of clustering the vocabulary words which was discussed in section 3.1.As mentioned there, several automatic iterative methods have been proposed (e.g.[33,34]).Table 1 lists example word classes derived by such a method [82].While most words' placement appear satisfactory, a few of the words seem out of place.Not surprisingly, these are often words whose count in the corpus was insufficient for reliable assignment.Ironically, it is exactly these words which stood to benefit the most from clustering.In general, the more reliably a word can be assigned to a class, the less it will benefit from that assignment.How then is vocabulary clustering to become effective?I believe that the solution to this problem, and others like it, is to inject human knowledge of language into the process.This can take the following forms: Interactive modeling.Data-driven optimization and human knowledge and decision making can play complementary roles in an intertwined iterative process.For the vocabulary clustering problem, this means that a human is put in the loop, to arbitrate some borderline decisions and override others.For example, a human can decide that 'TUESDAY' belongs in the same cluster as 'MONDAY', 'WEDNES-DAY', 'THURSDAY' and 'FRIDAY', even if it did not occur enough times to be placed there automatically, and even if it did not occur at all.Another example of this approach is the interactive feature induction methodology described in [53]. Encoding knowledge as priors.One of the perils of using human knowledge is that it is often overstated, and sometimes wrong.Thus a better solution might be to encode such knowledge as a prior in a Bayesian updating scheme.After training, whatever phenomena are not sufficiently represented in the training corpus will continue to be captured thanks to the prior.Whenever enough data exist, however, they will override the prior.For the vocabulary clustering problem, experts' beliefs about the relationships between vocabulary entries must be suitably encoded, and the clustering paradigm must be changed to optimize an appropriate posterior measure.Thus, in the example above, enough data may exist to separate out 'FRIDAY' because of its use in phrases like "Thank God It's Friday". 
Encoding linguistic knowledge as a prior is an exciting challenge which has yet to be seriously attempted. This will likely include defining a distance metric over words and phrases, and a stochastic version of structured word ontologies like WordNet [83]. At the syntactic level, it could include Bayesian versions of manually created lexicalized grammars. In practice, the Bayesian framework and the interactive process may be combined, taking advantage of the superior theoretical foundation of the former and the computational advantages of the latter.
7,413
2000-08-01T00:00:00.000
[ "Linguistics", "Computer Science" ]
Tuning spin-orbit torques across the phase transition in VO$_2$/NiFe heterostructure

The emergence of spin-orbit torques as a promising approach to energy-efficient magnetic switching has generated large interest in material systems with easily and fully tunable spin-orbit torques. Here, current-induced spin-orbit torques in VO$_2$/NiFe heterostructures were investigated using spin-torque ferromagnetic resonance, where the VO$_2$ layer undergoes a prominent insulator-metal transition. A roughly two-fold increase in the Gilbert damping parameter, $\alpha$, with temperature was attributed to the change in the VO$_2$/NiFe interface spin absorption across the VO$_2$ phase transition. More remarkably, a large modulation ($\pm$100%) and a sign change of the current-induced spin-orbit torque across the VO$_2$ phase transition suggest two competing spin-orbit torque generating mechanisms. The bulk spin Hall effect in metallic VO$_2$, corroborated by our first-principles calculation of spin Hall conductivity, $\sigma_{SH} \approx 10^4 \frac{\hbar}{e} \Omega^{-1} m^{-1}$, is verified as the main source of the spin-orbit torque in the metallic phase. The self-induced/anomalous torque in NiFe, of the opposite sign and a similar magnitude to the bulk spin Hall effect in metallic VO$_2$, could be the other competing mechanism that dominates as temperature decreases. For applications, the strong tunability of the torque strength and direction opens a new route to tailor spin-orbit torques of materials which undergo phase transitions for new device functionalities.

INTRODUCTION

Long-term goals of spintronics are the generation and the utilisation of spin currents for information processing and storage 1,2 . Compared to optical and electrical spin injection schemes 3,4 , spin current generation via the spin-orbit interaction has demonstrated efficient charge-to-spin conversion 5,6 and has received much interest, not only for the fundamental understanding but in particular for technological applications. Notably, magnetisation switching via spin-orbit torques 7,8 offers a number of advantages over conventional spin-transfer torque switching and is actively being developed into new-generation spintronic devices such as spin-orbit torque magnetoresistive random access memory 9 . The main mechanisms behind these recent advances are the current-induced spin-orbit torques 10,11 . The torques can be realised in a number of different ferromagnet-nonmagnet systems, and efficient charge-to-spin conversion was observed not only in conventional metallic heterostructures but also in non-magnetic metal bilayers 12 , semiconductor quantum wells [13][14][15] and topological insulators [16][17][18] . So far, various mechanisms for the observed spin-orbit torques have been identified; however, it has often been challenging to pinpoint the origin of the spin-orbit torques because different mechanisms contribute at the same time and compete with each other.
Furthermore, varying layer thicknesses in order to disentangle bulk and interface effects poses difficulties, as the growth and interface properties change with the thicknesses. For the bulk effects, the spin Hall effect of the nonmagnet has been regarded as one of the main contributions, and the values can now also be calculated theoretically 6,8 . Meanwhile, while the effect of spin-orbit coupling in the ferromagnet has so far been regarded as negligible, a recent experiment 19 revealed that even a single ferromagnet can generate a substantial self-induced torque with a defined sign. Moreover, it was shown that the orbital Hall current generated in the nonmagnetic layer can also contribute strongly to the torque 20 . The spin-orbit torque efficiency is a parameter that is usually set for a specific material and interface, and it cannot be modulated easily. While it has recently been shown that strain can be used to control the spin-orbit torque to some extent 21 , a piezoelectric substrate is often required, which complicates growth and optimisation of thin films. In this respect, an interesting material is vanadium dioxide (VO2), a transition metal oxide which undergoes a prominent insulator-metal transition with temperature. The hysteretic phase transition allows deliberate switching between insulating and metallic states, which can then influence the current flow and thus spin-orbit effects. The change in the VO2 orbital occupation 22 across the structural phase transition leads to large changes in electrical resistivity 23 as well as optical 24 , structural 25 and magnetic properties [26][27][28] , and is expected to affect the spin-orbit coupling directly 29 . However, the effect of the VO2 phase transition on current-induced spin-orbit torques in a VO2/ferromagnet heterostructure, which is of key importance for future functionalisation, has not been investigated and is therefore the main focus of this study.

In this work, we investigate current-induced spin-orbit torques in a VO2/NiFe heterostructure across the VO2 insulator-metal phase transition with an emphasis on functionalisation. The sign and the magnitude of the generated spin-orbit torques are probed using the spin-torque ferromagnetic resonance (ST-FMR) technique, where we inspect resonance linewidths of the bilayer strips with an additional DC current through the strip. Because the electrical resistivity of the VO2 layer changes by several orders of magnitude across the phase transition, the ratio of the applied charge currents in the VO2 and NiFe layers is controlled by changing temperature. In particular, we quantify the large variation, including a sign change, of the spin-orbit torque in the different VO2 phases. The observed hysteretic, phase-dependent spin-orbit torques could be utilised for future device concepts.

ST-FMR MEASUREMENTS ACROSS INSULATOR-METAL TRANSITION

Firstly, several structural characterisations were performed to inspect the quality of the VO2 films. Figure 1a shows an out-of-plane x-ray diffraction spectrum of the 70 nm thick VO2 film deposited by reactive sputtering on Al2O3(1-102). The (110), (200), and (111) VO2 peaks are visible, indicating the polycrystalline growth of the VO2 film. In Figure 1b, a 1 μm x 1 μm atomic force microscope image of the same film shows large structural domains a few hundred nm in size, with a root-mean-square roughness of 6.3 nm.
Figure 1c displays the temperature dependence of the van der Pauw resistance of the as-deposited VO2 film, where an insulator-metal transition with a resistance change of four orders of magnitude is observed, as reported previously 23 . In order to characterise the current-induced spin-orbit torques of the VO2 layers, a Ni81Fe19 (5 nm) / MgO (2 nm) / Ta (3 nm) multilayer stack is sputter-deposited on top of the VO2 (70 nm) film. M-H hysteresis loops of the resulting multilayer stack measured at 300 K are shown in Figure 1d. The magnetically soft NiFe layer is fully saturated by a 10 mT magnetic field, to within 10% of the expected bulk saturation value of ~8.8 × 10^5 A m^-1, with the coercivity below 1 mT.

In the ST-FMR measurements, the measured mixing voltage Vmix is fitted to a sum of symmetric and anti-symmetric Lorentzian components,

Vmix = S W^2 / [(μ0H − μ0Hres)^2 + W^2] + A W(μ0H − μ0Hres) / [(μ0H − μ0Hres)^2 + W^2] + Voffset,

where S and A are the symmetric and the anti-symmetric coefficients, W is the resonance linewidth, μ0 is the vacuum permeability, Hres is the resonance field and Voffset is an offset voltage in the measurement. The symmetric component, S, of Vmix is proportional to the damping-like torque generated by the spin current from the bulk VO2 layer and the VO2/NiFe interface, while the anti-symmetric component, A, is generated by the Oersted field produced by the RF excitation current as well as the field-like torque arising from the spin current. The RF frequency dependence of the resonance field and the linewidth W can be found in Supporting Information (Figure S2). As the sample temperature increases, the VO2 becomes more metallic and the amount of the RF current through the NiFe layer that produces the Vmix signal decreases. The relationship between the resonance linewidth and the driving frequency f can be described by

W = W0 + 2π α f / γ,

where W0 is the inhomogeneous broadening, γ is the electron gyromagnetic ratio, and α is the Gilbert damping parameter of NiFe. Table 1 lists the fitted parameter values, the number of field-sweeps and the average R^2 values for the measurements at 290 K.

The changes in the linewidth W with the added DC current are shown in Figure 4b and Figure 4c at 290 K and 355 K, respectively. (The ST-FMR spectra with DC currents at different temperatures can be seen in Supporting Information Figure S4.) The change in the linewidth is linearly proportional to the magnitude of the DC bias, indicating that the generated spin current is also linearly proportional to the applied charge current. Remarkably, we observed a sign change of the torques across the phase transition of VO2, suggesting competing origins of the spin-orbit torques. Figure 4d summarises the DC-induced linewidth changes, ΔW/IDC, at different temperatures, as compared to the device resistance across the phase transition. At 290 K, the VO2 layer resistance is several orders of magnitude higher than that of the NiFe layer, and most of the applied DC current flows through the NiFe layer. The lack of DC current flowing through the VO2 layer eliminates the bulk spin Hall effect in the VO2 as the main origin of the large spin-orbit torque observed at this temperature. The effect of the self-induced torque in NiFe, as observed in Ni 19 , can explain the observed sign of the signal. Additional interfacial effects, such as the inverse spin galvanic effect prominent in many Rashba-like interfaces 15,17,33 , can also contribute, but these effects have been reported not to have a unique sign of the generated torques. Furthermore, as the interface between the VO2 and the NiFe is present in both the low and the high temperature phase, it is not clear that strongly different inverse spin galvanic effects can be expected as a function of temperature.
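The two linear relations used in this analysis, the frequency dependence of the linewidth (to extract α) and the DC-bias dependence of the linewidth (to extract ΔW/IDC), can be fitted with a few lines of code. The sketch below uses synthetic numbers purely for illustration; the function names, parameter values and the quoted gyromagnetic ratio are assumptions, not the measured data of this work.

```python
import numpy as np

GAMMA = 1.76e11  # electron gyromagnetic ratio, rad s^-1 T^-1 (approximate literature value)

def fit_damping(freqs_hz, linewidths_tesla):
    """Linear fit of W = W0 + (2*pi*alpha/gamma)*f; returns (alpha, W0)."""
    slope, w0 = np.polyfit(freqs_hz, linewidths_tesla, 1)
    return slope * GAMMA / (2 * np.pi), w0

def fit_linewidth_vs_dc(dc_currents_amp, linewidths_tesla):
    """Linear fit of W versus DC bias; the slope dW/dI_DC tracks the damping-like torque."""
    slope, _ = np.polyfit(dc_currents_amp, linewidths_tesla, 1)
    return slope

# Synthetic check: alpha = 0.01, W0 = 0.5 mT, and a DC slope of 0.2 mT/mA.
f = np.linspace(4e9, 12e9, 9)
w_f = 0.5e-3 + (2 * np.pi * 0.01 / GAMMA) * f
print(fit_damping(f, w_f))             # recovers ~ (0.01, 5e-4)

i_dc = np.linspace(-2e-3, 2e-3, 9)
w_i = 0.7e-3 + 0.2 * i_dc              # 0.2 T/A = 0.2 mT/mA
print(fit_linewidth_vs_dc(i_dc, w_i))  # recovers ~ 0.2
```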
As the temperature increases, VO2 undergoes an insulator-metal transition and more current flows through the VO2 layer. (The device resistance dependence of ΔW/IDC can be found in Supporting Information Figure S5.) Therefore, this charge current can create spin currents in the VO2 layer via the bulk spin Hall effect, which competes with the other contributions such as the self-induced torque from NiFe or the interfacial torque. As seen in Figure 4d, the observed total spin-orbit torque decreases in magnitude with increasing temperature from 290 K, goes through a sign change at ~325 K, near the middle of the insulator-metal transition, and then increases again in magnitude up to 355 K. The spin-orbit torque generated via the bulk spin Hall effect in the metallic VO2 layer at 355 K is of the same sign as seen in V/CoFeB 34 and VO2/YIG 32 (negative effective spin Hall angle).

METALLIC VO2 & DISCUSSION

Taking into account all the above points, the large changes in the spin-orbit torque observed in our system can be interpreted by two competing mechanisms. One of the major changes brought forward by the insulator-metal transition is the electric current flowing within the VO2 layer. In the metallic phase of the VO2, this current in turn generates a spin/orbital Hall current, which is injected into the NiFe layer and thus exerts a torque. In order to estimate this effect quantitatively, we performed first-principles calculations of the spin and orbital Hall conductivities of metallic VO2 in the rutile structure, as seen in Figure 5. In the figure, we show σ_SH (blue solid line) and σ_OH (red dashed line) as a function of the Fermi energy (E_F) with respect to the true Fermi energy (E_F^true), where E_F is varied assuming that the potential is fixed to the potential for E_F^true. The result indicates that there are two peaks for σ_SH near E_F ≈ E_F^true. On the other hand, a peak of σ_OH is located ∼0.3 eV above E_F^true. The values for the spin and orbital Hall conductivities at the true Fermi energy are σ_SH = −96 (ℏ/e)(Ω⋅cm)^−1 and σ_OH = +320 (ℏ/e)(Ω⋅cm)^−1, respectively. More details of the calculation can be found in Supporting Information (Section IV). Although the orbital Hall conductivity is larger than the spin Hall conductivity, its contribution to the torque is expected to be negligible here since the orbital-to-spin conversion ratio in NiFe is expected to be less than 10% 20,35 . We would like to point out that the sign of the computed spin Hall conductivity is consistent with the sign of the effective spin Hall angle measured in the experiment, which allows us to conclude that the spin Hall effect of the VO2 is one of the main mechanisms for the torque when VO2 is driven into the metallic phase.

Meanwhile, there can be another contribution from the so-called self-induced torque/anomalous spin-orbit torque 19 in the ferromagnetic layer itself. As predicted and experimentally observed previously 19,36 , this can be interpreted as the transfer of spin angular momentum between spin-polarised charge currents and the magnetisation. While the spin Hall conductivity associated with this anomalous spin-orbit torque in NiFe itself was found to be large, at ~2,300 S/cm, the value reduces to 10-100 S/cm at an interface with a non-magnetic layer such as Cu or AlOx, due to the additional angular momentum loss to the lattice via spin-orbit coupling. The expected magnitude and the positive sign of this spin Hall conductivity explain well the observed behaviour in our NiFe/VO2 bilayer system across the VO2 phase transition.
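For reference, the computed bulk value can be expressed in the SI units used in the abstract (a straightforward unit conversion, not additional data): $\sigma_{SH} = -96\,(\hbar/e)\,(\Omega\cdot\mathrm{cm})^{-1} = -9.6\times10^{3}\,(\hbar/e)\,(\Omega\cdot\mathrm{m})^{-1} \approx -10^{4}\,\frac{\hbar}{e}\,\Omega^{-1}\,\mathrm{m}^{-1}$, i.e. of the magnitude quoted in the abstract, with the negative sign corresponding to the negative effective spin Hall angle inferred for the metallic phase at 355 K.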
In the VO2 insulating regime, the spin-orbit torque arises purely from the self-induced torque of the NiFe layer, while as the VO2 becomes more metallic across the phase transition, the bulk, negative spin Hall effect of the VO2 dominates and reverses the spin-orbit torque direction. Finally, there may be a contribution to the torque originating from interfacial scattering, but we expect that the interfacial contribution does not change drastically across the insulator-metal transition, since the strain induced by the structural phase transition is small (typically ~1% 27 ).

We can now compare our results with the previous spin-pumping inverse spin Hall effect (SP-ISHE) measurements in VO2/YIG 32 . In that system, the only source of the observed ISHE signal is the spin-to-charge conversion within the VO2. Therefore, there is no phase-dependent reversal of the signal with temperature, but only a broadening and a reduction of the signal due to the increased interface spin-transparency in the high-temperature metallic phase, which is also observed in our case as the increase in the Gilbert damping parameter α (Figure 3). In VO2/YIG, the SP-ISHE signal is largely affected by the conductivity change of the VO2, which is observed as a sharp decrease in the signal at high temperature. In our system, the ST-FMR signal depends on the rectified anisotropic magnetoresistance (AMR) effect in NiFe, whose conductivity does not change significantly across the VO2 phase transition. This allows spin-orbit torques present at the VO2/ferromagnet interface to be measured directly across the VO2 phase transition. As studied in depth using X-ray absorption spectroscopy 22 , the change in the VO2 orbital occupation across the phase transition is likely to directly affect the current-induced spin-orbit torque generation mechanisms. The investigation of the orbital correlation and its effect on the spin-orbit torque at the VO2/ferromagnet interface is reserved for future work.

CONCLUSIONS

We have measured the current-induced spin-orbit torques in the VO2/NiFe bilayer system using the spin-torque ferromagnetic resonance technique. A sign change of the damping-like spin-orbit torques with temperature is observed across the VO2 layer phase transition. The sign change and the modulation of the observed torques with temperature suggest the coexistence of competing mechanisms, mainly the bulk spin Hall effect in metallic VO2, corroborated by our first-principles calculation, and the self-induced torque in NiFe. While additional interfacial effects can play a role, we expect these not to change significantly across the transition, but additional measurements could be carried out to identify further possible contributions. For applications, the large (±100%) modulation, as well as the sign change, of the spin-orbit torque enables full tunability of the torque to any desired value via engineering of the device thermal history, leading to drastically different device architectures.

Supporting Information
3,679.2
2022-01-17T00:00:00.000
[ "Physics", "Materials Science" ]
Exponential and Laplace approximation for occupation statistics of branching random walk

We study occupancy counts for the critical nearest-neighbor branching random walk on the d-dimensional lattice, conditioned on non-extinction. For d ≥ 3, Lalley and Zheng [4] showed that the properly scaled joint distribution of the number of sites occupied by j generation-n particles, j = 1, 2, . . ., converges in distribution, as n goes to infinity, to a deterministic multiple of a single exponential random variable. The limiting exponential variable can be understood as the classical Yaglom limit of the total population size of generation n. Here we study the second order fluctuations around this limit, first, by providing a rate of convergence in the Wasserstein metric that holds for all d ≥ 3, and second, by showing that for d ≥ 7, the weak limit of the scaled joint differences between the number of occupancy-j sites and appropriate multiples of the total population size converges in the Wasserstein metric to a multivariate symmetric Laplace distribution. We also provide a rate of convergence for this latter result.

Introduction

Branching random walk (BRW) is a fundamental mathematical model of a population evolving in time and space, which has been intensely studied for more than 50 years due to its connection to population genetics and superprocesses; see, e.g., [1, Chapter 9] and references. Among this literature, the most relevant to our study is [4, Theorem 5], which states that the exponential distribution arises asymptotically for certain occupation statistics of a critical BRW conditioned on non-extinction. Their result is closely related to the classical theorem of [13], which says that the distribution of the size of a critical Galton-Watson process, properly scaled and conditioned on non-extinction, converges to the exponential distribution. Yaglom's theorem has a large related literature of embellishments and extensions, e.g., [5] and [2] give elegant probabilistic proofs, and [7] gives a rate of convergence using Stein's method.

We now define the nearest neighbor critical BRW on the d-dimensional integer lattice. At each time step n = 1, 2, . . ., every particle generates an independent number of offspring having distribution X with E[X] = 1, Var(X) = σ^2 < ∞, and each offspring moves to a site randomly chosen from the 2d + 1 sites having distance less than or equal to 1 from the site of its parent. We say that a site has multiplicity j in the nth generation if there are exactly j particles from the nth generation at that site. Starting the process from a single particle at the origin, let Z_n be the number of particles in the nth generation, and let M_n(j) be the total number of multiplicity j sites in the nth generation. Lalley and Zheng [4, Theorem 5] showed that, when d ≥ 3, there are constants κ_1, κ_2, . . . with Σ_{j≥1} j κ_j = 1 such that, as n → ∞,

L( Z_n/n, M_n(1)/n, M_n(2)/n, . . . | Z_n > 0 ) → L( (1, κ_1, κ_2, . . .) Z ),   (1.1)

where Z ∼ Exp(σ^2/2) is exponential with rate σ^2/2, and the convergence is with respect to the product topology (which is the same as convergence of finite dimensional distributions). We study the second order fluctuations in this limit, working with the finite dimensional distributions of (1.1) in the L^1-Wasserstein metric. More precisely, let ||·||_1 be the L^1-norm on R^r, let H_r = {h : R^r → R : |h(x) − h(y)| ≤ ||x − y||_1 for every x, y ∈ R^r} be the set of L^1-Lipschitz continuous functions with constant 1, and define the Wasserstein distance

d_W(L(X), L(Y)) = sup_{h ∈ H_r} |E h(X) − E h(Y)|.

Our first main result is as follows.
Below and throughout the paper, we use c to represent constants that do not depend on n, but possibly depend on L(X), the dimension d, and the length r of the vector, and that can differ from line to line. We also disregard the pathological case where Var(X) = 0.

Before discussing the ideas behind the proofs of these two theorems, we make a few remarks. The limiting covariance matrix Σ is a constant multiple (Var(X)/2) of the limit of the (unconditional) covariance matrix of (M_n(j) − Z_n E[M_n(j)]), j = 1, . . . , r, given by Lemma 2.9 below. We are only able to show that the limit exists, and cannot exclude the possibility that Σ is degenerate or even zero. Where Theorem 1.2 applies, it implies a rate of convergence of n^{-1/2} in Theorem 1.1. However, we present the results in this way, as it is conceptually natural and simplifies the presentation of the proofs. An interesting open question from our study is: what is the minimal dimension for which the convergence in Theorem 1.2 occurs? The assumption that d ≥ 7 stems from Lemma 2.9, giving the behavior of the covariance matrix. It may be possible to sharpen some estimates, e.g., (2.21), but there are others that may be sharp and still require d ≥ 7, for example, the upper bound of (2.23), which must be o(1) (as n → ∞) for our arguments to go through.

We now turn to a discussion of the proofs, where a key result is the following rate of convergence for Yaglom's theorem from [7]. With this result in hand, the basic intuition behind Theorems 1.1 and 1.2 is that the number of multiplicity j sites in generation n is approximately a sum of a random number of conditionally independent random variables. The number of summands is Z_m, for a well-chosen m < n, and a given summand represents the contribution from only descendants of a single generation m particle. If m is large, then Theorem 1.3 implies that Z_m will be roughly exponential with large mean, hence approximately geometrically distributed; and, if d ≥ 3, the summands approximate the true variable because the random walk is transient, and most low occupancy sites consist of particles descended from exactly one individual in generation m; see Lemma 2.5. Thus the vector of Theorem 1.1 is approximately a geometric sum with small parameter, which, by Rényi's Theorem for geometric sums, is close to its mean times an exponential. For Theorem 1.2, the idea is similar, but the summands need to be centered for the Laplace limit to arise. A first thought is to subtract the mean of the summands, but in fact we must subtract the mean times a variable with mean one that is highly correlated to the summand to get the correct scaling; see Lemma 2.9. In order to obtain rates of convergence for the Laplace distribution, we prove a general approximation result for random sums, Theorem 2.8 below. The approach of [4] used to obtain (1.1) uses a similar idea, but the conditioning is different from ours, and does not seem amenable to obtaining the error bounds necessary for Theorems 1.1 and 1.2. Here we use couplings via an explicit construction of (M_n(j) | Z_n > 0), which is an elaboration of [5], along with Theorem 1.3, to evaluate the bounds necessary to obtain Theorems 1.1 and 1.2. We also use the explicit representation in a novel way to compare two different conditionings appearing in our argument; see Lemma 2.3. The use of the Wasserstein metric is essential in our argument, even if Kolmogorov bounds are the eventual goal (via standard smoothing arguments). The organization of the paper is as follows.
In the next section we provide constructions and lemmas used to prove Theorems 1.1 and 1.2. In Section 2.1 we state and prove our general Laplace approximation result, and then apply it to prove Theorem 1.2. Section 3 gives some auxiliary multivariate normal approximation results that are adapted to our setting, and used in the proofs. Constructions, moment bounds and proofs To prove Theorems 1.1 and 1.2, we first need to relate L (Z m |Z n > 0) to L (Z m |Z m > 0) (in Lemma 2.3 below). We use the size-biased tree construction from [5]. Size-biased tree construction. Assume that the tree is labeled and ordered, so if w and v are vertices in the tree from the same generation and w is to the left of v, then the offspring of w is to the left of the offspring of v, too. Start in Generation 0 with one vertex v 0 and let it have a number of offspring distributed according to a size-biased version X s of X, so that Pick one of the offspring of v 0 uniformly at random and call it v 1 . To each of the siblings of v 1 , attach an independent Galton-Watson branching process with offspring distribution X. For v 1 proceed as for v 0 , that is, give it a size-biased number of offspring, pick one at uniformly at random, call it v 2 , attach independent Galton-Watson branching process to the siblings of v 2 and so on. For 1 j n, denote by L n,j and R n,j the number of particles in generation n of this tree that are descendants of the siblings of v j to the left and right (excluding v j ). This gives rise to the size-biased tree. From this construction we define another tree T n as follows. For 1 j n, let R n,j be independent random variables with L (R n,j ) = L (R n,j |L n,j = 0). Start with a single "marked" particle in generation 0, represented as the root vertex of T n , and give this particle R n,1 offspring. Then choose the leftmost offspring of the marked particle as the generation 1 marked particle, and give it R n,2 offspring. To continue, the generation j marked particle is the leftmost offspring of the marked particle in Generation j − 1, and has R n,j+1 offspring. In addition, every non-marked particle has descendants according to an independent Galton-Watson tree with offspring distribution L (X). Let T n be the tree generated in this way to generation n. The key fact from [5, Theorem C(i)], is that the distribution of T n is the same as the entire tree created from an ordinary Galton-Watson process with offspring distribution X conditional on non-extinction up to Generation n. Moreover, this tree and its marked particles can be closely coupled to the size-biased tree and the v j "spine" offspring. Now, let A k,j = {L k,j = 0} and let X m,j , X m,j be random variables that are independent of each other and the size-biased tree constructed above, such that Before proving the lemma, we state a key result for controlling the conditioning on non-extinction is the following second order version of "Kolmogorov's estimate" found in [11,Display between (5) and (6)]. where the first inequality is a union bound, the equality is by independence of lineages, the second inequality is because L (L (i) m,j ) = L (Z m−j ), and the last is by Lemma 2.2. To bound further, we have that, conditional on X j , I j , using the independence of lineages, where we have used Lemma 2.2. The function g(x) = xa x is 1-Lipschitz on (0, ∞) for any a < 1, so we can apply Theorem 1.3 and use the fact that L (L where we have used Lemma 2.2, and we now choose K so that c log(n−m) 2 /(n−m) < 1/2 whenever n − m > K. 
Using this bound and combining the last three displays, we have Now noting that given I j and X j , R m,j is independent of A m,j and A n,j , we easily find A union bound implies Combining this with (2.2) shows (ii). For (iii), we show that for k = m, n, which easily implies the result. To show (2.3), we use the following correlation inequality: if f is non-decreasing and g is non-increasing, then Cov(f (X), g(X)) 0. Note that where the first line is obvious and the second is because of conditional independence. But we can couple I j to X j in such a way that it is non-decreasing with X j , and thus the correlation inequality implies Combining the last two displays implies (2.3) for k = m. For k = n, using similar ideas, and the second factor decreases with L m,j and so Finally, But again we can couple (X j , I j ) such that I j is non-decreasing in X j , and thus and then (iv) easily follows from (iii) and To continue, we need a lemma giving some moment information for variables in the BRW. Lemma 2.4. Assume the definitions and constructions above, and let Y n;m denote the number of particles in generation n of the BRW that occupy a site with another generation n particle that has a different ancestor at generation m. We have the following: , n = 1, 2, . . . has a limit and the bound follows since We give a construction of the critical BRW, building from the size-biased tree construction. BRW construction. To construct M n (j) conditional on Z n > 0, first generate T n from the size-bias tree section above. Since this tree is distributed as the Galton-Watson tree given Z n > 0, we construct the conditional BRW by attaching a random direction to each offspring, chosen uniformly and independently from the 2d + 1 available directions for the nearest-neighbor random walk. It is obvious that this "modified" BRW process has the same distribution as the original conditioned on non-extinction to generation n. For the modified process, letẐ k denote the size of generation k andM n (j) be the number of multiplicity j sites in generation n, then, in particular, we have L Z m , Z n , (M n (j)) r j=1 |Z n > 0 = L Ẑ m ,Ẑ n , (M n (j)) r j=1 . Modified BRW construction. A key to our approach is the following lemma that shows the cost of replacingM n (i) by a sum of a random sum of conditionally independent variables. GivenẐ m , for i = 2, . . . ,Ẑ m , let Z i n,m be the number of generation n offspring of the ith particle in generation m of the modified BRW construction; here the labelling is left to right (so particle 1 is always the marked particle), and note these are distributed as the sizes of the (n − m)th generations of i.i.d. Galton-Watson trees with offspring distribution L (X). Let also M i n,m (j) be the number of sites having exactly j generation-n descendants from the generation m particle labeled i in the critical BRW construction above, where the counts ignore particles descended from other generation m particles at those sites. Also let (Z 1 n,m , M 1 n,m (j) be an independent copy of (Z n−m , M n−m (j)). Note that givenẐ m , M 1 n,m (j), . . . , MẐ m n,m (j) are i.i.d. Lemma 2.5. For the variables described above and m < n, Proof. 
The differences between the two variables are (i) multiplicity j sites with more than 1 ancestor from generation m, (ii) multiplicity k > j sites with exactly j particles descended from some single generation m particle, (iii) the number of multiplicity j sites with only descendants of the first particle of generation m, and (iv) M 1 n,m (j Before proving Theorem 1.1, we state and prove a simple lemma. Lemma 2.6. For any nonnegative random variable Y on the same space as (Z j ) 0 j n and m < n, we have Proof. Using Kolmogorov's approximation, where the hat-couplings are those in the BRW description above and where recall the Z i n,m are the number of generation n offspring of the ith particle in generation m of the modified BRW construction, which are distributed as the sizes of the (n − m)th generations of i.i.d. Galton-Watson trees with offspring distribution L (X). Laplace distribution approximation The centered multivariate symmetric Laplace distribution is a cousin to the Gaussian distribution that arises in a number of contexts and applications; see [3] for a book length treatment of this distribution. The r-dimensional distribution is denoted SL r (Σ), where the parameter Σ is an r × r positive definite matrix. In general, its law is the same as that of √ EZ, where E ∼ Exp(1) and Z is a centered multivariate normal vector with covariance matrix Σ. The covariance matrix of SL r (Σ) is Σ, which can thus be thought of as a scaling parameter. The characteristic function is evidently (1), |x| and the multivariate density is given in [3, (5.2.2)] in terms of modified Bessel functions of the 3rd kind. The symmetric Laplace distribution arises as the limit of a geometric sum. More precisely, we have the following theorem, which is elementary, using, for example, characteristic functions. Theorem 2.7. Let N p ∼ Geo(p) be independent of X 1 , X 2 , . . ., which are i.i.d. rdimensional random vectors having mean zero and covariance matrix Σ. Then as p → 0, Here we provide a rate of convergence to a generalization of Theorem 2.7 in a metric amenable to our setting; see also [8] for a related result when r = 1. Theorem 2.8. Let M 1 be a random variable with mean µ > 1, independent of X 1 , X 2 , . . ., which are i.i.d. r-dimensional random vectors with zero mean, covariance matrix Σ = (Σ ij ), and finite third moments. Then there is a constant C r depending only on r such that Proof. Let E ∼ Exp(1), N ∼ Geo(µ −1 ) and Z = Z Σ be a centered multivariate normal vector with covariance matrix Σ, with the three variables independent and independent of M and the X i . Then, since L ( We use below that if X, Y, Z are random elements defined on the same space, then Conditioning on M , applying (2.10), and using independence, we find that where we have used Jensen's inequality in the last line. To bound (2.8), we use (2.10) and the general fact that for fixed z ∈ R d , and random Now use the dual definition of Wasserstein distance, see, for example [9] or [12], to choose a coupling between M and N such that d W (L (M ), L (N )) = E |M − N | . Using Thus, conditioning on Z, using (2.10) and independence, we have that (2.12) implies where the last inequality is because To bound (2.9), we again use (2.10), (2.12), and independence, to find since |1 + 1/(µ log(1 − µ))| 1 for µ > 1. Finally, Then the limits (2.17) Remark 2.10. 
As a check on the limiting constant and linear growth of the fourth moment of (M n (j) − µ n (j)Z n ) given by (2.17), Theorem 1.2 suggests (assuming appropriate uniform integrability) that for E ∼ Exp(1) and Z j ∼ N(0, Σ jj ), The left hand side of (2.18) is equal to be defined as follows: The first (respectively, second) coordinate is the number of sites with j (respectively, k) particles in generation n descended from particle i in generation 1, and the third coordinate is the number of offspring in generation n descended from generation 1. Note that, given Z 1 , these are i.i.d. copies of (M n−1 (j), M n−1 (k), Z n−1 ). Now write The first term above is the main contribution, and the second term is a small error. For the first term, Similarly, Finally, cn and, from the proof of [4, Proposition 21], that |µ n ( ) − µ n−1 ( )| n −d/2 , we can collect the work above to find To bound the errors, first note (2.20) Similarly, and the same inequality holds with |e (2) n (k)| on the left hand side. For α > 0, to be chosen later, we bound Since d 7, n 1 n 1−d/3 < ∞, and therefore (A n (j, k)) n 1 is a Cauchy sequence; denote its limit by Σ jk , and observe that To prove the second assertion, we fix j and drop it from the notation, e.g., writing M n for M n (j). We follow the strategy above, but now there are higher moments, which we denote by where the second equality follows, similar to the argument above, from where a k is 1, −4, or 6 as appropriate. Bounding these similar to (2.20) and (2.21) above, using that E[X 5+ 18/(d−6) ] < ∞, we have that for α > 0 and β = 18/(d − 6) + 1, n ]n −β(1+α) + n 3(1+α)−d/2 ) c(n 3−βα + n 3(1+α)−d/2 ). Choosing α = (d − 6)/6 − ε, for ε > 0 small enough that αβ > 3, and noting that d 7, we have for δ = min{αβ − 3, 3ε} > 0. Thus we find the fourth moment (2.22) is equal to (2.24) for δ = min{δ, 3 − d/2} > 0. As before, we expand the random sums in the expectations above and then simplify. We cover in detail only the middle term, which is the most involved, and just write the final expressions for the other terms. Write Σ {i,j,k} for sums over distinct indices. For the middle term, For a quick parity check of this formula, note that for non-negative integer z, Similar arguments shows Plugging these into (2.24), we find (it's easiest to compute the coefficients for each of σ 2 , γ 3 , γ 4 ; the last two are zero) that Therefore, where Σ n = (A n−m (j, k)) j,k with A n−m (j, k) = Cov( M 1 n,m (j), M 1 n,m (k)). Using the coupling definition of Wasserstein distance and Lemma 2.5, we can bound (2.25) by noting Similarly, (2.26) is bounded from where we used Lemma 2.4 in the second inequality, and part (iii) of Lemma 2.1 in the second to last. Noting our Wasserstein distance is with respect to L 1 distance, summing over j and k, and using the inequalities above shows that ( Multiplying this by the n −1/6 factor coming from the powers of µ in the bound from Theorem 2.8 gives a term of order n − 2d−9 6(2d+1) . For the remaining (nontrivial) term, the triangle inequality implies cn − 2d−9 6(2d+1) , and putting these bounds into Theorem 2.8 implies that (2.28) is bounded by cn − 2d−9 6(2d+1) . Finally, we bound (2.29). Using the representation L ( √ EZ) = SL r (Cov(Z)) for E distributed as an exponential with rate one, independent of Z, an r-dimensional multivariate normal, we apply Lemma 3.3 below and Lemma 2.9, and noting that d 7, cn − 2d−9 6(2d+1) , and combining the bounds above yields the theorem. 
CLT with error In this section we prove a multivariate CLT with Wasserstein error for sums of i.i.d. variables that is adapted to our setting. The proof is relatively standard using Stein's method, with the complications that we are working in the Wasserstein (rather than smoother test function) metric, and that we do not demand the covariance matrix be non-singular. In what follows, denote by |·| the Euclidean L 2 -norm. For a k-times differential Clearly, for any vectors a 1 , . . . , a k , x ∈ R r , we have r i1,...,ir=1 a 1,i1 · · · a k,i k ∂ k f (x) ∂x i1 · · · ∂x ir |a 1 | · · · |a k |M k (f ). Var X 1 = Σ = (Σ uv ) 1 u,v r . Let W = n −1/2 n i=1 X i , and let Z have a standard multivariate normal distribution, and let Z Σ = Σ 1/2 Z. Then, for any differentiable function Proof. We first replace h by h ε , which is defined as where Z has a standard multivariate normal distribution, independent of all else. Applying (3.1) to the quantity inside the second expectation, we have r u,v,w=1 Now, let Y 1 be an independent copy of Y 1 , and note that we have Σ uv = n E(Y 1,u Y 1,v ). Applying again (3.1) to the quantity inside the second expectation, we have r u,v,w=1 Subtracting one from the other, it follows that yields the final bound. We also have the following easy corollary to fit our setup above. . Proof. By equivalence of norms in R r , there is a constant q r depending only on r such that for any a ∈ R r , q −1 r |a| a 1 q r |a|. Therefore, for any a, x ∈ R r , r i=1 a i |a| ∂h(x) ∂x i a 1 |a| q r , and E |X| 3 q 3 r E X 3 1 . The result now follows from Theorem 3.1. Finally, we state a simple lemma used to compare centered multivariate normal distributions. Lemma 3.3. Let Σ and Σ be two non-negative semi-definite (r × r) matrices for r 1. Let X = (X 1 , . . . , X r ), respectively Y = (Y 1 , . . . , Y r ), be a centered multivariate random normal vector with covariance matrix Σ, respectively Σ . Then for some constant C that only depends on r. Proof. Using Stein's identity for the multivariate normal, for any twice-differentiable function f , we have The result now follows by using the smoothing argument and Stein's method as in the proof of Theorem 3.1; we omit the details.
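As a closing numerical illustration, the geometric-sum characterization of the symmetric Laplace distribution used in Section 2.1 (Theorem 2.7) is easy to check by simulation. The sketch below is not part of the proofs; the covariance matrix, the parameter p, the Gaussian choice of summands and the sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_sum_sample(p, cov, size):
    """Draw sqrt(p) * sum_{i=1}^{N} X_i with N ~ Geometric(p) and X_i i.i.d. N(0, cov)."""
    dim = cov.shape[0]
    out = np.empty((size, dim))
    for k in range(size):
        n = rng.geometric(p)  # number of summands, support {1, 2, ...}, mean 1/p
        x = rng.multivariate_normal(np.zeros(dim), cov, size=n)
        out[k] = np.sqrt(p) * x.sum(axis=0)
    return out

def laplace_sample(cov, size):
    """Draw sqrt(E) * Z with E ~ Exp(1) and Z ~ N(0, cov), i.e. a sample from SL_r(cov)."""
    e = rng.exponential(1.0, size=size)
    z = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=size)
    return np.sqrt(e)[:, None] * z

cov = np.array([[1.0, 0.3], [0.3, 2.0]])
gs = geometric_sum_sample(p=0.01, cov=cov, size=5000)
lp = laplace_sample(cov, size=5000)
# Both empirical covariance matrices should be close to cov as p -> 0.
print(np.cov(gs.T))
print(np.cov(lp.T))
```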
6,105.4
2019-09-04T00:00:00.000
[ "Mathematics" ]