Novel Function and Intracellular Localization of Methionine Adenosyltransferase 2β Splicing Variants*

Human methionine adenosyltransferase 2β (MAT2β) encodes two major splicing variants, V1 and V2, which are differentially expressed in normal tissues. Both variants are induced in human liver cancer and positively regulate growth. The aim of this work was to identify interacting proteins of V1 and V2. His-tagged V1 and V2 were overexpressed in Rosetta pLysS cells, purified, and used in a pulldown assay to identify interacting proteins from lysates of the human colon cancer cell line RKO. The eluted lysates were subjected to Western blot and in-solution proteomic analyses. HuR, an mRNA-binding protein known to stabilize the mRNA of several cyclins, was identified as interacting with V1 and V2. Immunoprecipitation and Western blotting confirmed their interaction in both liver and colon cancer cells. These variant proteins are located in both the nucleus and the cytoplasm in liver and colon cancer cells and, when overexpressed, increased the cytoplasmic HuR content. This led to increased expression of cyclin D1 and cyclin A, known targets of HuR. When endogenous expression of V1 or V2 was reduced by small interference RNA, cytoplasmic HuR content fell and the expression of these HuR target genes also decreased. Knockdown of cyclin D1 or cyclin A blunted, whereas knockdown of HuR largely prevented, the ability of V1 or V2 overexpression to induce growth. In conclusion, MAT2β variants reside mostly in the nucleus and regulate HuR subcellular content to affect cell proliferation.

Methionine adenosyltransferase (MAT) is essential to life, because it is the only enzyme that catalyzes the formation of S-adenosylmethionine, the principal biological methyl donor (1). In mammals, two different genes, MAT1A and MAT2A, encode two homologous MAT catalytic subunits, α1 and α2. MAT1A is expressed mostly in the liver, while MAT2A is widely distributed. In adult liver, increased expression of MAT2A is associated with rapid growth or de-differentiation. Until recently, the MAT2β gene was thought to encode the regulatory subunit (β) that associates only with the MAT2A-encoded enzyme (MATII) to lower its Km and Ki for methionine and S-adenosylmethionine, respectively. MAT2β expression is induced in cirrhosis and hepatocellular carcinoma (HCC). Importantly, increased MAT2A and MAT2β expression offers liver cancer cells a growth advantage (2,3). In a recent publication (4), we described novel functions of MAT2β that greatly increased its importance in biology. To study transcriptional regulation of MAT2β, we cloned and characterized its 5′-flanking region, uncovered multiple alternative splicing variants, and termed the two major variants V1 and V2. V1 encodes a 334-amino acid protein beginning MVGREKELSIHFVPGSCRLVE…. The alternatively spliced V2 utilizes a different first exon lying further upstream in the genomic sequence to encode a hypothetical 323-amino acid isoform beginning MPEMPEDMEQ… (4). The reading frames of both variants converge after this point and are identical. We examined their expression pattern in human tissues and HCC and the effect of tumor necrosis factor α on their expression. MAT2β is expressed in most but not all tissues, and the two variants are differentially expressed. The mRNA levels of both variants are markedly increased in HCC. Tumor necrosis factor α, which induces MAT2A in HepG2 cells, also induced V1 (but not V2) expression.
Both variants enhance growth of liver cancer cell lines. Reduced expression of V1 (but not V2) sensitized HepG2 cells to tumor necrosis factor α-induced apoptosis. Reduced expression of V1 also led to apoptosis in RKO cells, a human colon cancer cell line. The aim of the current work was to identify proteins that interact with V1 and V2 to better understand how these variant proteins regulate growth. Here we report the novel findings that both variants are highly expressed in the nucleus and interact with HuR, an mRNA-binding protein known to stabilize the mRNA of cyclins (5), to affect its subcellular content and ultimately the expression of its target genes.

EXPERIMENTAL PROCEDURES

Cell Culture and Materials-Human liver cancer cell lines HepG2 and HuH-7, and the human colon cancer cell line RKO, were obtained from the Cell Culture Core of the University of Southern California (USC) Research Center for Liver Diseases and grown according to instructions provided by the American Type Culture Collection (Rockville, MD). All reagents were of analytical grade and obtained from commercial sources.

HCC and Adjacent Non-cancerous Tissues-HCC and adjacent non-cancerous tissues were obtained from the USC Liver Repository. The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki as reflected in a priori approval by the Keck School of Medicine USC human research review committee.

Expression and Purification of MAT2β V1/V2 Proteins-Forward primers (V1, 5′-GCGGAATTCGTGGGGCGGGAGAAAGAG-3′; V2, 5′-TCTGAATTCCCTGAAATGCCAGAGGAC-3′) and reverse primer (5′-AGACTCGAGCTAATGAAAGACCGTTTG-3′) were used to PCR-amplify MAT2β V1 or V2 full-length cDNA. To express recombinant human MAT2β V1 or V2, the full-length cDNA was cloned into the pET-28a(+) expression vector as a His-tagged fusion protein via NdeI and XhoI sites. The accuracy of the constructs was confirmed by DNA sequencing. MAT2β V1 or V2 protein was expressed in Rosetta pLysS cells (Novagen, San Diego, CA) with isopropyl 1-thio-β-D-galactopyranoside induction for either 4 h at 37°C or 16 h at 16°C. The proteins were separated from cellular debris by sonication and centrifugation. The resultant proteins were purified on nickel-nitrilotriacetic acid beads (Qiagen, Valencia, CA). After elution with excess imidazole, MAT2β V1 expressed at 37°C and MAT2β V2 expressed at 16°C were further purified on a Superdex 75 size-exclusion column (Amersham Biosciences). The protein samples were concentrated and then bound to nickel-nitrilotriacetic acid-agarose beads. 5 µl of beads per sample was boiled in 2× SDS gel loading buffer and run on an SDS-PAGE gel. Proteins were visualized with Coomassie Blue staining.

Pulldown Assay with MAT2β V1/V2 Proteins-MAT2β V1 expressed at 37°C and MAT2β V2 expressed at 16°C were used for the pulldown assay. His-tagged Myc 1-93 was used as an irrelevant His-tagged protein control. RKO cells were lysed in binding buffer (50 mM Tris, pH 7.0, 150 mM NaCl, 2 mM MgCl2, 2 mM CaCl2, 20 mM imidazole, 0.5% Triton X-100, and protease inhibitor). 20 mg of protein in RKO cell lysate was incubated with 100 µl of MAT2β V1/V2 beads at 4°C for 2 h. The beads were washed with binding buffer, and the bound proteins were eluted with elution buffer (20 mM Tris, pH 8.0, 500 mM NaCl, 6 M urea, and 1% Triton X-100). Purified V1 and V2 proteins and their binding proteins were compared by silver staining of SDS-PAGE gels.
In-Solution Digestion and Liquid Chromatography-Tandem Mass Spectrometry-Mass spectrometry was performed as described (6). Proteins were digested with trypsin directly in solution. Peptides were analyzed by capillary electrospray ionization-liquid chromatography/tandem mass spectrometry on a linear ion trap LTQ mass spectrometer (Thermo Electron, Inc.). Data were analyzed using Bioworks 3.2, utilizing the SEQUEST algorithm and Sage-N Sorcerer to determine cross-correlation scores between acquired spectra and NCBI protein FASTA databases. The following parameters were used for the TurboSEQUEST search: molecular weight range, 0–5000; threshold, 1000; monoisotopic; precursor mass, 1.4; group scan, 10; minimum ion count, 20; charge state, auto; peptide, 1.5; fragment ions, 0; and static amino acid modifications, Cys 57.05 and Met 15.99. Results were filtered using SEQUEST cross-correlation scores of >2.0 for +1 ions, >3.0 for +2 ions, and >3.5 for +3 ions (a sketch of this filtering step follows this section).

Cell Transfection and Gene Expression Analysis-The expression plasmids pcDNA3.1D/V5-His/MAT2β V1 (V1 expression vector) and pcDNA3.1D/V5-His/MAT2β V2 (V2 expression vector) have been described previously (4). To overexpress MAT2β variant proteins in HuH-7 cells, 90% confluent HuH-7 cells were transiently transfected with the V1 or V2 plasmids using Lipofectamine 2000 (Invitrogen) following the manufacturer's instructions. In some experiments HuH-7 cells were co-transfected with V1 or V2 expression vector and 30 nM siRNA against HuR (CAC GCU GAA CGG CUU GAG GUU (sense) and CCU CAA GCC GUU CAG CGU GUU (antisense)), 60 nM siRNA against cyclin A (Santa Cruz Biotechnology, Santa Cruz, CA), 40 nM siRNA against cyclin D1 (Santa Cruz Biotechnology), or scrambled control using Lipofectamine 2000 (Invitrogen) for 48 h according to the manufacturer's protocol. In separate experiments, RKO cells were transfected with siRNA against V1, V2, or scrambled control as we described (4) for 48 h. Cell lysates were prepared 48 h after transfection for further experiments. Total cell RNA for quantitative real-time PCR was extracted using the Total RNA Isolation Kit (BioMega, San Diego, CA) and subjected to reverse transcription by Moloney murine leukemia virus reverse transcriptase (Invitrogen). A total of 2 µl of reverse transcription product was subjected to quantitative real-time PCR analysis. The primers and TaqMan probes for HuR, cyclin D1, and cyclin A were purchased from ABI (Foster City, CA). Hypoxanthine phosphoribosyltransferase 1 was used as a housekeeping gene as described (7). The quantitative real-time PCR was performed as described previously (8).

Immunoprecipitation-HuH-7 cells were transfected with V1 or V2 expression vectors, or empty vector control, each containing a V5 tag, for 48 h. Cell lysates were prepared by scraping cells into radioimmune precipitation assay buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% (v/v) Nonidet P-40, 0.1% (w/v) SDS, and protease inhibitor mixture tablets (Roche Molecular Biochemicals)), followed by centrifugation at 12,000 rpm for 30 min. Protein A/G beads (Santa Cruz Biotechnology) were used to clear the cell lysates for 1 h at 4°C. A total of 500 µg of cell lysate was immunoprecipitated with 2 µg of anti-V5 (Invitrogen) or normal IgG antibody (Santa Cruz Biotechnology) for 16 h at 4°C on a rotator. Protein A/G beads were added and incubated for another 4 h. Beads were washed three times with radioimmune precipitation assay buffer and subjected to SDS-PAGE.
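The charge-state-dependent Xcorr filter applied above reduces to a small amount of code. Below is a minimal sketch, assuming a hypothetical list-of-dictionaries format for the peptide-spectrum matches; only the thresholds (>2.0, >3.0, >3.5 for +1, +2, +3 ions) come from the text.

```python
# Minimal sketch of the charge-state-dependent Xcorr filter described above.
# The PSM record format is a hypothetical stand-in; thresholds follow the text.
XCORR_THRESHOLDS = {1: 2.0, 2: 3.0, 3: 3.5}

def passes_filter(psm):
    """psm: dict with 'charge' (int) and 'xcorr' (float)."""
    threshold = XCORR_THRESHOLDS.get(psm["charge"])
    if threshold is None:  # charge states outside +1..+3 are not scored here
        return False
    return psm["xcorr"] > threshold

def filter_psms(psms):
    return [p for p in psms if passes_filter(p)]

if __name__ == "__main__":
    example = [
        {"peptide": "MVGREK", "charge": 2, "xcorr": 3.4},  # kept
        {"peptide": "ELSIHF", "charge": 1, "xcorr": 1.8},  # dropped
    ]
    print(filter_psms(example))
```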
Western blot was performed for HuR with anti-HuR antibody (Santa Cruz Biotechnology). The reverse-direction immunoprecipitation was carried out in HuH-7 cells overexpressing V1 or V2 and in HepG2 and RKO cells. Cell lysates were processed for immunoprecipitation with anti-HuR antibody, and blots were probed with antibody against V5 (Invitrogen) or MAT2β (Novus Biologicals, Littleton, CO).

Nuclear and Cytosolic Protein Separation and Western Blot Analysis-Nuclear and cytosolic proteins were separated with an NE-PER Nuclear and Cytoplasmic Extraction Kit (Pierce) following the manufacturer's instructions. The protein concentrations were determined using a Bio-Rad Protein Assay kit. Equal amounts of protein were resolved in 12% SDS-polyacrylamide gels and electrophoretically transferred to nitrocellulose membranes. Blots were probed with antibodies against V5, MAT2β, and HuR. Histone 3 and α-tubulin (Cell Signaling) were used as loading controls for nuclear and cytosolic proteins, respectively. Western blot analysis for MAT2β was also done using whole cell lysates from HuH-7 cells overexpressing V1, V2, or empty vector control, and from HCC and adjacent non-cancerous tissues, using actin as loading control. Blots were developed by enhanced chemiluminescence.

Immunofluorescence and Confocal Microscopy-HuH-7 cells were plated in 24-well plates containing coverslips. 48 h after transfection with V1 or V2 overexpression vectors or empty vector, cells were fixed and permeabilized with 4% paraformaldehyde for 15 min at room temperature and ice-cold methanol for 15 min at −20°C. A 5% goat serum solution in phosphate-buffered saline was used to block the cells for 1 h at 37°C. Cells were incubated for 1 h at room temperature with each primary antibody and then with an Alexa Fluor 488-conjugated secondary antibody (Invitrogen) for 1 h at room temperature. Nuclei were stained with Hoechst 33342 (Sigma) for 5 min at room temperature. Slides were mounted with Dako fluorescent mounting medium and analyzed with an Eclipse TE300 confocal microscope (Nikon Instruments Inc., Melville, NY).

Cell Proliferation Assay-HuH-7 cells were plated on a 96-well plate (~30% confluent) and co-transfected with V1 or V2 expression vector and 30 nM siRNA against HuR, 60 nM siRNA against cyclin A, 40 nM siRNA against cyclin D1, or scrambled control using Lipofectamine 2000 (Invitrogen) for 48 h according to the manufacturer's protocol. EdU (5-ethynyl-2′-deoxyuridine, an alternative to bromodeoxyuridine for measuring new DNA synthesis) incorporation was measured with the Click-iT EdU Microplate Assay kit (Invitrogen).

Statistical Analysis-Data are given as mean ± S.E. Statistical analysis was performed using Student's t test for comparison of paired samples and analysis of variance followed by Fisher test for multiple comparisons. Significance was defined by p < 0.05. A brief sketch of these tests follows.
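As a rough illustration of the tests named under Statistical Analysis, the sketch below runs a paired Student's t test and a one-way ANOVA with pairwise follow-up comparisons in SciPy. The data are invented, and the post hoc step is simplified to unpooled pairwise t tests rather than a textbook Fisher LSD.

```python
# Illustrative statistics sketch (invented data, not from the paper).
from itertools import combinations
from scipy import stats

ALPHA = 0.05  # significance level used in the paper

def paired_test(before, after):
    """Student's t test for paired samples."""
    t, p = stats.ttest_rel(before, after)
    return p < ALPHA, p

def anova_with_pairwise(groups):
    """One-way ANOVA; probe pairs only when the omnibus test is significant
    (a simplified stand-in for Fisher's LSD)."""
    f, p = stats.f_oneway(*groups.values())
    pairwise = {}
    if p < ALPHA:
        for a, b in combinations(groups, 2):
            pairwise[(a, b)] = stats.ttest_ind(groups[a], groups[b]).pvalue
    return p, pairwise

if __name__ == "__main__":
    print(paired_test([1.0, 1.2, 0.9, 1.1], [1.5, 1.7, 1.4, 1.6]))
    groups = {"vector": [1.0, 1.1, 0.9], "V1": [1.5, 1.6, 1.4], "V2": [1.3, 1.2, 1.4]}
    print(anova_with_pairwise(groups))
```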
FIGURE 1. Purification of MAT2β V1 and V2 and identification of interacting proteins. A, V1 and V2 were overexpressed in Rosetta pLysS cells and purified by nickel-nitrilotriacetic acid. The SDS-PAGE gel illustrates His-tagged V1 and V2 proteins expressed with isopropyl 1-thio-β-D-galactopyranoside induction at 37°C or 16°C. 5 µl of V1 or V2 beads was heated in SDS loading buffer and run on a 12% SDS-PAGE gel. Proteins purified by nickel-nitrilotriacetic acid were visualized with Coomassie Blue staining. B, Western blot analysis for the His tag was performed to confirm His-tagged V1 protein expressed at 37°C and V2 protein expressed at 16°C. C, SDS-PAGE gel demonstrating proteins binding to V1 or V2. Pulldown experiments were performed using RKO cell lysate and His-tagged V1 or V2. 20 µg of binding proteins and 5 µl of V1 or V2 beads were separated by 12% SDS-PAGE and visualized with silver staining. D, Western blot analyses of MAT2A- and MAT2β-encoded proteins, which were present in the eluted proteins after the V1 or V2 pulldown assay but not in the control (His-tagged c-Myc expression vector).

TABLE 1. Proteins interacting with V1 and V2. Proteins were identified with capillary electrospray ionization-liquid chromatography/tandem mass spectrometry. The Xcorr score was calculated using SEQUEST.

RESULTS AND DISCUSSION

Purification of MAT2β V1 and V2 and Identification of Interacting Proteins-To express recombinant human MAT2β V1 or V2, the full-length cDNA was cloned into the expression vector pET-28a(+) as a His-tagged fusion protein. Fig. 1A shows Coomassie Blue staining of purified proteins obtained from bacteria overexpressing either V1 or V2, grown at 37°C or 16°C, subjected to SDS-PAGE. In most cases a higher temperature increases the level of protein expression but decreases protein solubility. At present we do not know why the two variant proteins differ in their optimum growth temperature. MAT2β V1 expressed at 37°C and MAT2β V2 expressed at 16°C were further purified by gel-filtration chromatography. Imidazole gradient elution and an ion-exchange column did not further improve MAT2β V1 purity. The purity of MAT2β V1 was 92%, and the purity of V2 exceeded 99%, as measured by densitometry of silver-stained gels. Fig. 1B confirms the presence of the V1 and V2 proteins on Western blot analysis using antibodies against the His tag. These purified V1 and V2 proteins were then used as bait to identify interacting proteins from total RKO cell lysates. Fig. 1C shows silver staining of the proteins obtained. The eluted lysates were subjected to Western blot analyses for MATII and the β subunit (Fig. 1D) and to in-solution proteomic analysis, performed twice. A His-tagged c-Myc overexpression vector served as control to verify the specificity of protein binding. The two in-solution proteomic analyses agreed on 60% of the identified proteins, which are listed in Table 1. Several proteins were identified in this analysis as binding both V1 and V2, including MATII (α2), MAT2β, DEAD box polypeptide 1, splicing factor 3b subunit 3, pre-mRNA cleavage factor 1, and several heat shock proteins (Table 1). The HuR mRNA-stabilizing protein, asparaginyl-tRNA synthetase, and cleavage and polyadenylation specificity factor 6 were identified as binding only to V1. Those binding only to V2 include stem cell growth factor precursor and G-protein-coupled receptor kinase-interactor 1. Experimental limitations in mass spectrometry-based proteomics methods can result in both false-positive and false-negative interactions (9). As a consequence, proteomics results must be validated using additional approaches and models. To confirm the data obtained from proteomics, we next took RKO cell lysates that had been subjected to the pulldown assay with either V1 or V2 protein (the control was His-tagged c-Myc expression vector) and performed Western blot for HuR. Fig. 2A shows that HuR interacted not only with V1 but also with V2.
To further ensure that they interact, we also overexpressed V1 or V2 (using an expression vector carrying both V5 and His tags) in HuH-7 cells (which express MAT2β minimally) for 48 h as we described (4), immunoprecipitated the cell lysate with anti-V5 or anti-HuR antibodies, and then performed Western blot for HuR or V5, respectively. Fig. 2B shows that HuR interacted with these recombinant proteins when overexpressed. To see whether there is endogenous interaction, we immunoprecipitated HuR in both RKO and HepG2 cells (both cell types express high levels of MAT2β variants), and Western blot analyses confirmed that the MAT2β-encoded protein interacts with HuR (Fig. 2, C and D).

Intracellular Localization of MAT2β Variants-Given that many of these interacting proteins are mostly nuclear, we asked whether these variants may be present in the nucleus and whether they regulate HuR subcellular content. Importantly, HuR is known to stabilize the mRNA of several cyclins (cyclin D1 and cyclin A) that are required for cell cycle progression (10). If the MAT2β variants interact with HuR and regulate its subcellular content, this may be one mechanism by which these variants regulate growth. Western blot analyses (Fig. 3A) show that, although HuH-7 cells hardly express any MAT2β, HepG2 and RKO cells express MAT2β in both the nucleus and cytoplasm, with the nuclear fraction dominating over the cytoplasmic. The localization of the MAT2β variants was explored in HuH-7 cells overexpressing V5-tagged V1 or V2 for 48 h. Western blot analyses show that, similar to the endogenous expression pattern, both variants are expressed in both the nucleus and cytoplasm with the nuclear fraction dominating (Fig. 3B), and confocal microscopy confirms the predominantly nuclear localization of V1 and V2 (Fig. 3C; green fluorescence detects V5, and nuclei are stained blue). Next we overexpressed V1 or V2 in HuH-7 cells and examined the effect on the subcellular localization of HuR. Fig. 4 shows that, when either V1 or V2 is overexpressed, there is more HuR in the cytoplasm. This is demonstrated both by Western blot analyses of nuclear and cytoplasmic fractions (Fig. 4A) and by confocal microscopy with green fluorescence detecting HuR (Fig. 4B). Densitometric analysis shows that V1 or V2 overexpression in HuH-7 cells increased cytoplasmic HuR by 44 ± 9% and 21 ± 5%, respectively (p < 0.05 versus empty vector control from three independent experiments). Conversely, knockdown of endogenously expressed V1 or V2 in RKO cells decreased cytoplasmic HuR by 65 ± 6% and 26 ± 4%, respectively (p < 0.05 versus scrambled (SC) control from three independent experiments).

FIGURE 4. A, HuR levels in cytoplasmic and nuclear extracts prepared from HuH-7 cells that were transfected with V1 or V2 expression vectors for 48 h. α-Tubulin and histone 3 were used as loading controls. B, immunofluorescent detection of HuR (green, by the Alexa Fluor 488-conjugated secondary antibody) in HuH-7 cells that were transfected with either empty vector or V1 or V2 expression vectors for 48 h. Nuclei were visualized with Hoechst staining (blue). C, effect of V1 or V2 knockdown on subcellular HuR levels. RKO cells were treated with siRNA against V1, V2, or scrambled control (SC) for 48 h, and HuR protein expression was examined in cytoplasmic and nuclear extracts as above using α-tubulin and histone 3 as loading controls for the respective compartments.
Western blots are representative of three independent experiments. D, Western blot analysis for MAT2β levels in the cell lysate of HuH-7 cells that were transfected with V1 or V2 expression vectors for 48 h as compared with paired HCC and adjacent non-transformed liver (NL) tissue. Actin served as loading control.

Taken together, these results provide compelling evidence that the level of V1 or V2 expression modulates HuR subcellular content, with higher V1 or V2 expression increasing cytoplasmic HuR content. To see whether the level of the overexpressed V1 or V2 is physiologically relevant, we compared the MAT2β protein level in HuH-7 cells overexpressing V1 or V2 to those in paired HCC and adjacent non-transformed liver tissues (Fig. 4D). The magnitude of increase in MAT2β protein level is ~150-fold in V1- or V2-overexpressing HuH-7 cells, because the baseline expression is absent. Likewise, MAT2β is not expressed in normal liver (2,4), and the level of MAT2β protein in HCC specimens is comparable to that in HuH-7 cells overexpressing V1 or V2. There is a faint MAT2β band in NL1, which may reflect the fact that MAT2β is induced in cirrhosis, which is almost always present in the setting of HCC (2).

Effect of MAT2β Expression on HuR Target Genes-We next examined the effect of V1 or V2 expression on known HuR targets, such as cyclin D1 and cyclin A (10). Overexpression of either variant in HuH-7 cells increased the mRNA levels of cyclin D1 and cyclin A by ~50% (Fig. 5A); conversely, knockdown of V1 or V2 in RKO cells decreased the mRNA levels of both cyclin A and cyclin D1, with the inhibitory effect much more pronounced on cyclin A (Fig. 5B).

The Inductive Effect of V1 or V2 on Expression of Cyclins Requires Normal HuR Content-To see whether the inductive effect of V1 and V2 on the expression of cyclins requires HuR, HuH-7 cells were co-transfected with V1 or V2 expression vector and siRNA against HuR. Fig. 6A shows that, when HuR expression is reduced by 50%, V1 or V2 overexpression no longer induced the expression of either cyclin A or cyclin D1.

The Inductive Effect of V1 or V2 on Growth Requires Normal Cyclin A, Cyclin D1, and HuR Expression-To see whether the increase in cyclin A, cyclin D1, and cytoplasmic HuR content is required for V1 or V2 to exert its growth-inductive effect, HuH-7 cells were co-transfected with V1 or V2 and siRNA against cyclin A, cyclin D1, HuR, or scrambled control. The knockdown efficiency for cyclin A and cyclin D1 was 54% and 70%, respectively. Although V1 or V2 overexpression was still able to increase growth in the presence of either cyclin A or cyclin D1 siRNA, the effect was blunted as compared with scrambled control (Fig. 6B). More importantly, when HuR was knocked down by 50%, the inductive effects of V1 or V2 on cell growth were nearly eliminated (Fig. 6B). The difference likely reflects the fact that HuR has many targets besides these two cyclins that may further contribute to decreased growth.

Summary and Speculations-Taken together, of the targets identified by proteomics thus far that interact with MAT2β variants, we have confirmed that HuR interacts with these variants and that its cytoplasmic content is increased when either MAT2β variant is overexpressed, resulting in increased cyclin D1 and cyclin A expression, consistent with our previous report of increased growth (4). Conversely, knockdown of endogenously expressed V1 or V2 reduced the cytoplasmic HuR level and the expression of HuR targets.
Our results support the conclusion that the ability of V1 and V2 to interact with HuR and modulate its subcellular content is a key mechanism for the effect these MAT2β variants have on growth. Because MAT2β is not expressed in all tissues, and indeed is not expressed in HuH-7 cells, one could argue that it may not be important in controlling growth. However, MAT2β is greatly induced in HCC, and our current findings further support that this can enhance HCC growth. Whether MAT2β is induced in other cancers has not been examined and is worth investigating to see whether this is a general mechanism to enhance cancer growth. Although both MAT2β variant proteins promote growth, they seem to differ in the magnitude of this effect. V1 exerts a stronger effect than V2 on cyclin expression and growth, which may be related to the fact that it also exerts a stronger effect in modulating cytoplasmic HuR content. Interestingly, although both variants increase growth, only V1 regulates apoptosis and c-Jun N-terminal kinase signaling (4). There are also differences in the tissue distribution of these two variants, with some tissues expressing predominantly V1, whereas others express V2 (4). Taken together, these observations suggest there are differences between these proteins, and the underlying mechanisms for these differences remain to be fully elucidated. How MAT2β variants physically interact with HuR remains to be studied. These two variants differ only at their 5′ ends. Given that both interact with HuR, the interaction is likely to lie in the region shared by both. Whether the interaction is direct or occurs as part of a complex is also not clearly established. These are areas that will need to be clarified in future investigation. In addition, there are many other interesting targets identified by proteomics that need to be further investigated. Many of these proteins are nuclear and involved in mRNA splicing, which suggests that the MAT2β variants have even more diverse functions than growth and death regulation. Given that, until very recently, the only known function of MAT2β was to regulate the enzymatic activity of MATII, this gene has come a long way to claim its significance in biology and pathobiology.
Adapted Speed Mechanism for Collision Avoidance in Vehicular Ad hoc Networks Environment

Failure to respect the safety distance between vehicles is the cause of many road accidents. This distance cannot be chosen arbitrarily, because physical rules govern how it must be calculated. The higher the speed, the longer the stopping distance, especially in a dangerous situation. Thus, the distance between two vehicles must be calculated accordingly. In this paper, we present a mechanism called Adapted Speed Mechanism (ASM) allowing the adaptation of speed to keep the necessary safety distance between vehicles. This mechanism is based on VANET network operation and Multi-Agent System integration to ensure communication and collaboration between vehicles. It is therefore necessary to perform real-time calculations to make adequate and relevant decisions.

Keywords—VANET; multi-agent systems; safety distances; stopping distance; JADE framework

I. INTRODUCTION

Keeping enough distance from the vehicle ahead maintains the balance of traffic. This is the best way to avoid a collision. In many cases, this distance is not respected. For example, on the motorway, where speeds are highest, nearly two-thirds of drivers do not always respect the safety distance, which generates several traffic problems. The use of new communication technologies, such as the infrastructure and services offered by vehicular networks (VANET), and the integration of intelligent agents can avoid such problems and improve the quality and safety of driving.

A Vehicular Ad hoc Network (VANET) is a particular type of MANET in which the mobile nodes are smart vehicles equipped with computers (on-board units, OBU), network cards, and sensors [1]. Like any other ad hoc network, vehicles can communicate with each other (for example, exchanging traffic information) or with base stations called roadside units (RSU), which can be placed all along the roads (seeking information or accessing other networks ...). VANET networks are based on communication and information exchange between vehicles (V-to-V) [2], and between vehicles and roadside units (V-to-I) (for example, signals and intersection lights) or external network elements (satellites, WiMAX, LTE...) (Figure 1). VANET networks are characterized by highly dynamic and mobile nodes, frequent changes in network topology, and very variable network density. VANETs are expected to implement a variety of wireless technologies, such as Dedicated Short Range Communications (DSRC), which is a type of Wi-Fi. Other wireless technologies include Bluetooth, cellular, satellite, and WiMAX. The main applications of VANET networks can be classified into three categories [3], [4]:

1) Applications in prevention and road safety: VANETs help to prevent collisions and warn of work on the roads, to detect obstacles (fixed or mobile), and to distribute weather information by sending warning messages. They can be used, for example, to alert a driver that an accident has occurred ahead, so that he can exercise prudence and forethought when heading toward the accident, either by changing direction or by doubling his vigilance.

2) Applications for traffic optimization and driving assistance: Car traffic can be greatly improved through the collection and sharing of data gathered by the vehicles, which becomes a technical support for drivers. For example, a car can be notified of abnormal slowdown situations [5] (traffic jams, rockslides, or road works).
3) Applications for driver and passenger comfort: Vehicular networks can also improve the comfort of drivers and passengers. This comfort is illustrated by Internet access, messaging, inter-vehicle chat, etc. [6]. Passengers in the car can play networked games, download MP3 files, send cards to friends, and access other services.

Hence, our ultimate goal is to design a mechanism that could help solve some of the most common road traffic problems, such as the safety distance between vehicles, improving road safety and making it smarter. The remainder of this paper is organized as follows: Section 2 gives an overview of Multi-Agent Systems. Section 3 describes our Adapted Speed Mechanism (ASM). Section 4 provides the simulation results. Finally, in Section 5 we conclude our results with a view on future trends.

II. AGENT AND MULTI-AGENT SYSTEMS

An agent is an autonomous physical or abstract entity that is able to act by itself and to perceive its environment. The agent can communicate with other agents, and its behavior is the result of its observations, its knowledge, and its interactions with other agents [7], [8]. The agent has its own resources and skills; it can both offer services and possibly reproduce some of them.

A multi-agent system (MAS) is a community of autonomous agents evolving in a common environment (Figure 2), according to modes of cooperation, competition, or even conflict, to achieve a global objective [9], [10]. These agents constitute a complex system whose intelligence could be described as collective. The agents in a multi-agent system have several important characteristics [11]:

- Autonomy: agents are partially independent and self-aware;
- Local views: no agent has a full global view;
- Decentralization: no agent is designated for controlling.

Multi-agent systems can manifest self-organization, self-direction, and other controlling paradigms. They can also exhibit complex behaviors even when the individual strategies of all their agents are simple. MAS tend to find the best solution for their problems without any intervention. There is a strong similarity here to physical phenomena, such as energy minimization, where physical objects tend to reach the lowest energy possible within the physically constrained world [12]. The systems also tend to prevent propagation of faults, self-recover, and be fault-tolerant, mainly due to the redundancy of components [13]. MAS are applied in the real world to graphical applications such as computer games. Agent systems have already been used in films. They are used in coordinated defense systems. Other applications include transportation [14], logistics [15], graphics, and GIS. MAS are widely advocated for use in networking and mobile technologies to achieve automatic and dynamic load balancing, high scalability, and self-healing networks. MAS also have applications in the field of artificial intelligence, where they reduce the complexity of solving a problem by dividing the necessary knowledge into subsets, associating an independent intelligent agent with each of these subsets, and coordinating the activity of these agents [16]. This is called distributed artificial intelligence.
III. ADAPTED SPEED MECHANISM (ASM)

Our mechanism aims to force the driver to drive at an adequate speed so as to keep the safety distance (Figure 3) between two vehicles and avoid a possible collision. It is based on the exchange of information between vehicles via the VANET network infrastructure and on intelligent agents located in the vehicles that handle message management and calculation.

Principle: at a time t, vehicle (B) sends its speed, its position, and its length to vehicle (A), and vice versa ((A) to (B)). Thanks to calculations based on the kinematics equations, vehicle (A) obtains the time and distance of a likely collision and then adjusts its speed to avoid it. The equations of uniformly accelerated rectilinear motion are given by [17]:

x(t) = x0 + v0 t + (1/2) a t^2
v(t) = v0 + a t

where a is the uniform rate of acceleration, x0 is the initial displacement from the origin, x(t) is the displacement from the origin at time t, v0 is the initial velocity, and v(t) is the velocity at time t. A likely collision will occur when the position of vehicle (A) reaches that of vehicle (B). Thanks to the exchanged positions, the distance separating the two vehicles can be computed at any time. Suppose vehicle (A) travels at speed vA(t); the mechanism consists of calculating the suitable deceleration so as to reach an appropriate speed while still at the safety distance, thereby avoiding a possible collision. The mechanism must therefore ensure that the separating distance never falls below the safety distance. Table I shows the recommended safety distances according to the type of road.

TABLE I. SAFETY DISTANCE ACCORDING TO SPEED LIMIT

A. Safety Distance

The safety distance is the distance that must be kept between two vehicles. It depends on the speed at which you drive. Generally, it is taken to be the distance traveled by your car in 2 seconds [18].

B. Stopping Distance

The stopping distance is the distance traveled by your vehicle between the moment you perceive the danger and the moment your vehicle is finally stopped. The stopping distance is composed of the reaction path and the braking path [19].

- Reaction path: the distance your car travels between the moment you see the danger and the moment you press the brakes. An attentive and healthy driver has a reaction time of 1 second, so the reaction path is the distance traveled at the current speed during that time.

- Braking path: the distance your vehicle travels between the moment you press the brake and the moment the car is completely stopped. Two cases must be distinguished: dry pavement and wet pavement, the latter considerably lengthening the braking distance.

The stopping distance is the sum of the reaction and braking paths. A sketch of these calculations follows.
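The following sketch makes the two distance calculations concrete. The paper's exact dry- and wet-pavement braking formulas are not recoverable from the text, so the braking path uses the standard physics d_b = v^2 / (2 mu g) with illustrative friction coefficients (mu of roughly 0.7 dry and 0.4 wet, an assumption); the reaction path uses d_r = v t_r with the 1-second reaction time given above. The last function computes the minimum uniform deceleration for vehicle (A) to match vehicle (B)'s speed before closing a given gap, which corresponds to the calculation performed in Section IV.

```python
# Sketch of the ASM distance calculations under the stated assumptions.
G = 9.81              # gravitational acceleration, m/s^2
REACTION_TIME = 1.0   # seconds, per the text
MU = {"dry": 0.7, "wet": 0.4}  # assumed friction coefficients

def stopping_distance(speed_kmh, surface="dry"):
    """Reaction path plus braking path, in meters."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    d_reaction = v * REACTION_TIME
    d_braking = v ** 2 / (2 * MU[surface] * G)
    return d_reaction + d_braking

def min_deceleration(v_a, v_b, gap):
    """Smallest deceleration magnitude (m/s^2) so that vehicle A slows to
    vehicle B's speed before closing a gap of `gap` meters (speeds in m/s)."""
    if v_a <= v_b:
        return 0.0  # no closing speed, no braking needed
    return (v_a - v_b) ** 2 / (2 * gap)

if __name__ == "__main__":
    for s in (50, 90, 130):
        print(f"{s} km/h: {stopping_distance(s, 'dry'):.1f} m dry, "
              f"{stopping_distance(s, 'wet'):.1f} m wet")
    print("a_min:", min_deceleration(30.0, 10.0, 40.0), "m/s^2")
```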
IV. SIMULATION AND RESULTS

To simulate our approach, we developed an application in Java using the JADE framework (Java Agent DEvelopment) [20]. The topology is a simple road composed of two lanes, with vehicles deployed in the different lanes. The integration of the agents in the vehicles enables the exchange of messages in real time and the necessary calculations according to the kinematics equations. We created two agents: a Vehicle (A) Agent and a Vehicle (B) Agent. Each agent is contained in a platform (container) composed of an Agent Management System (AMS) and a Directory Facilitator (DF), as shown in Figure 4. JADE agents use messages that conform to the FIPA ACL (FIPA Agent Communication Language) specifications [21]. Based on agents' properties such as cooperation, coordination, and negotiation [22], the agents form a distributed and cooperative environment that allows drivers to drive at optimal speeds to avoid possible collisions. We analyze two situations: in the first, vehicle (B) is stopped; in the second, vehicle (B) travels at a speed lower than that of vehicle (A). We seek the minimum deceleration necessary to avoid the collision. Table II shows the initial parameters. At the moment of collision, the problem is to solve the system formed by the kinematics equations of the two vehicles.

Figure 5 shows the stopping distances as a function of speed for dry and wet roads. We note that as the speed increases, the stopping distance increases too; on a wet road, stopping takes more distance than on a dry road. Figure 6 illustrates the variation of the traveled distances as a function of time for different decelerations in the case of a fixed vehicle (B). We notice that for a = 0, that is, when vehicle (A) travels at a constant speed, the collision between the two vehicles occurs after 1.67 seconds. We also notice that no collision ever occurs for a deceleration of a = -12 m/s^2 or less. Thus, the recommended deceleration to avoid a possible collision is a_min < -9 m/s^2. Based on the results shown in Figure 7, if vehicle (B) travels at a lower speed and vehicle (A) still travels at a constant speed (a = 0), a collision with vehicle (B) occurs in 2.5 seconds. For the cases a = -6, -9, and -12 m/s^2, a collision never occurs. The recommended optimal deceleration to avoid a possible collision is a_min < -4 m/s^2.

V. CONCLUSION

In this work, we presented a mechanism called Adapted Speed Mechanism (ASM) allowing the adaptation of speed to maintain the recommended safety distance between vehicles in order to avoid possible accidents. This mechanism is based on VANET network operation and the integration of agents in vehicles. These agents form an intelligent system for communication and collaboration between vehicles by exchanging useful and necessary information to ensure driving in good conditions. In future work, we will improve our system to take into account other factors that can influence the calculation of the safety distance, such as weather conditions, road conditions, and vehicle condition.
Delay-Sensitive NOMA-HARQ for Short Packet Communications

This paper investigates two-user uplink non-orthogonal multiple access (NOMA) paired with hybrid automatic repeat request (HARQ) in the finite blocklength regime, where the target latency of each user is the priority. To limit the packet delivery delay and avoid packet queuing at the users, we propose a novel NOMA-HARQ approach in which the retransmission of each packet is served non-orthogonally with the new packet in the same time slot. We use a Markov model (MM) to analyze the dynamics of uplink NOMA-HARQ with one retransmission and characterize the packet error rate (PER), throughput, and latency performance of each user. We also present numerical optimizations to find the optimal power ratios of each user. Numerical results show that the proposed scheme significantly outperforms standard NOMA-HARQ in terms of packet delivery delay at the target PER.

Introduction

Most of the advancements in wireless cellular communication, such as third generation (3G), fourth generation (4G), and 4G long-term evolution (LTE), are primarily focused on human-centered communication for enabling enhanced mobile broadband (eMBB) communication [1]. The fifth generation (5G) of mobile standards envisions including massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC) in its area of focus, apart from eMBB [2,3]. In eMBB, the file size is usually large, and packet reliability is given priority over packet-level latency [4]. URLLC is a key enabler of various mission-critical applications, such as telesurgery, the tactile Internet, factory automation, and smart grids [5]. URLLC traffic is delay-sensitive; therefore, a delayed packet is considered an erroneous packet. URLLC has two conflicting performance requirements: low latency, which requires user-plane latency below 1 ms, and high reliability, which requires a packet error rate (PER) of less than 10^-6 for a packet of size 32 bytes, with or without retransmission [5,6]. mMTC is the key enabler of Internet of Things applications, such as smart metering, smart agriculture, etc. These services involve massive connectivity and low-power communications to support billions of devices, mainly transmitting short messages [7]. Energy efficiency and massive connectivity are required to enable mMTC. Therefore, mMTC design should be scalable and able to provide various latency and reliability levels [8] to an immense number of devices, expected to reach 50 billion by 2030 [9]. In 5G, short packet communications are considered an effective way to enable low-latency communications for URLLC and mMTC applications. Conventional communication protocols are mainly designed based on Shannon's capacity formula, which is suitable only when the packet length is considered infinite. Such designs usually lead to significant performance losses when the packet length is short [7]. Recently, several performance bounds have been developed for the error rate in the finite blocklength regime, e.g., the normal approximation [10]. In particular, the coding gain is reduced at finite blocklengths, as packets experience a finite number of channel observations and the gap to Shannon's limit increases [6,10]. To compensate for the loss of coding gain, available diversity sources, such as space and frequency, can be utilized [11,12].
Due to limited resources and the fact that many URLLC applications will operate over unlicensed bands, frequency diversity is not a viable solution. Instead, one can utilize retransmission techniques, such as hybrid automatic repeat request (HARQ) [13,14], at the cost of increased latency [15,16]. Non-orthogonal multiple access (NOMA) schemes can be used to exploit channel diversity and resource utilization by simultaneously allocating a channel to multiple users [17]. NOMA has been actively investigated in the past decade, since it can effectively provide higher throughput and flexibility in comparison to orthogonal multiple access (OMA) techniques [7,18,19]. In NOMA, multiple devices can share the same radio resources using the superposition of signals. Successive interference cancellation (SIC) is used to separate the signal of each user at the receiver [20]. NOMA can be implemented by either power or code sharing between users [21]. NOMA has mainly been analyzed in the asymptotic blocklength regime and, more recently, in the finite blocklength regime, showing that NOMA effectively provides better resource utilization and energy efficiency [22-24]. In [22,25], the authors show that NOMA outperforms OMA and reduces latency in the finite blocklength regime. In NOMA, more users can be served over limited channel resources, resulting in higher spectral efficiency, which reduces latency as well [22]. These features make NOMA a potential candidate technique for URLLC and mMTC scenarios.

In HARQ, in case of packet failure, the receiver feeds back a negative acknowledgment (NACK) and requests a retransmission. In contrast, upon receiving an ACK, the transmitter sends a new packet. Upon a retransmission request, the transmitter can either send a duplicate copy of the packet, known as chase combining HARQ (CC-HARQ), or send more redundancy through forward error correcting code, known as incremental redundancy HARQ (IR-HARQ) [26]. The receiver combines the retransmission with the failed packets to increase the decoding reliability. With CC-HARQ, maximum ratio combining (MRC) is used to increase the effective signal-to-noise ratio (SNR), whereas with IR-HARQ, code combining is used to increase reliability. In the asymptotic blocklength regime, when the channel is perfectly known at the transmitter, rate adaptation via adaptive modulation and coding can be used to reduce retransmission requests [27]. However, in the finite blocklength regime, retransmission requests are more probable due to the high error rate of finite-length codes [28]. Both IR-HARQ and CC-HARQ are actively being investigated in the finite blocklength regime [13,15]. The delay performance of a single user with HARQ in the finite blocklength regime over the Rayleigh fading channel is optimized in [29]. HARQ was recently analyzed with NOMA in a two-user downlink setup in [30-33], where the outage performance was analyzed in the infinite blocklength regime with rate and power adaptation. HARQ-enabled NOMA is also studied in [22,23] to evaluate its usefulness in enabling URLLC and mMTC. In [34], the authors analyzed HARQ in an uplink NOMA setting, focusing on making retransmissions distinguishable from regular transmissions to facilitate grant-free HARQ communication. Moreover, in [35], the authors adjust power levels among users to reduce retransmission requests. Retransmission with HARQ causes additional delays in communication.
Efforts have been made to improve retransmission quality, resulting in throughput gains [36]. However, the throughput gain only translates into average delay performance improvements [37]. URLLC and many mMTC scenarios require low latency with a per-packet delay guarantee. In this paper, we consider an uplink NOMA system paired with HARQ for short packet communications, where the target per-packet latency of each user is the priority. Although the HARQ process improves reliability, it increases latency and causes packet queuing. The primary motivation of this work is to increase reliability without adding latency, by maintaining per-packet arrival deadlines. We propose a novel NOMA-HARQ approach, where the retransmission of each packet is served non-orthogonally with the new packet in the same time slot. We use a Markov model (MM) to analyze the dynamics of the uplink NOMA-HARQ with one retransmission and characterize the PER, throughput, and latency performance of each user. We also present numerical optimizations to minimize the PER and find the optimal power ratios of each user. Numerical results show that the proposed scheme significantly outperforms the standard NOMA-HARQ in terms of packet delivery latency at the target PER. The rest of the paper is organized as follows. In Section 2, the system model and preliminaries on NOMA and HARQ are presented. Section 3 presents the proposed delay-sensitive NOMA-HARQ scheme, and its reliability and delay analysis are discussed. Section 4 presents numerical results. Finally, Section 5 concludes the paper.

System Model and Preliminaries

We consider an uplink power-domain NOMA scenario, where N_u users can simultaneously send their messages to the base station (BS). Similar to [38], we consider a time division duplex (TDD) system, so that the BS synchronizes the uplink transmission of each user by sending a beacon signal at the beginning of each time slot. The channel between the i-th user, 1 ≤ i ≤ N_u, and the BS, denoted by g_i, is modeled by large-scale path loss and small-scale Rayleigh fading [39]. We assume that the BS knows the channel state information (CSI) of each user perfectly. The i-th user encodes and modulates its k_i-bit message into a packet of length n symbols and sends it to the BS. Let y(t) denote the received signal at the BS at time t, given as

y(t) = Σ_{i=1}^{N_u} √(P_{t,i}) g_i x_i(t) + w(t),

where x_i(t) ∈ C is the transmitted complex symbol from the i-th user and w(t) ~ CN(0, 1) is the additive white Gaussian noise (AWGN). We assume that E[|x_i(t)|^2] = 1. Let P_i be the received power of user i at the BS, given as P_i = P_{t,i} |g_i|^2, where P_{t,i} is the transmit power of user i, |g_i|^2 = h_i r_i^(-ρ), h_i is the small-scale fading power with exponential distribution, i.e., h_i ~ exp(1), r_i is the distance between the i-th user and the BS, and ρ is the path-loss exponent. We assume a block fading channel model, such that the channel remains constant over a time block and changes independently between blocks. The BS can pair users according to their CSI and SNR levels to meet the desired level of reliability. The BS usually pairs near and far users to exploit their power difference for better SIC decoding. We assume that paired users have certain finite channel gains, so that their transmit power does not exceed the maximum energy budget. A small sampling sketch of this channel model follows.
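The sketch below samples received powers under the channel model above: Rayleigh small-scale fading gives an exponentially distributed power gain h_i ~ exp(1), and the received power is P_i = P_{t,i} h_i r_i^(-ρ). The numeric values (distances, path-loss exponent) are illustrative assumptions.

```python
# Sample received powers under the Rayleigh-fading, path-loss channel model.
import numpy as np

rng = np.random.default_rng(0)

def received_power(p_tx, distance, rho=3.0, n_samples=1):
    h = rng.exponential(1.0, size=n_samples)  # |g|^2 fading power, h ~ exp(1)
    return p_tx * h * distance ** (-rho)

if __name__ == "__main__":
    print("near user:", received_power(1.0, 50.0, n_samples=5))
    print("far user :", received_power(1.0, 120.0, n_samples=5))
```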
Let 0 < w_i < 1 denote the ratio of powers between the paired users, such that P_i = w_i P_c, where P_c denotes the total received power at the BS from the paired users and w_1 + w_2 = 1 when N_u = 2. (We use the parameter w_i to simplify the presentation of the effect of the power difference on the total received power; otherwise, if a total transmit power constraint on each user were used, we would need to specify channel gains while presenting the results.) We assume that w_1 > w_2, treating user 1 as the near user and user 2 as the far user. In practical settings, there may be channel estimation errors that could degrade optimal user pairing as well as the SIC performance of multiple access systems (in future publications, we will incorporate channel estimation errors and their impact on the performance). The receiver first decodes user 1 while treating the messages of the other users as noise. If user 1's signal is successfully recovered, it is removed from the received signal, and user 2's signal is then decoded and removed from the received signal. This continues until all N_u users are decoded. Each user is informed of its decoding status through an instantaneous ACK. Upon receiving an ACK, the user sends a new packet; otherwise, upon receiving a NACK, it retransmits the previous packet, through either CC-HARQ or IR-HARQ, in the next time slot. Generally, user 1 is decoded first due to its higher received power at the BS, unless other users have more copies due to retransmissions. Figure 1a shows packet transmission with NOMA and standard HARQ (S-NOMA-HARQ) [22] when N_u = 2.

We use the normal approximation [10] to characterize the PER in the finite blocklength regime. For CC-HARQ, the bound in [10] can be used with the accumulated SNR after MRC, as follows:

ε(Γ^(m)) = Q( (n C(Σ_{j=1}^{m} γ_j) - k_i) / √(n V(Σ_{j=1}^{m} γ_j)) ),   (3)

and the bound in [40] for parallel AWGN channels can be used to calculate the PER for IR-HARQ, as follows:

ε(Γ^(m)) = Q( (Σ_{j=1}^{m} n C(γ_j) - k_i) / √(n Σ_{j=1}^{m} V(γ_j)) ),   (4)

where Γ^(m) = [γ_1, ..., γ_m] is the vector of signal-to-interference-plus-noise ratios (SINRs) for the m copies of a packet, C(γ) = log2(1 + γ) is the AWGN channel capacity, V(γ) = (1 - (1 + γ)^(-2)) log2^2(e) is the channel dispersion, and Q(·) is the standard Q-function. k_i is the length of user i's message and n is the length of the codeword in each transmission. Accordingly, the rate of user i in the first transmission is R_i = k_i / n. A sketch of these PER computations follows.
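The sketch below evaluates the PER approximations (3) and (4) as reconstructed above, with C(γ) = log2(1+γ) and V(γ) = (1 - (1+γ)^-2) log2^2(e). It follows the standard normal-approximation forms; since the paper's typeset equations are not fully recoverable here, treat the exact expressions as an assumption.

```python
# Finite-blocklength PER via the normal approximation, for CC- and IR-HARQ.
import numpy as np
from scipy.stats import norm  # norm.sf is the Q-function

LOG2E = np.log2(np.e)

def C(g):
    return np.log2(1.0 + g)

def V(g):
    return (1.0 - (1.0 + g) ** -2) * LOG2E ** 2

def per_cc(snrs, n, k):
    """CC-HARQ: evaluate the approximation at the MRC-accumulated SNR."""
    g = sum(snrs)
    return norm.sf((n * C(g) - k) / np.sqrt(n * V(g)))

def per_ir(snrs, n, k):
    """IR-HARQ: accumulate mutual information and dispersion per round."""
    num = sum(n * C(g) for g in snrs) - k
    den = np.sqrt(sum(n * V(g) for g in snrs))
    return norm.sf(num / den)

if __name__ == "__main__":
    n, k = 128, 64  # a short packet: 128 symbols carrying 64 message bits
    print("first tx:", per_cc([1.0], n, k))
    print("CC retx :", per_cc([1.0, 1.0], n, k))
    print("IR retx :", per_ir([1.0, 1.0], n, k))
```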
Delay-Sensitive NOMA-HARQ

As can be seen in Figure 1a, in S-NOMA-HARQ, each i-th user conducts its retransmission with its maximum power in a new time slot. This causes newly arriving packets to be delayed when a retransmission is requested. We propose a delay-sensitive NOMA-HARQ (D-NOMA-HARQ) designed for delay-sensitive applications that avoids the excess delay due to retransmissions and adheres to the packet deadlines. More specifically, user i conducts its retransmission together with its newly arriving packet non-orthogonally. That is, when a retransmission is requested from user i, it superimposes the retransmission packet and the new packet with power fractions α_i and ᾱ_i, respectively, where ᾱ_i = 1 - α_i. The D-NOMA-HARQ scheme for two users with at most one retransmission is shown in Figure 1b.

Reliability and Throughput Analysis of Two-User D-NOMA-HARQ

We use an MM, as shown in Figure 2, whose states are represented by a vector J_u = [J_1, J_2], where J_i ∈ {0, 1, e} is the current state of user i. State 0 refers to a packet success without any retransmission, state 1 refers to a packet success after a single retransmission, and state e refers to a packet failure after a single retransmission.

Lemma 1. For the two-user D-NOMA-HARQ with a maximum of 1 retransmission, i.e., m = 2, the probability of transitioning from state J_u to state J_v, denoted by π_{u→v}, ∀u, v ∈ {1, 2, 3, 4, 5, 6}, is given by the product of the per-user marginal probabilities below, where γ_u^(i) denotes the SINR corresponding to the i-th user at the u-th vector state, γ̃_z^(i) denotes the SINR of the i-th user during retransmission, and z ∈ {1, 2} indicates the variation in SINR due to the different combinations of user states. Here ε(·) is given in (3) and (4) for CC-HARQ and IR-HARQ, respectively.

Proof. Similar to ([41], Lemma 1), the quantities 1 - ε(γ_u^(i)), ε(γ_u^(i))(1 - ε(γ_u^(i), γ̃_z^(i))), and ε(γ_u^(i)) ε(γ_u^(i), γ̃_z^(i)) represent the probabilities that the packet of user i is decoded without retransmission, is decoded with one retransmission, or fails decoding, respectively. These correspond to the probabilities of user i being at states J_i = 0, J_i = 1, and J_i = e. Second, the state transition probability for states J_u (u = 1, ..., 6) is the product of the marginal probabilities of each user's state J_i. For example, when J_1 = 0 and J_2 = 0, 1, or e, the system state transits from J_u to J_1, J_2, and J_3 with the corresponding products of these marginals. When user 1 is successful after a single retransmission, i.e., J_1 = 1, user 2 can only be in two states, i.e., J_2 = 1 or e, and the system state transits from J_u to J_4 or J_5 accordingly. Similarly, the system state transits from J_u to J_6 when user 1 is in error, because when user 1 is in error, user 2 is surely in error as well.

Note that, in D-NOMA-HARQ, the i-th user conducts its retransmission at power α_i P_i, and the remaining ᾱ_i P_i is dedicated to the newly arriving packet. After a retransmission, user 1 is always decoded under the interference of the other user, which determines its SINR during retransmission. The retransmission SINR of user 2 depends on the state of user 1: when the receiver has two copies of user 2's packet and only a single copy of user 1's new transmission, user 2 can be decoded first, treating user 1's new packet as interference. In this situation, the interference power from user 1 is P_1 or ᾱ_1 P_1, depending on the state of user 1, i.e., J_1 = 0 or J_1 = 1, respectively; this gives the two retransmission SINRs γ̃_1^(2) and γ̃_2^(2) of user 2.

When the system is at state J_1, user 1 is decoded first, which yields its SINR γ_1^(1). After removing the interference of user 1, user 2 is decoded; when the previous packet of user 2 was recovered after retransmission, the new packet is the only interference, and after removing the packet of user 1, the new packet of user 2 experiences SNR γ_2^(2) = ᾱ_2 P_2. When the system is at state J_3, the previous packet of user 2 is not decoded, so user 1 experiences higher interference. At state J_4, after removing the interference due to user 1, user 2 experiences interference only from its own retransmission packet, and γ_4^(2) = γ_2^(2), since the previous packets of both users were decoded successfully and their interference removed. At state J_5, user 2 is in error; therefore user 1 is decoded under its interference. Finally, when the system is at state J_6, user 1 is also not decoded, so it causes interference for user 2, i.e., γ_6^(2) = ᾱ_2 P_2 / (α_1 P_1 + α_2 P_2 + 1).
P_stat = [p_1, ..., p_6]^T denotes the stationary distribution corresponding to the MM in Figure 2. The PER of user i, denoted by ξ_i, is given by

ξ_i = Σ_{u ∈ E_i} p_u,    (5)

where E_1 = {6} and E_2 = {3, 5, 6}. This follows directly from the fact that the stationary distribution of the system is characterized by the eigenvector of the matrix Π^T corresponding to the eigenvalue 1, and the PER is simply the stationary probability of user i being in an error state.

Remark 2. With D-NOMA-HARQ, the throughput of user i, denoted by η_i(n, k_i), is accordingly given by

η_i(n, k_i) = (k_i/n)(1 − ξ_i).    (6)

This is because, with D-NOMA-HARQ, user i sends a new packet of length n with k_i message bits in each time slot, and it is received correctly at the receiver with error rate ξ_i.

Packet Delivery Delay Profile of D-NOMA-HARQ

As there is no queuing with D-NOMA-HARQ, each user sends at most N + 1 packets when N packets are scheduled for transmission. The delay profile of each user can be characterized as

Pr(D_i = d) = C(N, d) (1 − Σ_{u∈S_i} p_u)^d (Σ_{u∈S_i} p_u)^{N−d}, d = 0, 1, ..., N,    (7)

where C(N, d) denotes the binomial coefficient, D_i is the number of user i's packets delayed by one time slot, and S_1 = {1, 2, 3} and S_2 = {1} denote the states of users 1 and 2, respectively, in which a packet is successful without any retransmission.

Packet Delivery Delay Profile of S-NOMA-HARQ

The authors in [22] evaluated S-NOMA-HARQ with a single retransmission and derived the PER ([22], Equation (14)) and throughput ([22], Equation (17)) for a given power allocation ratio. In particular, the packet success probability of user i without retransmission, denoted by p^s_i, is derived in ([22], Equation (16)). Since each retransmission delays the transmission of the new packet by one time slot, the delay profile of S-NOMA-HARQ for delivering N packets can be calculated as

Pr(D_i = d) = C(N, d) (1 − p^s_i)^d (p^s_i)^{N−d}, d = 0, 1, ..., N,    (8)

because each packet of user i succeeds with a single transmission with probability p^s_i, and each retransmission causes a delay of one time slot with probability 1 − p^s_i. In S-NOMA-HARQ, since packets are orthogonal to each other, the packet delivery delay follows a binomial distribution.

Generalized N_u-User Setup

We can extend the model to a general number of users N_u. When a single retransmission is allowed, each user i, 1 ≤ i ≤ N_u, can be in one of the packet states J_i ∈ {0, 1, e}. Consequently, there are at most 3^{N_u} vector states, denoted J_u = [J_1, J_2, ..., J_{N_u}], u = 1, ..., 3^{N_u}. We use SIC decoding, where users are decoded based on their received power levels. We assume user i is relatively closer to the BS than user j when j > i; for example, user 1 is considered the near user and user 2 the far user. We assume that, with an equal number of received packets, the total received power of user i is always higher than that of user j. Therefore, the transmit power constraint of each user is implemented as w_j ≤ w_i and α_i ≤ α_j, where w_i is the fraction of the total received power of user i, i.e., Σ_{i=1}^{N_u} w_i = 1, and α_i ∈ [0, 1] is the power fraction of the retransmitted signal. Consequently, when user i is in error, i.e., J_i = e, user j cannot be decoded either, i.e., J_j = e, for i < j ≤ N_u. As a result, if the BS fails to decode user 1, all subsequent users also fail decoding due to their lower received power. Similarly, if a user is successfully decoded only after retransmission, no subsequent user can succeed with a single transmission. This assumption reduces the total number of states in the Markov model; the reduction in the maximum number of states, denoted U, is significant: for example, from 9 to 6 and from 27 to 10 for N_u = 2 and N_u = 3, respectively. Lemma 1 can be extended to the general N_u-user case.
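The quantities in Remarks 1 and 2 and the delay profiles (7) and (8) follow mechanically once the transition matrix Π of Lemma 1 is available. The sketch below shows one way to compute them; the uniform placeholder matrix is there only to keep the example runnable and must be replaced by the actual π_{u→v}.

```python
import numpy as np
from scipy.stats import binom

def stationary_distribution(Pi):
    """Stationary distribution: eigenvector of Pi^T for the eigenvalue 1."""
    w, v = np.linalg.eig(Pi.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return p / p.sum()

# Placeholder 6x6 transition matrix (rows sum to 1); fill in pi_{u->v} from Lemma 1.
Pi = np.full((6, 6), 1.0 / 6)
p = stationary_distribution(Pi)

E = {1: [5], 2: [2, 4, 5]}     # error states E_1 = {6}, E_2 = {3, 5, 6}, 0-indexed
S = {1: [0, 1, 2], 2: [0]}     # success-without-retransmission states S_1, S_2

n, k, N = 100, 50, 1000
for i in (1, 2):
    xi = p[E[i]].sum()                 # PER, Eq. (5)
    eta = (k / n) * (1 - xi)           # throughput, Eq. (6)
    q = 1 - p[S[i]].sum()              # per-packet probability of a one-slot delay
    delay_profile = binom.pmf(np.arange(N + 1), N, q)   # Eq. (7)
    print(f"user {i}: PER={xi:.3g}, throughput={eta:.3g}, "
          f"mean delayed packets={N * q:.1f}")
```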
Let us consider a Markov model whose states are denoted by J_u, u = 1, ..., U, where U is the total number of states. Let P_ℓ(i, u) denote the probability that user i transits to state ℓ, i.e., J_i = ℓ, where ℓ ∈ {0, 1, e}, when the system state is J_u. Note that each state J_u corresponds to a specific combination of the users' packet states, i.e., J_u = [J_1, ..., J_{N_u}]; for example, when N_u = 2, J_1 = [0, 0] and J_6 = [e, e]. The state transition probability from state J_u is the product of the marginal probabilities of each user's state J_i = ℓ. We can thus define the state transition probabilities of a general N_u-user setup. When a user i (1 ≤ i ≤ N_u) is decoded with a single transmission, its probability is P_0(i, u) = 1 − ε(γ^(i)_u). Consequently, user j, which comes next in the decoding order, can be decoded with probabilities P_0(j, u) = 1 − ε(γ^(j)_u), P_1(j, u) = ε(γ^(j)_u)(1 − ε([γ^(j)_u, γ̄^(j)_z])), and P_e(j, u) = ε(γ^(j)_u) ε([γ^(j)_u, γ̄^(j)_z]) for being in states J_j = 0, J_j = 1, and J_j = e, respectively. If user i is recovered after retransmission, i.e., J_i = 1, whose probability is P_1(i, u) = ε(γ^(i)_u)(1 − ε([γ^(i)_u, γ̄^(i)_z])), the subsequent user j can only have two states, J_j = 1 and J_j = e, with probabilities P_1(j, u) = 1 − ε([γ^(j)_u, γ̄^(j)_z]) and P_e(j, u) = ε([γ^(j)_u, γ̄^(j)_z]), respectively. Finally, when the decoding of user i fails, i.e., J_i = e, which happens with probability P_e(i, u) = ε(γ^(i)_u) ε([γ^(i)_u, γ̄^(i)_z]), all subsequent users definitely fail decoding, i.e., P_e(j, u) = 1. For N_u = 3, when all users are successfully decoded with a single transmission, the system transits to state J_v = [0, 0, 0]. In general, the system transits from state J_u to J_v with probability

π_{u→v} = Π_{i=1}^{N_u} P_{ℓ_i(v)}(i, u),    (9)

where ℓ_i(v) is the packet state of user i in J_v, and γ^(i)_u and γ̄^(i)_z are the SINRs during the first transmission and the retransmission, respectively.

Now we provide general guidelines for calculating the SINRs associated with the state transition probabilities for general N_u users. The SINRs can be calculated based on the SIC decoding order. In general, the user with the highest received power is decoded first under the interference of the other users with lower power. Furthermore, if a user's message is decoded successfully, its interference is removed; otherwise, it causes interference. Moreover, when the BS holds more copies of a packet due to a user's retransmission, that user is given priority over the usual decoding order, because with more copies a weaker user may be decoded with better quality than a stronger user. For example, when user j is retransmitting and thus has one more copy of its packet than user i, it is decoded prior to user i; consequently, if user j is decoded successfully, its interference is eliminated and user i experiences less interference. These different user configurations correspond to the states of the Markov model, denoted by J_u. In general, the SINR of user i during its first transmission is given by

γ^(i)_u = ᾱ_iP_i / (α_iP_i + Σ_{j∈I_j} P_j − Σ_{k∈I_k} α_kP_k + 1),    (10)

where 1 ≤ i ≤ N_u, I_j ⊆ {i + 1, ..., N_u + 1} with P_{N_u+1} = 0 and α_{N_u+1} = 0 for notational consistency, and I_k ⊆ I_j is the set of indices of users that are successfully decoded after retransmission. Moreover, P_j − α_jP_j = ᾱ_jP_j. The α_iP_i term in the denominator of (10) is the interference caused by user i's own previously retransmitted packet; it vanishes if that packet has been decoded successfully, as γ^(1)_5 in Lemma 1 shows for the N_u = 2 case. If there is no retransmitted packet at all, the SINR changes through ᾱ_i = 1, as shown by γ^(1)_2 in Lemma 1 for N_u = 2. Moreover, Σ_k α_kP_k is the interference removed by successfully decoding the users with index set I_k; if the decoding fails, this interference cannot be removed. Consequently, the SINR may degrade to γ^(i)_u = P_i/(Σ_{j∈I_j} P_j + 1), which shows that the BS decodes the message of user i under the interference of the subsequent users j at their maximum power. User i always conducts its retransmission under the interference of its own newly arriving packet and the packets of the subsequent users. The general expression for the SINR of user i during retransmission can accordingly be written as

γ̄^(i)_z = α_iP_i / (ᾱ_iP_i + Σ_{j∈I_j} P_j − Σ_{k∈I_k} α_kP_k + 1),    (11)

where 1 ≤ i ≤ N_u. Upon successfully decoding user i after retransmission, the BS can remove its interference, which amounts to α_iP_i, when decoding any user j with i < j.
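The SINR bookkeeping in (10) and (11) condenses into a small helper. The sketch below is a hedged implementation of our reconstructed (10): the function signature, the dictionaries, and the example numbers are ours, and the per-state choice of interferers and removed copies must be supplied according to the SIC rules described above.

```python
def first_tx_sinr(i, P, alpha, interferers, removed, noise=1.0):
    """
    Hedged implementation of Eq. (10): SINR of user i's new packet.

    P[i]        -- received power of user i (P_i = w_i * P_c)
    alpha[i]    -- fraction of P_i spent on user i's retransmission
                   (alpha[i] = 0 if user i has no pending retransmission)
    interferers -- indices j of not-yet-decoded users (the set I_j)
    removed     -- indices k in I_j whose retransmitted copies were already
                   decoded and cancelled by SIC (the set I_k)
    """
    signal = (1.0 - alpha[i]) * P[i]                    # new-packet power
    own_retx = alpha[i] * P[i]                          # own retransmitted copy
    other = sum(P[j] for j in interferers)              # undecoded users
    cancelled = sum(alpha[k] * P[k] for k in removed)   # SIC-removed copies
    return signal / (own_retx + other - cancelled + noise)

# Two-user example: P_c = 10, w = (0.6, 0.4), user 2 retransmitting with alpha_2 = 0.5.
P = {1: 6.0, 2: 4.0}
alpha = {1: 0.0, 2: 0.5}
print(first_tx_sinr(1, P, alpha, interferers=[2], removed=[]))  # user 1 decoded first
print(first_tx_sinr(2, P, alpha, interferers=[], removed=[]))   # after removing user 1
```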
With all the state transition probabilities, the stationary distribution corresponding to the MM can be calculated using standard methods. Finally, by accumulating the stationary probabilities corresponding to the erroneous packet states of each user (J_i = e), the PER and throughput can be obtained for N_u users similarly to Equations (5) and (6).

Remark 3. Note that, in the general N_u-user case, we keep the maximum allowed number of retransmissions of a failed packet at one. In URLLC, the number of retransmissions is kept small to minimize the delay. However, the model in this paper can be extended to a maximum of M retransmissions with (M + 2)^{N_u} MM states, where, in general, the success probability of user i at the m-th retransmission can be written as 1 − ε([γ^(i)_u, γ̄^(i)_{z,1}, ..., γ̄^(i)_{z,m}]). Various combinations of the products of m such per-user terms, according to the individual users' states, then define the state transition matrix. We skip the details to keep the presentation simple.

Numerical Results

In the simulations, we consider an uplink NOMA system with two users and allow a maximum of one retransmission using HARQ. Using the MM, the PER, delay, and throughput performance are analyzed for each user for various transmission rates R_i, SNRs, power-splitting ratios (w_i for NOMA and α_i for non-orthogonal HARQ), and packet lengths n. For simplicity of presentation, we set k_1 = k_2 = k unless specified otherwise. For the simulations, we primarily focus on the CC-HARQ scheme for the detailed analysis and compare it with IR-HARQ in some special cases. When IR-HARQ is employed, the retransmission parity length can be adjusted at the cost of a slight signaling overhead; however, this can increase packet-level latency. In contrast, when CC-HARQ is employed, the whole packet is repeated to increase reliability. Therefore, CC-HARQ is more suitable for URLLC applications due to its simpler design with less signaling overhead.

Figure 3 shows the error rate performance comparison of the proposed D-NOMA-HARQ and S-NOMA-HARQ at different SNRs and rates. We fix the packet length n = 100 and use k = 100 and k = 50 to model the rates R = 1 and R = 0.5, respectively. As shown in this figure, the PER performance of D-NOMA-HARQ is worse than that of S-NOMA-HARQ, and there exists an SNR performance gap. More specifically, the SNR gaps for a target PER of 10^-4 at rate R = 1 are about 6 dB and 8 dB for users 1 and 2, respectively. However, by reducing the rate to R = 0.5, the PER performance gap reduces to about 3 dB and 1 dB for users 1 and 2, respectively, over the target PER range of 10^-4 to 10^-6. This is because D-NOMA-HARQ is designed for a target delay performance and uses less power and fewer time slots by serving retransmission requests non-orthogonally together with newly arriving packets, whereas in S-NOMA-HARQ the retransmission is conducted with full power in a new time slot. Overall, D-NOMA-HARQ thus conducts its transmissions with more efficient power and bandwidth utilization.
Another cause of the PER performance loss is that in D-NOMA-HARQ the non-orthogonal sharing of packets causes interference if SIC is unsuccessful. In contrast, S-NOMA-HARQ allocates full resources, i.e., whole time slots, and the SNR is the same for its transmissions and retransmissions. Fortunately, at a lower rate, i.e., R = 0.5, the non-orthogonal retransmission and new transmission can be decoded with a higher success rate using SIC; therefore, the SNR gap between D-NOMA-HARQ and S-NOMA-HARQ decreases.

Figure 4 shows the throughput-versus-SNR comparison between S-NOMA-HARQ and the proposed D-NOMA-HARQ at two rates, R = 1 and R = 0.5. As can be seen in the figure, S-NOMA-HARQ achieves higher throughput than D-NOMA-HARQ when the SNR is low. With D-NOMA-HARQ at low SNRs, SIC decoding is more error-prone because of the overlapping transmission and retransmission packets, whereas the reliability of S-NOMA-HARQ is superior because its HARQ retransmissions are conducted independently of the regular transmissions, with maximum power and a separate time slot. However, as the SNR increases and packets can be recovered using SIC more reliably, the excessive retransmissions of S-NOMA-HARQ lead to throughput saturation. D-NOMA-HARQ, on the other hand, maintains a steady throughput gain with SNR and eventually attains the same throughput as S-NOMA-HARQ. Note that the throughput of D-NOMA-HARQ exceeds that of S-NOMA-HARQ at specific rate and SNR operating points, such as R = 1 and SNRs from 8 dB to 15 dB. This throughput gain of D-NOMA-HARQ over S-NOMA-HARQ can be seen clearly in Figure 4 at SNR = 10 dB and R = 1, because at this SNR and rate the maximum number of retransmitted packets can be recovered using SIC with very high reliability. Finally, the throughput of D-NOMA-HARQ and S-NOMA-HARQ eventually becomes similar when the SNR is very high, such that no retransmission is required and all packets succeed with a single transmission.

Effect of SNR and Rate on PER, Throughput, and Delay

We present the delay performance comparison between the proposed D-NOMA-HARQ and S-NOMA-HARQ in Figure 5. We assume that N = 1000 packets are scheduled to be transmitted, and each packet has a specific deadline by which it must reach the receiver. We assume feedback, decoding, and other processing delays to be zero and account only for the retransmission delay. The overall delay of the 1000 packets is normalized to 1; therefore, if all packets are delivered by their respective deadlines, the delay overhead is zero, whereas if a packet is received after one unit of delay due to retransmission, it incurs a delay overhead. For a specific transmission rate R and SNR, each scheduled user has an average PER reliability, as can be seen in Figure 3. By contrast, the packet-level delay derived in (8) is not an average delay measure and depends upon the number of scheduled packets; the error rate of each of the N scheduled packets can be found from (5). The D-NOMA-HARQ design does not allow a packet to be delayed by more than one time slot, whereas each retransmission in S-NOMA-HARQ delays all subsequent packets by one slot. Therefore, the delay performance of D-NOMA-HARQ is much better than that of S-NOMA-HARQ. In both S-NOMA-HARQ and D-NOMA-HARQ, a packet is discarded if the maximum retransmission limit is reached and the receiver is still unable to decode it.
Figure 5. Delay performance comparison between D-NOMA-HARQ and S-NOMA-HARQ with w_1 = 0.6 (w_2 = 1 − w_1) and CC-HARQ when m = 1, α_1 = 0.5, and α_2 = 0.4, at various SNRs and rates.

As shown in Figure 3, at rate R = 1 and SNR = 12 dB, S-NOMA-HARQ achieves PER ≈ 10^-8; however, it delays a significantly high number of packets (shown in Figure 5). In contrast, D-NOMA-HARQ provides a reliability of PER ≈ 10^-4 with a much stronger packet-level latency guarantee. Moreover, by increasing the SNR or reducing the rate, D-NOMA-HARQ can provide the desired packet-level performance without violating the packet-level delay deadlines. For example, as can be seen in Figure 3, at rate R = 0.5 and SNR ≈ 4 dB, D-NOMA-HARQ achieves PER ≈ 10^-6. S-NOMA-HARQ achieves a much better PER there, but Figure 5 shows that at rate R = 0.5 and SNR = 4 dB the delay performance of S-NOMA-HARQ is much inferior to that of D-NOMA-HARQ. More specifically, S-NOMA-HARQ reaches the delay performance of D-NOMA-HARQ at R = 0.5 only at SNR = 8.1 dB (an additional 4.1 dB of SNR for each user), as shown in Figure 5, whereas D-NOMA-HARQ reaches the PER performance of S-NOMA-HARQ at R = 0.5 with only 1 dB and 3.5 dB of additional SNR for users 1 and 2, respectively, as shown in Figure 3. This shows that D-NOMA-HARQ is much superior to S-NOMA-HARQ in providing a given level of PER reliability together with packet-level delay performance.

Effect of Packet Length n on PER and Throughput

One can observe the effect of increasing the packet length while keeping the rate R = k/n fixed. Figure 6 shows the PER performance variation of the proposed D-NOMA-HARQ and S-NOMA-HARQ with increasing packet length. We use two packet lengths, n = 100 and n = 500. It is clear from the figure that the reliability of both schemes improves with the packet length. This performance gain with increasing length follows from the normal approximation under the finite-blocklength assumption [10]. When the packet length is short, i.e., n = 100, the reliability gain with SNR is small due to poor SIC decoding capability. When the packet length increases, the receiver can exploit longer codewords to decode the packets better and to remove successfully decoded packets from the non-orthogonal superposition using SIC. Therefore, when the packet length is large, i.e., n = 500, the PER performance of D-NOMA-HARQ improves steadily with SNR.

Figure 7 shows the throughput performance of the proposed D-NOMA-HARQ and the baseline S-NOMA-HARQ with packet lengths n = 100 and n = 500. As shown in this figure, the throughput of both schemes improves with increasing packet length, especially in the low-SNR regime. When the packet length is large, the performance of S-NOMA-HARQ starts to saturate earlier, in the medium-SNR range, because with S-NOMA-HARQ the retransmission penalty is larger when n is large. Since D-NOMA-HARQ conducts its retransmissions through shared power and time resources, it does not incur this retransmission overhead; it therefore achieves a steady throughput gain even when the packet length is large. Nevertheless, we see a saturating throughput trend for user 1 with D-NOMA-HARQ. This is mainly because the retransmission power fraction α_1 is excessive when n is larger. In practice, a user should adapt its retransmission power to the packet length: when the packet length is larger, smaller retransmission power fractions α_i are sufficient, and vice versa.
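As a quick numerical illustration of the packet-length effect (not a reproduction of Figures 6 and 7, which come from the full Markov model), the sketch below evaluates the single-transmission normal-approximation PER of (3) at a fixed rate R = 0.5 for n = 100 and n = 500; the SNR grid is our own choice.

```python
import numpy as np
from scipy.stats import norm

def per_normal_approx(gamma, n, k):
    """Single-shot normal-approximation PER at SINR gamma (Eq. (3) with m = 1)."""
    C = np.log2(1.0 + gamma)
    V = (1.0 - (1.0 + gamma) ** -2) * np.log2(np.e) ** 2
    return norm.sf((n * C - k) / np.sqrt(n * V))

R = 0.5
for n in (100, 500):
    k = int(R * n)   # fixed rate, so k scales with n
    for snr_db in (0, 2, 4, 6):
        g = 10 ** (snr_db / 10)
        print(f"n={n:3d}, SNR={snr_db} dB: PER ~ {per_normal_approx(g, n, k):.2e}")
```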
The effect of the power-sharing parameters and their optimization is discussed in detail in the following.

In practical systems, users adapt their transmission rate to their respective channel quality to increase reliability. Each user can transmit at a different rate, according to its quality-of-service requirements, by changing k. We set R_1 = k_1/n and R_2 = k_2/n by fixing the blocklength n and varying k_1 and k_2 for user 1 and user 2, respectively. We assume k_1 < k_2 to give user 1 higher decoding reliability than user 2. Figures 8 and 9 show the PER and throughput performance of each user for various k_1 and k_2. As can be seen in the figures, a higher value of k_1 leads to lower PER reliability, and by choosing a different k for each user, different levels of reliability can be achieved. The PER is reduced with smaller values of k; however, this also reduces the maximum achievable throughput, as can be seen in Figure 9. This is because at SNR = 4 dB and n = 100 a higher rate can be chosen with acceptable PER reliability. The throughput increases steadily with k_1; however, setting k_1 very high decreases the PER reliability, and the throughput starts to saturate. We can see in Figure 8 that k_j ≥ 75 results in severe PER degradation, and consequently the throughput starts to saturate for both users. Since user 1 operates at a higher transmission power than user 2, the throughput loss is higher for user 2 when k_j is very high.

Figures 10 and 11 show, respectively, the PER and throughput performance of D-NOMA-HARQ when N_u = 3, for different code rates R. As can be seen in Figure 10, when the code rate is high, the PER performance is poor at low SNRs and improves only slowly at medium and high SNRs. The corresponding throughput is also poor at low SNRs and increases gradually when R = 0.7. This is because, when the number of users increases, each user experiences relatively higher interference. However, by reducing the rate, i.e., R = 0.6 and R = 0.5, the early saturation of the PER can be avoided, which results in higher throughput at low SNRs. Therefore, when the number of users with D-NOMA-HARQ increases, proper code rates should be chosen to achieve the target PER and throughput for all users.

We use w_1 = 0.6 and 0.51 (w_2 = 1 − w_1) to show the performance variation of each user with different power fractions. The parameter w_i indicates the power fraction of the NOMA-paired users sending packets on the same channel with either S-NOMA-HARQ or D-NOMA-HARQ. In addition to w_i, α_i indicates the power fraction of the retransmitted packet for D-NOMA-HARQ. First, we present the effect of the parameter w_i on the performance of S-NOMA-HARQ. Figure 12 shows the PER of S-NOMA-HARQ and D-NOMA-HARQ with different w_i. As shown in this figure, with S-NOMA-HARQ, when the power difference between users is small, i.e., w_1 = 0.51, both users achieve similar PER performance as the SNR increases; more specifically, at SNR ≈ 3 dB, both users achieve a PER of about 10^-6 with S-NOMA-HARQ. However, by increasing the power difference between users (w_1 = 0.6), user 1 achieves much higher reliability at the cost of a small increase in the PER of user 2. When the power difference between users is more considerable (w_1 = 0.6), SIC works better, resulting in PER and throughput performance improvements.
However, increasing w_i too much leads to a PER performance disparity among users, where the user with higher power gets much higher reliability while the PER reliability of the other user decreases. More specifically, at w_1 = 0.6, the PER of user 1 is below 10^-8 while user 2's PER is slightly above 10^-6. With D-NOMA-HARQ, on the other hand, a larger power difference w_i can be placed between users to improve SIC decoding, while the performance disparity among users can be reduced by adjusting the parameter α_i. As shown in Figure 12, with D-NOMA-HARQ at w_1 = 0.6, both users achieve similar PER performance. Note that another drawback of S-NOMA-HARQ is that even a slight increase in a user's PER results in much worse latency performance; therefore, increasing w_i too much is also prohibitive. Figure 13 shows the delay performance of each user with S-NOMA-HARQ at an SNR of 4 dB together with their PER performance. Using a higher power difference between users, i.e., a larger w_1, the PER reliability of one user increases beyond what is needed while the other user suffers a slight PER degradation; however, this slight PER degradation with S-NOMA-HARQ translates into a significant packet-level latency degradation.

Next, we present the effect of the parameters w_i and α_i on the PER of D-NOMA-HARQ when N_u = 2. Figure 14 shows the PER performance of both users for various settings of w_i and α_i. Note that the choice of w_i and α_i greatly affects the PER performance of D-NOMA-HARQ. Fixing α_1 and w_1, we vary α_2 to see the performance variation across different settings. As can be seen, at smaller values of w_1, e.g., 0.55, both users achieve similar PER performance; however, due to the small power difference between users at w_1 = 0.55, SIC does not perform well. By increasing w_1 to 0.6, the PER can be reduced further. Furthermore, when α_i is large, e.g., 0.7, excessive power is assigned to the retransmission, leaving insufficient power for newly arriving packets to be decoded successfully; by reducing α_i to 0.6, a further PER reduction is achieved.

Optimization of w_i and α_i

We now consider an optimization problem to find the optimal power-splitting ratios, w_i (w_2 = 1 − w_1) and α_i, that minimize the worst PER among both users when using D-NOMA-HARQ. For a given SNR and rate R = k_j/n, the optimization problem is summarized below:

min_{w_1, α_1, α_2} max{ξ_1, ξ_2}    (12)
s.t. C_1: 0 ≤ α_2 ≤ α_1 ≤ 1,
     C_2: 0.5 < w_1 ≤ 1,

where condition C_1 limits the search space for practical optimization and C_2 allocates a higher power to user 1. We numerically solved (12) by fixing w_i and searching over α_1 and α_2. Table 1 shows the optimal values of α_1 and α_2 when R = 0.5 at different SNRs and w_i. As can be seen in the table, IR-HARQ performs slightly better than CC-HARQ. Furthermore, as the SNR increases, higher values of α_i can be chosen to improve the performance. Also, with a higher value of w_1 and a proper choice of α_i, the PER performance improves due to better SIC. The parameter w_i relates to user pairing: as the BS is assumed to know the CSI of each user, it can choose w_i by pairing users with a specific power difference to meet the target PER requirements of each user. As shown in Table 1, when w_1 = 0.55, a PER of 10^-6 can be achieved for both users by choosing α_1 = α_2 = 0.6. The advantage of D-NOMA-HARQ is that this level of reliability is achieved with much lower latency than with S-NOMA-HARQ.
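A minimal way to solve (12) numerically, matching the paper's approach of fixing w_i and searching over α_1 and α_2, is a grid search over the constraint set C_1. The function worst_per below is a placeholder for the Markov-model PER evaluation of Equation (5); its signature, the dummy surrogate objective, and the grid resolution are our own choices.

```python
import numpy as np

def worst_per(w1, a1, a2, snr_db):
    """Placeholder: should return max(xi_1, xi_2) from the Markov-model PER (Eq. (5)).
    A smooth dummy surrogate is used here only to keep the sketch runnable."""
    return (a1 - 0.6) ** 2 + (a2 - 0.55) ** 2 + 1e-6 * (1 + w1) / (10 ** (snr_db / 10))

def optimize_alphas(w1, snr_db, step=0.05):
    """Grid search over C1: 0 <= alpha_2 <= alpha_1 <= 1, for fixed w1 (C2: 0.5 < w1 <= 1)."""
    best = (None, None, np.inf)
    for a1 in np.arange(0.0, 1.0 + 1e-9, step):
        for a2 in np.arange(0.0, a1 + 1e-9, step):   # enforces alpha_2 <= alpha_1
            per = worst_per(w1, a1, a2, snr_db)
            if per < best[2]:
                best = (a1, a2, per)
    return best

a1, a2, per = optimize_alphas(w1=0.55, snr_db=6)
print(f"optimal alpha_1={a1:.2f}, alpha_2={a2:.2f}, worst-user PER={per:.3g}")
```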
Compared with standard NOMA, D-NOMA-HARQ has a higher SIC decoding complexity, which also incurs some delay in decoding. This is because the S-NOMA-HARQ receiver performs SIC only to separate the signals of different users, whereas in D-NOMA-HARQ the retransmissions and regular packets of a single user are also separated using SIC. However, the additional decoding delay is much smaller than the delays caused by packet retransmissions, feedback, processing, and queuing. Fortunately, the SIC complexity can be significantly reduced using efficient parallel decoding techniques. The exact analysis is beyond the scope of this work and would be more relevant when an actual encoder and decoder, such as low-density parity-check (LDPC) codes, are used. An exact complexity analysis could be performed by adding the associated SIC cost to the retransmission states of the MM in Figure 2, and the analysis can be extended to incorporate the decoding delay simply by adding a delay penalty at the packet retransmission states (J_i = 1) of the MM. Furthermore, D-NOMA-HARQ requires multi-bit feedback signaling, because more signals are received in a time slot during retransmission than with S-NOMA-HARQ.

Conclusions

In this paper, we proposed a multiuser uplink strategy for delay-sensitive applications. NOMA was used to allow simultaneous transmission of users' packets and also to let retransmissions share resources with newly arriving packets to limit delay. In this way, the target reliability is achieved without causing queuing for any of the users. We analyzed the throughput, PER, and delay performance of the proposed scheme using a Markov model. We also formulated and solved an optimization problem to find the power-sharing parameters that minimize the PER. The results show that the proposed scheme significantly outperforms the standard NOMA-HARQ scheme in terms of packet delivery delay.
10,775.8
2021-07-01T00:00:00.000
[ "Computer Science", "Engineering" ]
DISCRETE N-BARRIER MAXIMUM PRINCIPLE FOR A LATTICE DYNAMICAL SYSTEM ARISING IN COMPETITION MODELS. In the present paper, we show that an analogous N-barrier maximum principle (see [3, 7, 5]) remains true for lattice systems. This extends the results in [3, 7, 5] from continuous equations to discrete equations. In order to overcome the difficulty induced by the discretized version of the classical diffusion in lattice systems, we propose a more delicate construction of the N-barrier which is appropriate for the proof of the N-barrier maximum principle for lattice systems. As an application of the discrete N-barrier maximum principle, we study a coexistence problem of three species arising from biology, and show that the three species cannot coexist under certain conditions.

1. Introduction and main results. This paper establishes a generalization of the N-barrier maximum principle (NBMP) from second-order differential operators, as in [3, 6], to second-order difference operators in the following boundary value problem for the two-component lattice dynamical system ([12, 13]), where x ∈ R and the parameters h, d, k, a_1, a_2 are positive constants. For the limiting case h → 0^+, the NBMP of (BVP) has been established in [3, 6]. (BVP) is a discrete version of the corresponding continuous problem (recovered as h → 0^+), which arises in finding a traveling wave solution of the form

(u(y, t), v(y, t)) = (u(x), v(x)), x = y − θt    (1)

to the Lotka-Volterra system of two competing species, where u(y, t) and v(y, t) represent the densities of the two species u and v, respectively, and R is the habitat of the two species. For the problem arising in ecology as to which species will survive in a competitive system, traveling wave solutions serve an important role in understanding the competition mechanism of species. In (1), θ is the propagation speed of the traveling wave, which is an important index for understanding the competition mechanism. When θ > 0 in (BVP), u survives and v dies out eventually; when θ < 0 in (BVP), v survives and u dies out eventually. Clearly, (2) has four constant equilibria: e_1 = (0, 0), e_2 = (1, 0), e_3 = (0, 1), and e_4 = (u*, v*), which is the intersection of the two lines 1 − u − a_1v = 0 and 1 − a_2u − v = 0 whenever it exists. The asymptotic behavior of solutions (u(x, t), v(x, t)) of (2) with initial conditions u(x, 0), v(x, 0) > 0 can be classified into four cases. In this paper, we restrict ourselves to the case where one of (i), (ii), (iii), and (iv) in Theorem 1.1 occurs. To establish the discrete NBMP for (BVP), we first observe that, without loss of generality, we may assume θ ≥ 0 by letting x̃ = −x and interchanging the boundary conditions at ±∞. Throughout this paper we shall assume, unless otherwise stated, that θ ≥ 0. The sign of θ determines which species is stronger and survives in the ecological system (BVP). The main contribution of the discrete NBMP for (BVP) is to provide a priori lower bounds for linear combinations of the components of (u(x), v(x)). More precisely, our discrete NBMP gives an affirmative answer to the following question.

Q: For any h > 0, can we establish the discrete NBMP for (BVP), i.e., can we find nontrivial lower bounds, depending on the parameters in (BVP), for α u(x) + β v(x), where (u(x), v(x)) solves (BVP) and α, β are arbitrary positive constants?

When h → 0^+ and d = 1, upper and lower bounds of u(x) + v(x) can be given by the classical elliptic maximum principle ([4]). When h → 0^+ and d ≠ 1, an affirmative answer to an even more general problem of estimating α u(x) + β v(x) is given in [3].
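The display for (BVP) did not survive extraction. For orientation only, a plausible form of the two-component lattice system, inferred from the parameter list (h, d, k, a_1, a_2, θ), the nullclines 1 − u − a_1v = 0 and 1 − a_2u − v = 0, and the substitution "d = k = 1, a_2 = a_1" used later in Section 2, is sketched below; the precise placement of d and k, and the boundary conditions, should be checked against [12, 13].

```latex
\begin{cases}
\dfrac{u(x+h) + u(x-h) - 2u(x)}{h^{2}} + \theta\, u'(x)
  + u(x)\bigl(1 - u(x) - a_{1} v(x)\bigr) = 0, \\[2mm]
d\,\dfrac{v(x+h) + v(x-h) - 2v(x)}{h^{2}} + \theta\, v'(x)
  + k\, v(x)\bigl(1 - a_{2} u(x) - v(x)\bigr) = 0, \\[2mm]
(u, v)(-\infty) = e_{2}, \qquad (u, v)(+\infty) = e_{3}.
\end{cases}
\tag{BVP}
```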
From an economic point of view, one motivation for addressing the above question is as follows. Suppose that the two species U and V in (BVP) are commercial farming animals or cash crops which are grown for profit. Let u and v represent the units of U and V, respectively, and let the price of each U unit be α and that of each V unit be β; then p(x) = α u(x) + β v(x) measures the total value of the two species at x.

It turns out from the construction of the N-barrier (see Section 3) and the proof of Theorem 1.2 (see Section 4) that an analogous discrete NBMP remains true for more general nonlinearities or reaction terms in (BVP*). In addition, it is easy to see that a trivial upper bound of p(x) holds whenever (u(x), v(x)) solves (BVP), without assuming the monotonicity of u(x) and v(x). Employing our N-barrier method, a sharper upper bound of p(x) can be found by using the monotonicity of u(x) and v(x) established in [12]. We remark that the NBMP in [3] is recovered by letting h → 0^+ in Theorem 1.2.

Under certain restrictions on the parameters, we obtain nonexistence of traveling wave solutions for the Lotka-Volterra system of three competing species ([11]), i.e., nonexistence of solutions of the following problem (N) in R (see Theorem 1.3), as an application of the discrete NBMP. This application makes our discrete NBMP more biologically appealing. Here u(x), v(x), and w(x) represent the densities of the three species u, v, and w, respectively; d_i (i = 2, 3), σ_i, c_ii (i = 1, 2, 3), and c_ij (i, j = 1, 2, 3, i ≠ j) are the diffusion rates, the intrinsic growth rates, the intra-specific competition rates, and the inter-specific competition rates, respectively. Except for the propagation speed θ of the traveling wave, these parameters are all assumed to be positive. From the viewpoint of the study of competitive exclusion ([1, 14, 15, 17, 20, 22]) or competitor-mediated coexistence ([2, 19, 21]), (N) originates from the investigation of the problem in which one exotic species (say, w) invades the ecological system of two native species (say, u and v) that are competing in the absence of w. For the continuous case, when h → 0 and w is absent, (N) becomes a two-species system with the asymptotic behavior at ±∞ given in (5). Under the condition of strong competition (or the bistable condition), such a system admits a unique monotone solution (u(x), v(x)) with u(x) monotonically decreasing and v(x) monotonically increasing in x ([16, 18]). However, the situation changes dramatically for the discrete case h > 0. Under the same condition of strong competition (6) and boundary conditions (5), there exists no traveling wave solution of (N) with w absent and θ ≠ 0 if d_1 and d_2 are sufficiently small ([12]). On the other hand, under the monostable condition (7) (i.e., u is stronger than v and competitive exclusion occurs), (N) with w absent admits a solution (u(x), v(x)) satisfying u′(x) < 0 and v′(x) > 0 if and only if θ ≥ θ_min for some constant threshold θ_min > 0. A similar conclusion can be drawn for the monostable condition (8), under which v is stronger than u and competitive exclusion occurs ([13]). Under either (7) or (8), we see that when w is absent in (N), u(x) and v(x) dominate the neighborhoods of x = −∞ and x = ∞, respectively. This fact leads us to consider the situation in which, when w invades (N) as an exotic species, the wave profile of w(x) remains pulse-like, i.e., w(±∞) = 0 and w(x) > 0 for x ∈ R, if the three species coexist, since w can prevail over u and v only in the region where u and v are not too dominant. Under certain conditions on the parameters, this conjecture turns out to be true for the continuous case h → 0.
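The display for (N) is likewise missing from the extraction. Based on the stated parameters (diffusion rates d_i, growth rates σ_i, competition rates c_ij) and the analogous continuous three-species Lotka-Volterra systems studied in [4, 11], a plausible lattice form is the following sketch; it is our reconstruction, not the paper's display.

```latex
\begin{cases}
\dfrac{u(x+h)+u(x-h)-2u(x)}{h^{2}} + \theta\, u'(x)
  + u(x)\bigl(\sigma_{1} - c_{11}u(x) - c_{12}v(x) - c_{13}w(x)\bigr) = 0,\\[2mm]
d_{2}\,\dfrac{v(x+h)+v(x-h)-2v(x)}{h^{2}} + \theta\, v'(x)
  + v(x)\bigl(\sigma_{2} - c_{21}u(x) - c_{22}v(x) - c_{23}w(x)\bigr) = 0,\\[2mm]
d_{3}\,\dfrac{w(x+h)+w(x-h)-2w(x)}{h^{2}} + \theta\, w'(x)
  + w(x)\bigl(\sigma_{3} - c_{31}u(x) - c_{32}v(x) - c_{33}w(x)\bigr) = 0.
\end{cases}
\tag{N}
```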
As indicated in [8, 9, 4], existence of solutions of (N) has been proved by means of the tanh method as well as numerical experiments. When h → 0, nonexistence of solutions of (N) under certain conditions on the parameters has been established by using the NBMP for (N) with h → 0 ([4, 3]). In this paper, we also apply the discrete NBMP (Theorem 1.2) to show the following nonexistence of solutions to (N) for some h > 0, under hypotheses [H1], [H2], and [H3] on the parameters. Here a_1, a_2, and k are the parameters which appear in (BVP), and Λ*(c_31, c_32, σ_3) is an explicit quantity determined by c_31, c_32, and σ_3.

Remark 2 (Theorem 1.3).
• It follows from the assumptions in Theorem 1.3 that M_1* and M̃_1* are quadratic in σ_3, while M_2* and M̃_2* depend linearly on σ_3. Also, λ*_1 does not depend on σ_3. Thus it is readily seen that [H3] holds for all sufficiently small σ_3.
• According to the assumptions in Theorem 1.3, ū depends on a_2 and v̄ depends on a_1. Moreover, M_1 and M_2 depend on a_1, while M̃_1 and M̃_2 depend on a_2.

Biological interpretation of Theorem 1.3: it is readily seen that [H3] is true if σ_3 is sufficiently small when the other parameters are fixed. Ecologically, this means that when the intrinsic growth rate σ_3 is small enough, the three species u, v, and w cannot coexist in the ecological system (N) under certain parameter regimes.

The remainder of this paper is organized as follows. In Section 2 we collect some preliminary results, including the L² estimates of u′ and v′, which turn out to be crucial in proving Theorem 1.2. Section 3 is devoted to the construction of the N-barrier for (BVP*). In Section 4, we make use of the N-barrier constructed in Section 3 to establish Theorem 1.2. As an application of Theorem 1.2, we show in Section 5 the nonexistence result for three species stated in Theorem 1.3. Finally, the elementary proofs of certain results in Section 2 are given in the Appendix.

2. Preliminaries. Throughout this paper, we use the notations collected in the following definition, which introduces, in particular, (iii) a Gaussian-like kernel ϕ with compact support and (v) the convolution with ϕ. From Definition 2.1 (i) and (ii), the basic relations between the difference quotients are readily seen. For other fundamental properties of the functions given in Definition 2.1 that will be used in the subsequent sections, see Lemma 2.2 below and its proof in the Appendix. Without causing confusion as to what we refer to, we use the same notation for the corresponding discrete and continuous objects, and for convenience we introduce the shorthand used below.

Suppose that (u(x), v(x)) satisfies the boundary conditions of (BVP). Then (17) holds; on the other hand, (18) is obtained by letting k = 1 and replacing a_2 with a_1 in (17). To establish Proposition 1 below, we make use of (17) and (18). Proposition 1 (i) and (ii) are used to prove the L² estimates of u′ and v′ given by Proposition 1 (iii), in which the constant M(a, d, k, θ) carries the numerical factor 3/16.

Proof. It will be seen from the following proof that the estimates for u follow immediately from those for v by letting a_2 = a_1 and d = k = 1 in (23). From Definition 2.1 (i), it is easy to see when (26) holds. Integrating the second equation in (BVP) from z_2 − mh to z_1 and using (27) leads to (28), where we have used (26). With a suitable choice of notation, it then follows from (28) that (30) holds. On the other hand, summation of (26) from m = 0 to m = N − 1 gives a further identity; dividing it by N and using (30), the desired estimate follows from Lemma 2.2 (i). This completes the proof of the second part of Proposition 1 (i).
Proposition 1 (ii) follows from multiplying the second equation in (BVP*) by v and integrating the resulting equation from z_2 to z_1, where Proposition 1 (i) has been used. We notice that it is not necessary to assume z_1 − z_2 ≤ 1 for the estimates in (ii) to hold. With the aid of Proposition 1 (i) and (ii), we prove Proposition 1 (iii). Multiplying the first equation in (BVP*) by v′(x) and integrating the resulting equation from z_2 to z_1 yields (38), where Young's inequality ab ≤ a²/(2ε) + εb²/2 with ε > 0 has been used. Therefore, it suffices to estimate the first term on the right-hand side of (38). Integrating by parts yields (39). Let s̃_1 < z_2 < s̃_2 < s_1 < z_1 < s_2 with s̃_2 − s̃_1 = s_2 − s_1 = 1 and s_1 − s̃_2 < 1. Double integration of (39) with respect to z_1 from s_1 to s_2 and with respect to z_2 from s̃_1 to s̃_2 gives rise to (40), where integration by parts has been used. We then rearrange (40) and use the fact that, by Definition 2.1 (iii), ψ(x − y) = ϕ(x − y)/h² is an even function with compact support in y, which we further exploit in (44). For convenience of notation, let I denote the integral under consideration. Performing integration by parts on I gives (46), where we have used the fact that ψ(h) = 0. Using the assumption v′(x) < 0, the fact that ψ is even, and Proposition 1 (ii), we are led to (47). On the other hand, (48) holds (see Figure 3), where we have used the fact that ψ′(x − y) = −1/h² when x − h < y < x, from Definition 2.1 (iv), and Ψ(x) is given by (49) (see Figure 4).

Figure 3. Domain of the integral ∫_{s_1}^{s_2} ∫_{s̃_1}^{s̃_2} I_2(z_2) dz_2 dz_1 in (48).

Remark 3. Some remarks related to Proposition 1.
• Note that the assumption z_1 − z_2 ≤ 1 is not used in deriving Proposition 1 (ii), while it is assumed in deriving Proposition 1 (iii). In addition, Proposition 1 (i) is a point-wise estimate.
• It follows from (35) that a relation between S_u and S_v holds; moreover, it is easy to see from Figure 5 that the distance L between the two lines can be computed explicitly. Suppose that, for some x_1, x_2 ∈ R, (87) and (88) hold. When we plot (S_u(x), S_v(x)) for x ∈ [x_1, x_2] in the first quadrant of the S_uS_v-plane, we see from Figure 5 that (88) ensures that (87) leads to L = 0. Then the two lines Ŝ_u u + Ŝ_v v = 1 and S̄_u u + S̄_v v = 1 coincide, and therefore this case reduces to that in [3, 6].

Lemma 3.1. F_0 ⊂ F*.

Proof. Suppose that we can find some (u_0, v_0) ≠ (u*, v*), (1, 0), (0, 1) in the first quadrant of the uv-plane such that (u_0, v_0) ∈ F_0 but (u_0, v_0) ∉ F*. It follows that one of two alternatives must occur; however, both cases lead to a contradiction, since F(u_0, v_0) = 0. Therefore, F_0 ⊂ F*. In other words, Lemma 3.1 asserts that the graph of F(u, v) = 0 in the first quadrant of the uv-plane lies between the two lines 1 − u − a_1v = 0 and 1 − a_2u − v = 0. This fact will be used in proving Theorem 1.2.
3,635.8
2020-01-01T00:00:00.000
[ "Mathematics" ]
A Digital Ecosystems Model of Assessment Feedback on Student Learning

The term ecosystem has been used to describe complex interactions between living organisms and the physical world. The principles underlying ecosystems can also be applied to complex human interactions in the digital world. As internet technologies make an increasing contribution to teaching and learning practice in higher education, the principles of digital ecosystems may help us understand how to maximise technology to benefit active, self-regulated learning, especially among groups of learners. Here, feedback on student learning is presented within a conceptual digital ecosystems model of learning. Additionally, we have developed a Web 2.0-based system, called ASSET, which incorporates multimedia and social networking features to deliver assessment feedback within the functionality of the digital ecosystems model. Both the digital ecosystems model and the ASSET system are described and their implications for enhancing feedback on student learning are discussed.

Introduction

The term ecosystem originated from the work of Tansley and Clapham early last century to describe an ecological community of organisms (i.e., plants and animals) living within and sharing a physical environment (Wills, 1997). Ecosystems comprise a complex network of interactions and energy flows that occur between organisms and between organisms and their physical environment. Because ecosystems are dynamically evolving entities, they are subject to periodic disturbances and consequently are in a regular state of flux.

The principles of ecosystems also describe activities involving complex interactions between people (Briscoe, 2009). For instance, the characteristics of biological ecosystems have been modelled in software programs to simulate socio-economic growth in e-business contexts (Dini et al., 2005) and for the entertainment industry (Bennett, 2006), and the principles of ecosystems are well suited to describe online social networking, as this popular activity fosters complex, self-organising interactions among people, and between people and internet technology. Like any biologically-based ecosystem, the digital web correlate has itself evolved, to become Web 2.0. Where Web 1.0 provided a more top-down, hierarchical approach for delivery of content from author to user, the Web 2.0 upgrade puts greater emphasis on user-generated content, data and content sharing, and collaboration. These features allow new ways of interacting with web-based applications so that the web can be used as a platform for generating, re-purposing and consuming content (Franklin & van Harmelen, 2007). Web 2.0 is now so pervasive on the internet that it has become the de facto state of the web, spawning the term 'social web'.

The social aspects of Web 2.0 have allowed websites such as YouTube, Facebook, Twitter and Wikipedia to dominate the online world. These, and similar websites, share characteristics of digital ecosystems in that they: involve complex interactions between self-organising communities; are essentially decentralised in terms of content production; are self-sustaining and populated with user-generated content; and allow dynamic interactions between people and technology. Social media and networking websites are both reactive to change as well as effecting change, as was evidenced by the influence of Twitter during the Arab Spring of 2011 (Stepanova, 2011).
There are opportunity gains to applying the concepts of digital ecosystems to educational contexts (Reyna, 2011), and numerous studies have been made into the use of popular social networking sites to support formal learning in higher education (Novak et al., 2012). We wanted to explore one particular aspect of the learning process, that of feedback on student learning, and see if it could be modelled and supported within a digital ecosystems learning framework.

The terms assessment and feedback encompass a wide variety of theories and practices. In a review of assessment theories, Taras (2012) paid particular attention to 'assessment for learning'. In terms of feedback, there is much interest in students' comprehension and use of feedback, especially for feeding forward to future learning (Sadler, 1989; QAA, 2006; Weaver, 2006; Murtagh & Baker, 2009). We have previously reviewed the literature around assessment feedback and highlighted the importance of feedback as a participatory dialogic process between tutors and students (Orsmond et al., 2011). Current feedback practices do not always offer adequate opportunities for on-going dialogue, making learning less effective and contributing to low student satisfaction with feedback (Bloxham & Campbell, 2010). Universities have attempted to improve student satisfaction levels through policies to provide more rapid and higher quality feedback, but Nicol (2010) suggests this cannot be achieved solely by increasing the amount of feedback provided, as students also seek opportunities to discuss their work. However, the task of supporting tutor-student dialogues around each assessment using traditional, face-to-face, one-to-one approaches can be prohibitively resource-intensive (Bloxham & Campbell, 2010), particularly in terms of increased workload demands on tutors.

Social networking sites may offer low-cost yet effective opportunities for engaging students and tutors in on-going dialogue. As social media sites are built around a participatory approach, they allow greater collaboration through sharing and creation of knowledge in web communities. Combining interaction with multimedia has numerous benefits, as studies have shown that video-based communication made tutors appear more real, present, and familiar to students in ways similar to face-to-face instruction (Borup et al., 2012), thereby offering a far more engaging process than just reading tutors' written comments.

As the power and influence of internet technologies continue to grow, so does the literature base supporting the use of social media in education and in supporting the transition of students to higher education (DeAndrea et al., 2012). Indeed, many universities already use social sites such as Facebook, YouTube and Twitter to support and communicate with students pre- and post-entry. However, Brown (2012) found, perhaps not surprisingly, that academics felt the blanket use of Web 2.0 to promote student learning may not be appropriate at all times and in all contexts. While the use of social networking sites in higher education offers numerous benefits, there are issues around the use of students' social spaces by academics (Grosseck, 2009) and the loss of control of, sometimes sensitive and personal, university-related material when uploaded to public spaces.
As an alternative approach to using existing, open social websites, we developed a bespoke, online resource that brought together the technologies underlying the social web but applied them in an academic context, specifically to support tutor-student and student-student dialogue within a digital ecosystems framework focused on assessment feedback. The system was called ASSET (ASSessment Enhancement Tool), and Crook et al. (2011) reported on its use with 27 staff and 297 students across all disciplines at Reading University and found highly positive student and tutor attitudes to using video-based assessment feedback. Here, we discuss how we have used a digital ecosystems approach to conceptually model feedback on student learning and the subsequent development of ASSET as a web-based system that can support a participatory and dialogic learning process.

A Digital Ecosystems Model of Teaching and Learning

Biological ecosystems are divided into the biotic or 'live' matter component (i.e., living organisms) and the abiotic or 'non-living', physical component (e.g., air, soil, water, sunlight, minerals, etc.). Reyna (2011) proposed a digital teaching and learning ecosystem (DTLE) model in which the 'biotic' components comprised the 'teaching niche' (i.e., lecturers, tutors, learning technologists) and the 'learning niche' (i.e., students). The 'abiotic' component comprised the physical devices used to access content (e.g., computers, laptops, tablets, etc.), the internet connection (e.g., broadband, 3G, 4G, etc.), the e-learning interface and the content, either static (i.e., resources loaded onto the e-learning site) or dynamic (i.e., communication tools, collaborative tools and assessments). The model also lends itself to digital correlates of biological aspects such as biodiversity, symbiotic relationships, conservation, balance and adaptation (Reyna, 2011).

Introduction to the Model

We adapted the DTLE concept to incorporate the principles of self-regulated learning (Nicol & Macfarlane-Dick, 2006) and the GOALS framework (Orsmond et al., 2011) to produce a practical web-based model for practitioners to improve dialogic feedback on student learning from assessments (Figure 1).

Figure 1. A model of dialogic feedback on student learning through assessment within a digital ecosystem architecture. See Section 3.2 for an explanation of the component processes within the model.

Two Domains

The model consists of two domains in which tutor and student interactions occur. The traditional teaching and learning domain (1, in Figure 1) consists of the normal tutor-peer interactions that occur during lectures, seminars, tutorials etc. and students undertaking self-directed learning in the library or at home. The second domain is the digital ecosystem (2) offered by Web 2.0's interactive functionality.
Learning Processes within the Traditional Domain

Here, the assessment task (3) triggers a learning process (4) which involves understanding the general purpose and the requirements of the assignment task, and determining strategies to deliver the task. These processes are supported by the student: grasping the objectives of the assignment; appreciating how the assessment aligns with learning outcomes; referring to exemplars; addressing marking criteria; gaining advice from the tutor's comments in lectures and tutorials; drawing from previous experience and feedback; monitoring and assessing their own progress in preparing the task through self-generated feedback; and discussing the task with peers (dashed circle in Figure 1). All these processes are derived from the principles of self-regulated learning and the GOALS framework (Nicol and Macfarlane-Dick, 2006; Orsmond et al., 2011). The process of self-regulated learning is generated internally as the student monitors their own progress towards the completion of the assessment task. Externally derived feedback from tutors and peers also makes a contribution.

When the assignment is submitted (5), the tutor assesses, marks and provides individual written or verbal feedback (6) to the student. Feedback needs to be clearly understandable, constructive and encouraging whilst indicating the standard of work achieved and providing advice on how to improve future work. All these processes occur in the traditional learning domain of the conceptual model (Figure 1).

Benefits Offered by the Digital Ecosystem Domain

The learning processes outlined in 3.2.2 for the traditional learning domain can be enhanced by the multimedia and interactive capabilities of Web 2.0. Figure 1 inlays a conceptual web-based digital ecosystem (2) accessed through a login (7). Popular social media and video websites are built around a user-friendly design architecture with easy navigability and high accessibility (8), and the digital ecosystem needs to emulate these features to provide a learner-friendly interface. To support the learning process through Web 2.0 social media, the digital ecosystem needs to offer (9): multimedia content in the form of audio and video materials; text-based documents such as web-pages, Word files and PDFs; communication tools for comments, rating and messaging; uploading of learner- as well as tutor-generated materials; the option to assume an alternative identity through a pseudonym; and functionality for students and tutors to interact actively or passively, learning from other people's comments. This two-way communication not only allows tutors to feed back to students but also allows students the opportunity to respond to the feedback (10). In this way, tutors can see if their comments have been properly understood, thus providing them with the opportunity to improve the quality of their feedback if required. Provided that content is not removed, the legacy of materials in the digital ecosystem ensures that learners can view and learn from previous cohorts of students.
Development of a Web-Based Correlate of a Digital Ecosystem to Support Learning from Feedback

As the Web 2.0 functionalities in our digital ecosystems model are standard features within sharing-based websites such as YouTube, Facebook, Twitter, and Wikipedia, we wanted to explore bringing together the salient features of Web 2.0 to build a discrete, web-based, password-controlled, low-cost digital ecosystem as a correlate of our conceptual model, with the following design specifications:

• Support for digital multimedia (audio and video) to enrich the user experience;
• A web interface resembling popular video websites to ease site navigation;
• Unique user accounts to personalise interaction within the website;
• Asynchronous interaction to allow users to choose when to engage;
• A discussion forum for communication between users;
• A facility for users to upload and share learning resources;
• The ability for users to feed back by rating and commenting upon content;
• Password-controlled access to monitor student and tutor engagement.

Based on these specifications, a web-based system called ASSET was produced. ASSET was scripted in Perl, maintained on a server running the Ubuntu JeOS operating system with a backend MySQL database. The choice of programming language, platform and hardware was determined by the authors' previous expertise in developing Web 2.0 applications.

Controlled Access through User Authentication

Access to ASSET was through a login page where users entered their username and password. Users were appointed a role, principally student or tutor. Each role was assigned a set of permissions, determining the types of interaction associated with that role. User roles were defined by the site administrator. Because ASSET was LDAP-compliant, it authenticated logins through the university's database, allowing users to use their university usernames and passwords. ASSET's login page also contained an embedded video file welcoming users to the resource, explaining the site's purpose and informing users how to gain access. Thumbnails of recently watched videos were also provided.

Channel Webpage

Once logged in, users were taken to a channel webpage. Resources within ASSET were arranged into channels, each containing a set of videos supporting a particular module assignment. The primary resources within ASSET were short, 2-3 min videos produced by webcam, saved to disk and uploaded by tutors to support student learning from assessment. An example screenshot is shown in Figure 2 below. Playlist videos could be viewed individually (A), each on its own web-page, which provided information such as a detailed description, tags, and popularity in terms of number of views (C). The webpage also allowed user interaction (8, in Figure 1) including: adding the video to a personal playlist (B); rating the video on a five-point scale for quality and usefulness (D); sharing comments and reading other people's comments (E); and finding related videos (F). The rating and comments facility provides valuable feedback to the video's originator. Because communication was asynchronous, tutors and students did not need to be logged in simultaneously, making engagement with ASSET more flexible.
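As an illustration of the access-control design described above (roles with permission sets, and logins checked against a university directory), here is a minimal sketch in Python rather than ASSET's actual Perl; the role names and permission strings are ours, and the directory check is stubbed where a production system would bind to the LDAP server.

```python
# Minimal sketch of ASSET-style role-based access control (illustrative only).

PERMISSIONS = {
    "tutor":   {"upload_video", "comment", "rate", "view"},
    "student": {"comment", "rate", "view"},
}

def ldap_authenticate(username, password):
    """Stub for the university-directory check; a real deployment would bind
    to the LDAP server with the supplied credentials instead."""
    return bool(username) and bool(password)

def login(username, password, role_db):
    """Authenticate against the directory, then look up the locally assigned role."""
    if not ldap_authenticate(username, password):
        raise PermissionError("invalid credentials")
    return role_db.get(username, "student")   # default role: student

def can(role, action):
    """Check whether a role's permission set includes the requested action."""
    return action in PERMISSIONS.get(role, set())

roles = {"alice": "tutor", "bob": "student"}   # assigned by the site administrator
role = login("alice", "s3cret", roles)
print(can(role, "upload_video"))        # True: tutors may upload feedback videos
print(can("student", "upload_video"))   # False
```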
The Digital Ecosystem Concept Applied to Teaching and Learning

The digital ecosystem model offers a way of conceptualising the complex interactions that occur between groups of people in the online space. As online and digital technologies progressively play a larger part in teaching and learning practice, a digital ecosystems approach provides a model of the learning process, especially where it involves group learning and a complex interplay of relationships: between learner and learning materials, learner and tutor, and learner and peers (Laurillard, 1999). The inter-relationships between the 'animate' (i.e., students and tutors) and 'inanimate' (i.e., learning materials and their physical media) share characteristics with biological ecosystems (Reyna, 2011). We found that the dialogic processes required of self-regulated learning from assessment feedback could be modelled within an ecosystem approach and, furthermore, be easily operationalized through a bespoke web-based software system called ASSET.

Pedagogy of the Assessment Feedback Digital Ecosystem

The pedagogy underpinning the model shown in Figure 1 is that learning from feedback is conceptualised not simply as acquisition of knowledge from tutors' feedback comments but more as a process whereby students actively construct their own knowledge and develop their skills of learning from feedback through a shared process. This is in line with Shepard (2000), who viewed learning in the 21st century as becoming progressively more constructivist but with assessment practice lagging behind and remaining focused on testing. Similarly, Leathwood (2005) saw the process of assessment as needing to be more socially constructed. By interacting with feedback from assessments, students transform their understanding of the subject matter by discussing it with others, internalising meaning and making connections with what they already perceive to know. The tutor's role is therefore more to facilitate learning in an on-going dialogue than to provide feedback long after the assignment has been submitted. Within a digital ecosystems approach, tutors also benefit from seeing reactions and responses from students. In this way, tutors assess the effectiveness of their feedback comments and hone their skills at giving feedback. An on-going and shared dialogue addresses the misalignment that sometimes occurs between students' perceptions and usage of feedback and tutors' intentions for the feedback, as highlighted by Orsmond & Merry (2011).
Contribution of Self-Regulated Learning

Models of self-regulated learning (Nicol & Macfarlane-Dick, 2006; Orsmond et al., 2011) largely describe the learning process of individual students within the traditional learning domain (Figure 1). In self-regulated learning the emphasis is as much on the process as on the product, and it involves individual students internally reflecting, monitoring and assessing progress, and obtaining feedback at each stage during preparation of the assignment. However, tutor support of individualised student self-regulation through traditional face-to-face approaches is prohibitively time-consuming in terms of workload. The Web 2.0 features of digital ecosystems facilitate communication and sharing among communities of learners and tutors. This is important as students naturally collaborate and discuss their assignment tasks with close peers undertaking the same task. Such peer-to-peer learning can be highly beneficial, but the number of peers that any one student can interact with is relatively small when using traditional face-to-face methods. A digital ecosystems approach using a social website up-scales individual self-regulation to group self-regulated learning. Our digital ecosystems model (Figure 1) incorporates a social networking approach allowing wider peer-peer and peer-tutor virtual interactions across the whole cohort of students or between one cohort and another. Involving a more diverse user population has several benefits. For instance, as self-regulated learning requires drawing upon previous experience, someone new to their discipline may have little or no previous personal experience to draw upon. Such learners would benefit from being part of a wider, more diverse learning network in which those with previous experience are able to share their experiences with others from the same student cohort.
Understanding through Discourse

Assessment criteria, tutor comments and exemplars are usually provided to help students achieve their learning goals, but there is evidence of considerable mismatches between tutors' and students' interpretations of assessment criteria and standards (Sadler, 2005; Weaver, 2006). These mismatches are less likely to come to light in the traditional learning domain because of restricted opportunities for discussion with tutors and peers, but in the digital ecosystem students can discuss their perceptions more easily with tutors and peers to clarify their understanding. An extensive body of literature has built up around the need for dialogue in assessment feedback, reviewed by Blair & McGinty (2012), who defined 'feedback dialogues' as 'a collaborative discussion about feedback (between lecturer and student or student and student) which enables shared understandings and subsequently provides opportunities for further development based on the exchange'. Two compelling reasons for the need for dialogue are that students find written feedback difficult to understand and that many lecturers find it difficult to explain what they mean (Chanock, 2000). Blair & McGinty (2012) suggested a conversational approach to tutor feedback whereby the student can expand on their ideas, ask questions, seek clarification and defend or explain their position. Such a discourse is a learning opportunity in itself, and a digital ecosystem such as that offered by ASSET not only promotes dialogue but also allows it, together with a wider range of opinions, to be shared with other students in an efficient way. The facility to assume a pseudonym in ASSET encourages people to focus more closely on ideas, take more risks and participate more than in face-to-face discussions (Selfe & Meyer, 1991). On the other hand, when social cues are filtered through a pseudonym, the resulting anonymity may lead to more self-centred and unregulated behaviour, though Dwyer (2007) concluded that the use of anonymity and pseudonyms was a strategy for protection against negative social interaction. An advantage of a bespoke, educational, password-controlled system is that participants may not know the identities of all the other members but can be reassured that the other contributors are members of the same student group and that the focus is on learning rather than random issues that might inflame unregulated behaviour.

Operationalising the Digital Ecosystem through ASSET

In addition to modelling self-regulated learning from assessment within a digital ecosystems framework, we successfully operationalised the model to produce a web-based system called ASSET with video-sharing and social media capability. Though many of the features of ASSET can be found on popular social media websites, we wanted to bring this functionality together to produce a discrete system that was academic in nature rather than purely a social site. The system was password-controlled because we wanted to restrict access to only those students and tutors involved in the assessment tasks. Login access also allowed us to track user engagement and allowed users to customise the appearance of the website and its content to their own preference.
In a previous study, we populated ASSET with multimedia files, particularly videos supporting assessment feedback across numerous disciplines, involving nearly 30 tutors and 300 students (Crook et al., 2011). In that study, we reported on the benefits of video feedback through ASSET in enhancing stakeholder engagement with the feedback process (Crook et al., 2011). Questionnaires indicated that both staff and students perceived video technology as advantageous, especially in providing generic feedback. Furthermore, video addressed some of the major issues with feedback provision, namely: lack of student engagement with feedback, staff workload, and the timeliness and quality of feedback received by students (Gibbs and Simpson, 2004). We also showed that video positively changed how tutors thought about and developed feedback for their students and, for students, that video enhanced their active engagement with the feedback they received (Crook et al., 2011).

The benefits of the ASSET digital ecosystem for supporting learning from assignments went beyond the beneficial effects of video as a medium for feedback. Rather, the communication tools within ASSET supported group discourse about feedback. If ASSET were used to support online group discussion while preparing an assignment, student learning would benefit, as students would gain a better understanding of the nature of the task and the expected standard for the work (Rust et al., 2003). In ASSET, discussion occurs at the level of individual videos, allowing the message from the video to be teased out for added clarification. Studies on YouTube have shown that the facility for commenting on videos allows a 'video-thread' to develop that is initially triggered by the video but takes various directions as comments are made on other people's comments (Adami, 2009). Likewise, ASSET feedback videos allow discussion to be triggered not solely by the video itself but also by comments on the tutors' feedback and by comments from other learners. The principle of allowing students to comment on tutors' feedback was applied by Gomez & Osborne (2007), who designed an assessment whereby students were assessed not just on the quality of their essay-writing skills but also on the quality of their written responses to the assessors' feedback. In this way, tutors could see whether students understood the feedback given to them and how the students planned to act on it in future assignments (i.e., feed-forward). Gomez & Osborne (2007) found that allowing students to formally respond to assessment feedback was of immense value in honing students' skills in analysing and interpreting feedback. By seeing how students responded to feedback, tutors were able to improve the quality of their comments.

The episodic, varied and discrete nature of many assignment tasks, together with a long wait for feedback, are reasons why feedback on one assignment is often not applied to subsequent work. ASSET's repository feature allows generic videos and conversations about assignments to be saved from one year to the next, thereby serving as legacy resources; this concurs with the finding that "the availability of feedback stored online for future reference augmented by the opportunity for, and expectation of, further dialogue provides the greatest benefit to future learning" (Hepplestone et al., 2009).

The way that videos were arranged and displayed on ASSET was based on popular video-sharing websites.
Video sites such as YouTube are highly user-friendly in terms of ease of accessing videos, how videos are displayed on the webpage, and the ability to search, bookmark, comment upon and find related videos. Feedback to the producer of a video in terms of number of views, ratings and comments provides valuable information to producers and visitors. YouTube would not be as popular as it is if the website showed its video content as a list of download links that the user had to scroll through and then download or play through a separate media player. The milieu of a video-sharing site, together with the sense of community interaction, contributes to a successful digital ecosystem. ASSET's layout and functionality were designed to replicate the look and feel of social multimedia sites for these reasons.

Our evaluation of the ASSET video system in providing and supporting assessment feedback showed a high level of user satisfaction, with 80% of students reporting that they liked this system for obtaining feedback (Crook et al., 2011). This is in contrast with studies of more traditional forms of providing assessment feedback, where students were either dissatisfied with feedback or ignored it in the learning process (Fletcher et al., 2012). There are many reasons for the popularity of ASSET, especially the benefits of video, which some felt came closer to a 'face-to-face' interaction with the tutor than written comments. The benefits of video for feedback are discussed at some length in our previous paper (Crook et al., 2011). Here, we have focused on the social networking features of ASSET that form a major feature of a digital ecosystem. Reports show that 73% of US teens and 47% of US adults use social networking sites (Lenhart et al., 2010), and online social networking is forming an increasing part of higher education. Studies have shown that the use of social media in university courses improves learning outcomes as well as helping students gain social acceptance and adapt to university culture (Yu et al., 2010). ASSET, through its social media features, allows students to upload videos, rate content and make comments, thereby providing a platform for students to put across their views. The ability for students to respond to tutors fits well with Boud's contention (Boud, 2000) that: 'The only way to tell if learning results from feedback is for students to make some kind of response to complete the feedback loop (Sadler, 1989). This is one of the most often forgotten aspects of formative assessment. Unless students are able to use the feedback to produce improved work, through for example, re-doing the same assignment, neither they nor those giving the feedback will know that it has been effective' (Boud, 2000, p. 158). A major feature of our digital ecosystem model and the ASSET system is that it provides a facility for students to respond to tutor feedback, as well as for tutors to see whether students have comprehended the feedback correctly. The participatory and interactive nature of ASSET allows more varied types of assessment to be developed that take advantage of its social media features and that help nurture self-regulated learning and the establishment of online learning communities (Dominguez-Flores & Wang, 2011).
At the time of conducting this project, there were few off-the-shelf web solutions delivering the functionality offered by ASSET, hence the need to develop a bespoke system. As Web 2.0 becomes the de facto state of the web, access to websites with multimedia and interactive capabilities is becoming easier, and the application of digital ecosystems approaches within the curriculum to a wider range of learning activities may therefore eventually become standard practice in online higher education.

Figure 2. A screenshot of a typical channel webpage in ASSET.
Figure 3. Screenshot of the video homepage of ASSET.
CHD1L: a novel oncogene

Comprehensive sequencing efforts have revealed the genomic landscapes of common forms of human cancer, and ~140 driver genes have been identified, but not all of them have been extensively investigated. CHD1L (chromodomain helicase/ATPase DNA binding protein 1-like gene), or ALC1 (amplified in liver cancer 1), is a newly identified oncogene located at Chr1q21 that is amplified in many solid tumors. Functional studies of CHD1L in hepatocellular carcinoma and other tumors strongly suggest that its oncogenic role in tumorigenesis is exerted through unrestrained cell proliferation, accelerated G1/S transition and inhibition of apoptosis. Mechanistically, CHD1L may disrupt the cell death program by binding the apoptotic protein Nur77, or act through activation of the AKT pathway by up-regulating CHD1L target genes (e.g., ARHGEF9, SPOCK1 or TCTP). CHD1L is now considered a novel independent biomarker for progression, prognosis and survival in several solid tumors. The accumulated knowledge about its functions will provide a focus for the search for targeted treatments in specific subtypes of tumors.

Introduction

Cancer is a disease of the genome. International efforts in cancer genomic research have revealed numerous somatic mutations, genomic rearrangements and structural variants in various types of cancer [1]. Approximately three to six genetic events are necessary to transform a normal cell into a cancer cell [2]. On average, two to eight somatic driver mutations occur in a typical tumor; the remainder are passenger mutations that confer no selective growth advantage. Critical genetic changes, acting in combination, reprogram normal cell growth through several core signaling pathways to change cell fate, cell survival and genome maintenance [1]. It has recently been suggested that driver genes be categorized into "mut-driver genes" and "epi-driver genes". Mut-driver genes contain a sufficient number or type of driver gene mutations, while epi-driver genes are expressed aberrantly in tumors but are not frequently mutated; the latter are altered through changes in DNA methylation or chromatin modification that persist as the tumor cell divides [1]. A collection of all causal factors of malignant transformation (also called the cancer initiatome [3]), measured with conventional molecular biological techniques or whole-genome sequencing technologies, will help us to find solutions to conquer cancer. Chromosomal rearrangements during tumorigenesis are common genomic abnormalities, including amplifications, deletions or translocations, that may result from a catastrophic shattering of one or more chromosomes followed by misjoining of the scrambled fragments upon repair, and from kataegis [4]. Amplification of 1q21 is one of the most frequent genetic alterations in many solid tumors, including bladder cancer [5], breast cancer [6], nasopharyngeal carcinoma [7], hepatocellular carcinoma [8], esophageal tumors [9], fibrosarcoma of bone [10] and colorectal carcinoma [11]. Chromodomain helicase/ATPase DNA binding protein 1-like gene (CHD1L) is a recently identified oncogene that is frequently amplified in hepatocellular carcinoma (HCC) [12]. CHD1L exhibits an oncogenic role during malignant transformation. Overexpression of CHD1L protein in tumors is considered a biomarker of poor prognosis and short tumor-free survival time. In this review, we discuss the structure and function of the CHD1L gene and its underlying molecular mechanisms during tumorigenesis.
Finally, we propose strategies for developing a CHD1L inhibitor for potential treatment.

The structure of the CHD1L gene

The human CHD1L gene, also known as ALC1 (amplified in liver cancer 1), was identified by Ma et al. in 2008 [12]. The gene is located at Chr 1q21.1 (genomic coordinates: chr1:146,714,292-146,767,443, (+) strand). The CHD1L gene is 53,152 base pairs long and contains 23 exons. Upstream of CHD1L is the flavin containing monooxygenase 5 (FMO5) gene, and downstream are a prostaglandin reductase pseudogene (LOC100130018) and long intergenic noncoding RNA 624 (LINC00624) (illustrated in Figure 1).

Figure 1. Genomic information of the human CHD1L gene (chromodomain helicase/ATPase DNA binding protein 1-like gene). The genomic locus of CHD1L lies on the long arm (q21) of chromosome 1 and spans 53,152 base pairs. It is downstream of the FMO5 gene (flavin containing monooxygenase 5) and upstream of LOC100130018 (prostaglandin reductase 1 pseudogene) and LINC00624. CHD1L contains 23 exons (green boxes), which may be transcribed into six transcript variants. The full-length transcript (NM_004284.2) and the encoded protein structure are illustrated. The CHD1L protein comprises two helicase domains (yellow), a C-terminal macro domain (blue) and a nuclear localization sequence (NLS, purple). There are two putative phosphorylation sites toward the C-terminus of the protein: phospho-serines at amino acids 636 and 891.

Six alternatively spliced transcript variants have been described for this gene (http://www.ncbi.nlm.nih.gov/nuccore/?term=CHD1L). Interestingly, transcript variant 6 is a noncoding transcript; this is an example of RNAs that can serve as either coding or noncoding molecules depending on cellular context [13]. The proteins encoded by the transcript variants are listed in Table 1. The full-length messenger RNA of CHD1L consists of 2,980 base pairs (3,036 bp in the current database) with a putative open reading frame encoding an 897-aa protein [12]. Protein sequence analysis showed that CHD1L belongs to the SNF2-like family, containing a conserved SNF2_N domain, a helicase superfamily C-terminal domain (HELICc) and a Macro domain [12] (Figure 1). The SNF2_N domain is composed of 280 amino acids, and the sequence homology between the SNF2_N domains of CHD1L and another SNF2-like family member, chromodomain helicase DNA binding protein 1 (CHD1), is 45% identity. The homology of the HELICc domain (107 aa) between CHD1L and CHD1 is 59% identity [12]. Hence the name chromodomain-helicase-DNA-binding protein 1-like, CHD1L. A total of 64 different mutations have been reported in the Catalogue of Somatic Mutations in Cancer (COSMIC) (http://cancer.sanger.ac.uk/cosmic/gene/analysis?ln=CHD1L#dist). These mutations are classified into substitution mutations (nonsense, missense, synonymous), insertion frameshift mutations and others (mutations in intronic regions). Among them, substitution missense mutations account for 56.67%, as shown in the distribution chart. Because substitution missense mutations change the amino acids of the protein, they may affect CHD1L function. We list these mutation locations at the cDNA level (Figure 2). Recently, CHD1L mutations were detected in patients with congenital anomalies of the kidneys and urinary tract (CAKUT) [14]. How these mutations change the biological functions of CHD1L in cancer cells remains to be explored. CHD1L expression has been detected in different tissues using high-density oligonucleotide microarrays; in particular, it is expressed at higher levels in early erythroid cells, CD34 cells, endothelial cells, dendritic cells and some leukemic cells (K562, HL60) [15].
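As a quick consistency check, the quoted gene length follows directly from the genomic coordinates given above. A minimal sketch of the arithmetic (coordinate and mutation figures taken from the text; nothing else is assumed) is shown below.

```python
# Gene length from the quoted coordinates, chr1:146,714,292-146,767,443 (+).
start, end = 146_714_292, 146_767_443
gene_length = end - start + 1        # inclusive of both endpoints
print(gene_length)                   # -> 53152 bp, matching the stated length

# Of the 64 mutations in COSMIC, missense substitutions account for 56.67%:
print(round(0.5667 * 64))            # -> 36 missense mutations
```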
Like other SNF2 chromatin remodeling proteins, CHD1L is localized to the nucleus.

The functions of CHD1L

CHD1L is a recently identified oncogene located at 1q21, a frequently amplified region in hepatocellular carcinoma (HCC) [12]. The biochemical functions of CHD1L were predicted based on its structural similarity to CHD1. The CHD1 family of proteins is characterized by the presence of chromo (chromatin organization modifier) domains and SNF2-related helicase/ATPase domains. CHD1 protein is able to bind DNA and regulate ATP-dependent nucleosome assembly, modification of chromatin structure and nucleosome mobilization through its conserved double chromodomains and SNF2 helicase/ATPase domain [16]. Sequence comparison showed that CHD1L contains an SNF2_N domain and a helicase superfamily domain; therefore, CHD1L has also been hypothesized to play important roles in transcriptional regulation, maintenance of chromosome integrity and DNA repair. Unlike CHD1, however, CHD1L does not contain a chromodomain, which can recognize methylated histone tails. Instead, CHD1L contains a macro domain [12], an adenosine 5′-diphosphate (ADP)-ribose/poly(ADP-ribose) (PAR)-binding element [17]. Thus, CHD1L possesses a PAR-dependent chromatin remodeling activity and facilitates DNA repair reactions within a chromatin context [18]. The ATPase and chromatin remodeling activities of CHD1L are strongly activated by the poly(ADP-ribose) polymerase Parp1 and its substrate NAD+ via transient interaction between the intact macro domain and chromatin-associated proteins, including histones and Parp1 [19]. This nucleosome remodeling activity depends on the formation of a stable CHD1L-PARylated PARP1-nucleosome intermediate [20]. In addition to PAR binding, the C-terminal macro domain (residues 600-897) of CHD1L is able to bind the protein Nur77, a critical member of a p53-independent apoptotic pathway. This binding inhibits the nucleus-to-mitochondria translocation of Nur77, which is the key step of Nur77-mediated apoptosis. Retention of Nur77 in the nucleus by CHD1L prevents the release of cytochrome c from mitochondria and blocks the initiation of apoptosis [21] (Figure 3-(II)). Moreover, the chromatin-remodeling function of CHD1L plays an important role in the earliest cell divisions of mammalian development [22]. CHD1L also appears to function as a transcription factor. A chromatin immunoprecipitation-based cloning strategy revealed that CHD1L has DNA-binding capability and activates the expression of direct target genes relevant to oncogenesis. ARHGEF9 (Rho guanine nucleotide exchange factor 9), which encodes a specific guanine nucleotide exchange factor for the Rho small GTPase Cdc42, was identified as a CHD1L target gene [23]. CHD1L protein also directly binds the promoter region (nt −733 to −1,027) of TCTP (translationally controlled tumor protein) [24] and the promoter region (nt −1,662 to +34) of SPOCK1 (sparc/osteonectin, cwcv, and kazal-like domains proteoglycan 1) [25], subsequently activating transcription of these target genes.
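The promoter intervals quoted above are expressed relative to each gene's transcription start site (TSS). The sketch below shows one way to represent such coordinates; it assumes the usual convention that there is no position 0 (positions run ..., −2, −1, +1, +2, ...), and mapping these intervals to absolute genomic positions would additionally require each gene's TSS location and strand, which the text does not give.

```python
# Representing TSS-relative promoter intervals (illustrative helper, not
# from the cited studies). Negative positions are upstream of the TSS.
from dataclasses import dataclass

@dataclass
class PromoterSite:
    gene: str
    start: int  # TSS-relative coordinate (no position 0 exists)
    end: int

    def length(self) -> int:
        # An interval crossing the TSS skips the nonexistent position 0.
        if self.start < 0 < self.end:
            return self.end - self.start
        return abs(self.end - self.start) + 1

sites = [
    PromoterSite("TCTP", -1027, -733),   # nt -733 to -1,027, fully upstream
    PromoterSite("SPOCK1", -1662, 34),   # nt -1,662 to +34, spans the TSS
]
for s in sites:
    print(s.gene, s.length(), "bp")      # TCTP: 295 bp; SPOCK1: 1696 bp
```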
The transcriptional regulation of these genes by CHD1L could partially explain the mechanisms of CHD1L's oncogenic role in cancer development, which will be discussed in detail later. Collectively, CHD1L interacts with other proteins or regulates target gene expression to execute its biological effects.

Figure 3. The underlying mechanisms of the oncogenic role of CHD1L during tumorigenesis. CHD1L is amplified at the Chr1q21 region and overexpressed in tumors (I). The macro domain of the CHD1L protein interacts with Nur77 and inhibits the latter's nuclear-to-mitochondrial translocation and the subsequent Nur77-mediated caspase activation and cell death (II). CHD1L protein may also directly bind the promoter regions of target genes such as ARHGEF9, TCTP and SPOCK1 and activate their transcription, leading to various biological effects such as cell survival, invasion, metastasis and genome instability (III-V).

CHD1L and Cancer

Amplification of the 1q21 region has been reported in multiple solid tumors [5-7,9-11]. In hepatocellular carcinoma (HCC), amplification of 1q21 is the most frequent genetic alteration, being detected in 58%-78% of primary HCC cases by comparative genomic hybridization [8]. This phenotype leads cancer biologists to wonder why this region is amplified and which genes in it are misregulated. In 2008, Ma et al. [12] first isolated CHD1L as a target gene within the 1q21 amplicon using a chromosome microdissection/hybrid selection approach. Recently, several genes, including CHD1L, within regions of amplification at 1q21-24 in urothelial carcinoma were identified by high-resolution zoom-in oligonucleotide array-CGH analyses [26]. In HCC studies, CHD1L was not only found to be amplified by FISH; its mRNA and protein were also overexpressed in the examined samples [12]. Additionally, CHD1L-transfected cells possessed strong oncogenic ability, with increased colony formation in soft agar and tumorigenicity in nude mice. This phenotype could be effectively suppressed by small interfering RNA against CHD1L [12]. To further investigate the in vivo oncogenic role of CHD1L, a transgenic mouse model ubiquitously expressing CHD1L was generated by Chen et al. [27]. Spontaneous tumor formation was found in 10/41 (24.4%) transgenic mice, including 4 HCCs, whereas no tumors were found in their 39 wild-type littermates. Furthermore, overexpression of CHD1L in hepatocytes promoted tumor susceptibility in CHD1L-transgenic mice [27]. The oncogenic role of CHD1L in tumorigenesis in vitro and in vivo has also been observed in colorectal carcinoma [11]. CHD1L expression in HPV-infected immortalized cervical cells appears to accelerate malignant transformation upon NNK chemical exposure [28]. All of this evidence strongly suggests that CHD1L functions as a driver gene during cancer development. The clinical significance of amplification and overexpression of CHD1L has been evaluated in solid tumors, including HCC [29], ovarian carcinoma [30], colorectal carcinoma [11] and bladder cancer [31]. All these studies demonstrated that CHD1L is a novel biomarker for the prediction of progression, prognosis and survival (Table 2). For example, we found that CHD1L protein expression was significantly higher in bladder cancer than in adjacent noncancerous tissues, and CHD1L overexpression was significantly correlated with histologic grade and tumor stage. Kaplan-Meier survival analysis revealed that the survival time of patients with higher CHD1L expression was significantly shorter than that of patients with lower CHD1L expression [31].
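To make the survival comparison concrete, here is a minimal sketch of the kind of Kaplan-Meier/log-rank analysis reported above, using the Python lifelines library. The follow-up times and censoring indicators below are synthetic placeholders, not data from the cited study [31].

```python
# Sketch of a Kaplan-Meier comparison between CHD1L-high and CHD1L-low groups.
# All numbers are synthetic placeholders, NOT patient data from [31].
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_high = rng.exponential(20, size=40)   # follow-up (months), CHD1L-high
t_low = rng.exponential(45, size=40)    # follow-up (months), CHD1L-low
e_high = rng.random(40) < 0.8           # event indicator (True = death observed)
e_low = rng.random(40) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="CHD1L high")
ax = kmf.plot_survival_function()
kmf.fit(t_low, event_observed=e_low, label="CHD1L low")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.4g}")
```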
The role of CHD1L in the chemotherapy response of patients with HCC has also been investigated [32]. CHD1L could selectively inhibit apoptosis induced by 5-fluorouracil (5-FU) but not by doxorubicin. This chemo-resistance phenotype could be reversed by short hairpin RNAs against CHD1L in in vitro cell culture and in an in vivo mouse model [32]. Taken together, CHD1L is a novel oncogene and could be used as an indicator of poor prognosis and chemo-resistance.

The mechanisms of CHD1L-driven oncogenesis

Driver genes (mut-driver genes or epi-driver genes) confer a selective growth advantage and can be classified into 12 signaling pathways, which regulate three core cellular processes: cell fate, cell survival and genome maintenance [1]. Does the CHD1L gene have these features in tumor cells? Functional studies showed that overexpression of CHD1L promotes cell proliferation, accelerates the G1/S phase transition and inhibits apoptosis [11,12]. In a transgenic mouse model, CHD1L facilitated DNA synthesis and G1/S transition through up-regulation of cyclins A, D1 and E and of CDK2 and CDK4, and down-regulation of Rb, p27(Kip1) and p53 [27]. CHD1L-mediated transcriptional activation of target genes appears to play a crucial role during cancer development. Functional studies in vitro and in vivo showed that CHD1L contributes to tumor cell migration, invasion and metastasis by increasing cell motility and inducing filopodia formation and epithelial-mesenchymal transition (EMT) via ARHGEF9-mediated Cdc42 activation. Therefore, CHD1L-ARHGEF9-Cdc42-EMT may be a novel pathway involved in HCC progression and metastasis [23] (Figure 3-(IV)). As mentioned before, TCTP and SPOCK1 are direct transcriptional targets of CHD1L. CHD1L-mediated overexpression of TCTP was detected in 40.7% of human HCC samples. Clinically, overexpression of TCTP was significantly associated with advanced tumor stage and short overall survival of HCC patients. In multivariate analyses, TCTP was determined to be an independent marker of poor prognostic outcome. Functional studies in vitro and in vivo demonstrated that TCTP has tumorigenic ability and that overexpression of TCTP induced by CHD1L contributes to mitotic defects in tumor cells. Mechanistically, TCTP promotes the ubiquitin-proteasome degradation of Cdc25C during mitotic progression, which causes failure of Cdk1 dephosphorylation on Tyr15 and decreases Cdk1 activity. As a consequence, the sudden drop in Cdk1 activity in mitosis induces a faster mitotic exit and chromosome missegregation, leading to chromosomal instability. Depletion of TCTP can prevent the mitotic defect. Collectively, CHD1L-TCTP-Cdc25C-Cdk1 is a novel molecular pathway that drives the malignant transformation of hepatocytes, with the phenotypes of accelerated mitotic progression and production of aneuploidy [24] (Figure 3-(V)). CHD1L-mediated up-regulation of SPOCK1 can prevent apoptosis of HCC cells by activating the Akt signaling pathway, blocking the release of cytochrome c and the activation of caspase-9 and caspase-3. These effects were abolished by an Akt inhibitor.
Additionally, HCC cells overexpressing SPOCK1 had higher levels of matrix metallopeptidase 9; these cells were more invasive and developed more metastatic nodules in immunodeficient mice than HCC cells with lower SPOCK1 expression [25] (Figure 3-(III)). Taken together, CHD1L activates cell survival pathways and inhibits programmed cell death signaling, resulting in a change of cell fate (malignant transformation) through complex mechanisms.

Targeting CHD1L for potential treatment

Identification of cancer driver genes can lead to better diagnosis and successful targeted therapies. We propose here the development of small molecules that target the oncogenic CHD1L gene and its encoded products for degradation. Such a strategy could disrupt the relevant pathways or binding partners and, in turn, restore normal cellular functions. Compelling experimental data in vitro and in vivo showed that knockdown of CHD1L expression using specific RNA-interference molecules can change cancer cell behavior by inducing apoptosis (Figure 3-(I)). Silencing CHD1L expression in HCC with the corresponding shRNA has great therapeutic potential in HCC treatment, especially for increasing chemo-sensitivity in combination with 5-FU chemotherapy [32]. siRNA-based therapy is emerging as a promising treatment approach, and several siRNA-mediated therapies are in clinical trials [33]. We propose that CHD1L shRNA be investigated for its utility as a targeted therapy on the basis of the current preclinical evidence. Alternatively, because CHD1L contains a macro domain that interacts with multiple partners (e.g., PARP1 and Nur77) to execute its biological effects, targeting macro domains might enhance the effectiveness of radiotherapy and chemotherapy [34]. Possible strategies include: 1) utilizing PARP1 inhibitors; 2) designing small molecules to prevent Nur77 binding, thereby restoring apoptotic pathways (Figure 3-(II)); and 3) inhibiting target genes of CHD1L to inactivate downstream pathways (Figure 3-(III-V)). Ideally, combining these strategies may have additive or synergistic effects. One might employ computer-aided drug design with high-throughput screening of known small-molecule libraries (drug-repositioning discovery) to achieve this goal in an efficient and inexpensive manner.

Conclusion

Since the CHD1L gene was isolated from the Chr1q21 amplicon in tumors, functional studies have pointed to an oncogenic role of CHD1L in solid tumors, particularly in hepatocellular carcinoma. The unique structure of the CHD1L protein, whose macro domain interacts with other protein partners, underlies a variety of biological functions such as DNA damage repair and anti-apoptosis. Moreover, CHD1L-mediated gene activation may confer regulatory functions in malignant transformation. A better understanding of CHD1L genomic functions will likely pave the way for novel therapeutic strategies (siRNA, small molecules) to modulate critical signaling pathways in cancer.
Oxidative stress and Parkinson's disease

Parkinson disease (PD) is a chronic, progressive neurological disease that is associated with a loss of dopaminergic neurons in the substantia nigra pars compacta of the brain. The molecular mechanisms underlying the loss of these neurons remain elusive. Oxidative stress is thought to play an important role in dopaminergic neurotoxicity. Deficiencies in complex I of the respiratory chain account for the majority of the unfavorable neuronal degeneration in PD. Environmental factors such as neurotoxins, pesticides, insecticides and dopamine (DA) itself, together with genetic mutations in PD-associated proteins, contribute to the mitochondrial dysfunction that precedes reactive oxygen species formation. In this mini review, we give an update of the classical pathways involving these mechanisms of neurodegeneration, the biochemical and molecular events that mediate or regulate DA neuronal vulnerability, and the role of PD-related gene products in modulating cellular responses to oxidative stress in the course of the neurodegenerative process.

Introduction

Parkinson's disease (PD) is associated with the selective loss of dopamine (DA) neurons in the substantia nigra pars compacta (SNpc) and of DA levels in the corpus striatum of the nigrostriatal DA pathway in the brain. This loss of DA causes a deregulation of the basal ganglia circuitries that leads to the appearance of motor symptoms such as bradykinesia, resting tremor, rigidity and postural instability, as well as non-motor symptoms such as sleep disturbances, depression and cognitive deficits (Rodriguez-Oroz et al., 2009). The exact etiology of PD remains elusive, and the precise mechanisms that cause this disease remain to be identified (Obeso et al., 2010). At the cellular level, PD is related to excess production of reactive oxygen species (ROS), to alterations in catecholamine metabolism, to modifications of mitochondrial electron transport chain (METC) function, and to enhanced iron deposition in the SNpc. The failure of normal cellular processes that occurs in relation to aging is also believed to contribute to the increased vulnerability of DA neurons (Schapira and Jenner, 2011; Rodriguez et al., 2014). While the familial forms of PD that have been described involve mutations in a number of genes (Kieburtz and Wunderle, 2013; Trinh and Farrer, 2013), mitochondrial dysfunction, neuroinflammation and environmental factors are increasingly appreciated as key determinants of dopaminergic neuronal susceptibility in PD and are a feature of both familial and sporadic forms of the disease (Ryan et al., 2015). In both cases, oxidative stress is thought to be the common underlying mechanism that leads to cellular dysfunction and eventual cell death. ROS are continuously produced in vivo by all body tissues; oxidative stress occurs when there is an imbalance between ROS production and cellular antioxidant activity. Oxidants and superoxide radicals are produced as by-products of oxidative phosphorylation, making mitochondria the main site of ROS generation within the cell. ROS can damage mitochondrial DNA, which can impair the synthesis of METC components and adenosine triphosphate (ATP) production, as well as cause leakage of ROS into the cell's cytoplasm (Brieger et al., 2012).
Although the precise mechanism of ROS generation in PD is still unknown, in this review we summarize the major sources of oxidative stress in DA neurons: DA metabolism, mitochondrial dysfunction and neuroinflammation (Figure 1).

Dopamine Metabolism

Selective degeneration of the DA neurons of the SNpc suggests that DA itself may be a source of oxidative stress (Segura-Aguilar et al., 2014). DA is synthesized from tyrosine by tyrosine hydroxylase (TH) and aromatic amino acid decarboxylase. Following this, DA is stored in synaptic vesicles after uptake by the vesicular monoamine transporter 2 (VMAT2). However, when there is an excess of cytosolic DA outside the synaptic vesicles in damaged neurons, e.g., after L-DOPA treatment, DA is easily metabolized via monoamine oxidase (MAO) or by auto-oxidation to cytotoxic ROS. For example, mishandling of DA in mice with reduced VMAT2 expression was sufficient to cause DA-mediated toxicity and progressive loss of DA neurons (Caudle et al., 2007). This oxidative process alters mitochondrial respiration and induces a change in the permeability transition pores of brain mitochondria (Berman and Hastings, 1999). The auto-oxidation of DA also produces electron-deficient DA quinones or DA semiquinones (Sulzer and Zecca, 2000). Some studies have demonstrated a regulatory role for quinone formation in DA neurons in neurotoxin-induced, L-DOPA-treated PD models and in methamphetamine neurotoxicity (Asanuma et al., 2003; Miyazaki et al., 2006; Ares-Santos et al., 2014). DA quinones can modify a number of PD-related proteins, such as α-synuclein (α-syn), parkin, DJ-1, superoxide dismutase-2 (SOD2) and UCH-L1 (Belluzzi et al., 2012; Girotto et al., 2012; da Silva et al., 2013; Hauser et al., 2013; Toyama et al., 2014; Zhou et al., 2014), and have been shown to cause inactivation of the DA transporter (DAT) and the TH enzyme (Kuhn et al., 1999; Whitehead et al., 2001), as well as mitochondrial dysfunction (Lee et al., 2003), alterations of brain mitochondria (Gluck and Zeevalk, 2004) and dysfunction of complex I activity (Jana et al., 2007, 2011; Van Laar et al., 2009). Additionally, DA quinones can be oxidized to aminochrome, whose redox cycling leads to generation of the superoxide radical and depletion of cellular nicotinamide adenine dinucleotide phosphate (NADPH), and which ultimately forms the neuromelanin known to accumulate in the SNpc of the human brain (Ohtsuka et al., 2013, 2014; Plum et al., 2013). Significant increases in cysteinyl adducts of L-DOPA, DA and DOPAC have been found in the substantia nigra of PD patients, suggesting the cytotoxic nature of DA oxidation (Spencer et al., 1998). Also, DA terminals actively degenerated in proportion to increased levels of DA oxidation following a single injection of DA into the striatum (Rabinovic et al., 2000). Recently, it has been shown that increased uptake of DA through the DAT in mice results in oxidative damage, neuronal loss and motor deficits (Masoud et al., 2015).

Mitochondrial Dysfunction

Mitochondrial dysfunction is closely related to increased ROS formation in PD (Schapira, 2008). Oxidative phosphorylation is the main mechanism providing energy to power neural activity, in which the mitochondria use their structure, enzymes and the energy released by the oxidation of nutrients to form ATP (Hall et al., 2012).
Consequently, this metabolic pathway is the main source of superoxide and hydrogen peroxide, which in turn lead to the propagation of free radicals contributing to the disease. Complex I deficiencies of the respiratory chain account for the majority of the unfavorable neural apoptosis and are considered one of the primary sources of ROS in PD. Complex I inhibition results in enhanced production of ROS, which, in turn, further inhibits complex I. Reduced complex I activity in the SNpc of patients with sporadic PD has been well described (Schapira et al., 1990; Hattori et al., 1991; Hattingen et al., 2009). Additionally, mitochondrial complex I deficiency has been shown in different brain regions (Mizuno et al., 1989; Parker et al., 2008), fibroblasts (Mytilineou et al., 1994), blood platelets (Krige et al., 1992; Blandini et al., 1998), skeletal muscle (Blin et al., 1994) and lymphocytes (Yoshino et al., 1992; Haas et al., 1995) of PD patients. As such, complex I inhibitors like 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) or rotenone show preferential cytotoxicity toward DA neurons (Blesa and Przedborski, 2014). The mechanism by which MPTP crosses the blood-brain barrier and is oxidized to 1-methyl-4-phenylpyridinium (MPP+) is well known (Blesa and Przedborski, 2014). MPP+ accumulates in the mitochondria, where it inhibits complex I of the METC, thereby disrupting the flow of electrons along the METC, which results in decreased ATP production and increased generation of ROS (Mizuno et al., 1987). Like MPTP, rotenone is a mitochondrial complex I inhibitor. Interestingly, rotenone toxicity involves oxidative damage to proteins and Lewy body-like inclusions (Betarbet et al., 2000; Sherer et al., 2003a,b; Greenamyre et al., 2010). The events downstream of complex I inhibition that lead to neuronal cell death by these toxins are still unknown (Schapira, 2010). Other evidence for mitochondrial dysfunction related to oxidative stress and DA cell damage comes from findings that mutations in genes encoding proteins like α-syn, parkin, DJ-1 or PINK1 are linked to familial forms of PD. The convergence of all of these proteins on mitochondrial dynamics uncovers a common function in the mitochondrial stress response that might provide a potential physiological basis for the pathology of PD (Norris et al., 2015; van der Merwe et al., 2015).
Figure 1. Suggested physiological processes related to the pathogenesis of Parkinson's disease (PD). Dysfunction of different pathways, resulting from genetic modifications in PD-related genes, leads to increased oxidative stress. Mutations or altered expression of these proteins result in mitochondrial impairment, oxidative stress and protein misfolding. DA metabolism may also yield reactive dopamine quinones, contributing to increased levels of reactive oxygen species. α-Synuclein becomes modified, accelerating its aggregation. Increased oxidative stress impairs the function of the UPS, which degrades misfolded or damaged proteins, thereby further affecting cell survival. Environmental toxins impair mitochondrial function, increase the generation of free radicals and lead to aggregation of proteins, including α-synuclein. Mitochondrial dysfunction via complex I inhibition adds increased oxidative stress and a decline in ATP production, leading to damage of intracellular components and to cell death. Neuroinflammatory mechanisms may also contribute to the cascade of consequences leading to cell death. In summary, these cellular mechanisms attributed to oxidative stress are implicated in the selective degeneration of dopaminergic neurons.

Overall, these observations show that mutations in these genes affect mitochondrial function and integrity and are associated with increases in oxidative stress (Zuo and Motherwell, 2013). ROS influence proteasomal, lysosomal and mitochondrial function, which, in turn, regulate the cellular response to oxidative damage (Cook et al., 2012). The correct elimination of damaged proteins by effective proteolysis and the synthesis of new and protective proteins are vital to the preservation of brain homeostasis during periods of increased ROS levels. Failure of these processes can lead to protein misfolding (e.g., of α-syn), preventing some of these proteins from being unfolded and degraded by the systems that regulate protein clearance, such as the ubiquitin-proteasome system or autophagy. Indeed, protein misfolding, together with dysfunction of these protein degradation systems, may play a key role in the deleterious events implicated in the neurodegenerative process of PD (Schapira et al., 2014). Parkin and PINK1 are localized in the mitochondria, and their functions are tightly connected to normal mitochondrial functioning (Scarffe et al., 2014). PINK1 accumulates on the outer membrane of damaged mitochondria and recruits Parkin to the dysfunctional mitochondrion (Pickrell and Youle, 2015). In humans with parkin mutations, mitochondrial complex I activity is impaired (Müftüoglu et al., 2004). Overexpression of parkin in mice reduced MPTP-induced DA neuronal cell loss through protection of mitochondria and reduction of α-syn (Bian et al., 2012). On the other hand, parkin KO mice showed decreased amounts of several proteins involved in mitochondrial function and oxidative stress, as well as increases in protein oxidation and lipid peroxidation (Palacino et al., 2004). Also, Drosophila lacking or deficient in parkin exhibit mitochondrial deficits and high vulnerability to oxidative stress (Saini et al., 2010). PINK1 mutations in humans lead to mitochondrial defects and respiratory chain abnormalities (Hoepken et al., 2007; Piccoli et al., 2008). PINK1 KO in human and mouse DA neurons causes decreases in membrane potential and increases in ROS generation (Wood-Kaczmar et al., 2008). The decrease in mitochondrial membrane potential is not due to a proton leak but to respiratory chain defects such as complex I and complex III deficiency (Amo et al., 2011, 2014). Therefore, PINK1 is required for maintaining normal mitochondrial morphology in cultured SNpc DA neurons and exerts its neuroprotective effect by inhibiting ROS formation (Wang et al., 2011). In animal models, studies show that the lack of PINK1 results in abnormal mitochondrial morphology, loss of SNpc DA neurons, reduced complex I activity and enhanced vulnerability to oxidative stress (Clark et al., 2006; Kitada et al., 2007; Gautier et al., 2008). These defects can be ameliorated and rescued by enhanced expression of parkin (Yang et al., 2006; Exner et al., 2007). This last scenario implicates PINK1 and Parkin in a common pathway regulating mitochondrial physiology and cell survival, in which PINK1 appears to function upstream of Parkin, at least as observed in Drosophila disease models (Clark et al., 2006).
α-Syn is a soluble protein that is highly enriched in the presynaptic terminals of neurons. Accumulation of α-syn as intracellular filamentous aggregates is a pathological feature of both sporadic and familial PD (Goedert et al., 2013). Accumulation of wild-type α-syn in DA neurons reduced mitochondrial complex I activity and elevated ROS production, leading to cell death (Martin et al., 2006). It has been shown that α-syn inclusions elevate dendritic mitochondrial oxidative stress in DA neurons (Dryanovski et al., 2013). This mitochondrial dysfunction occurs many months before the occurrence of striatal DA loss (Subramaniam et al., 2014). Nuclear translocation of α-syn increases the susceptibility of MES23.5 cells to oxidative stress (Zhou et al., 2013). Exposure to rotenone or other stimuli that promote ROS formation and mitochondrial alterations correlates well with mutant α-syn phosphorylation at Ser129 (Perfeito et al., 2014). Oxidative stress promotes uptake, accumulation and oligomerization of extracellular α-syn in oligodendrocytes (Pukass and Richter-Landsberg, 2014) and induces post-translational modifications of α-syn that can increase DA toxicity (Xiang et al., 2013). It has been suggested that the NADPH oxidases, which are responsible for ROS generation, could be major players in synucleinopathies (Cristóvão et al., 2012). DJ-1 is another gene reported to cause familial early-onset PD (Puschmann, 2013). DJ-1 binds to subunits of mitochondrial complex I and regulates its activity (Hayashi et al., 2009). Although a portion of DJ-1 is present in the mitochondrial matrix and intermembrane space (Zhang et al., 2005), the translocation of DJ-1 into mitochondria is stimulated by oxidative stress (Canet-Avilés et al., 2004). DJ-1 conjugated to a mitochondrial-targeting sequence has been shown to be more protective against oxidative stress-induced cell death (Junn et al., 2009). DJ-1 KO mice displayed nigrostriatal DA neuron loss (Goldberg et al., 2005). These DJ-1 KO mice also showed altered mitochondrial respiration and morphology, reduced membrane potential and accumulation of defective mitochondria (Irrcher et al., 2010; Krebiehl et al., 2010; Giaime et al., 2012). These defects can be reversed by DJ-1 overexpression, pointing to a specific role of DJ-1 in mitochondrial function (Heo et al., 2012). Recently, DJ-1 was shown to be involved in the oxidative stress response acting on the proteasome, inhibiting proteasome activity and thereby rescuing partially unfolded proteins from degradation (Moscovitz et al., 2015).

Neuroinflammation

Neuronal loss in PD is associated with chronic neuroinflammation, which is controlled primarily by microglia, the major resident immune cells of the brain (Barcia et al., 2003), and to a lesser extent by astrocytes and oligodendrocytes (Perry, 2012). Microglial activation has been found at greater density in the SNpc (Lawson et al., 1990) and in the olfactory bulb of both sporadic and familial PD patients (McGeer et al., 1988; Doorn et al., 2014a,b). Additionally, activated microglia have been found in the SNpc and in the striatum of PD animal models (Pisanu et al., 2014; Stott and Barker, 2014) and have been associated with different PD-associated genes/proteins such as α-syn and LRRK2 (Daher et al., 2014; Sacino et al., 2014). In response to certain environmental toxins and endogenous proteins, microglia can shift to an over-activated state and release ROS, which can cause neurotoxicity (Block et al., 2007).
Accumulating evidence indicates that activation of enzymes such as NADPH oxidase (NOX2) in microglia is neurotoxic, not only through the production of extracellular ROS that damage neighboring neurons but also through the initiation of redox signaling in microglia that amplifies the pro-inflammatory response (Surace and Block, 2012). Neuromelanin, the dark pigment produced from DA oxidation, gives the SNpc its characteristic appearance. High levels of catecholamine metabolism in the midbrain are associated with increased levels of neuromelanin in the same region, and neuromelanin is thought to be one of the molecules responsible for inducing chronic neuroinflammation in PD. Neuromelanin released from dying DA neurons in the SNpc activates microglia, increasing the sensitivity of DA neurons to oxidative stress-mediated cell death (Halliday et al., 2005; Li et al., 2005; Beach et al., 2007; Zhang et al., 2009). The ability of neuromelanin to interact with transition metals, especially iron, and to mediate intracellular oxidative mechanisms has received particular attention. Increased levels of iron result in increased ROS and increased oxidative stress and have been implicated in aging and PD. Iron homeostasis is modulated by angiotensin in DA neurons and microglia, and glial cells play an essential role in the efficient regulation of this balance (Garrido-Gil et al., 2013). DA neurons containing neuromelanin are especially susceptible, indicating a possible role for neuromelanin in MPTP toxicity (Herrero et al., 1993). MPTP induces a glial response, increased levels of inflammatory cytokines and microglial activation in mice (Członkowska et al., 1996; Jackson-Lewis and Smeyne, 2005) and monkeys (Barcia et al., 2004, 2009). Angiotensin is one of the most important inducers of inflammation and oxidative stress, producing ROS through activation of the NADPH oxidase complex. It has been suggested that the inflammatory response in the MPTP model could be mediated by brain angiotensin and microglial NADPH-derived ROS (Joglar et al., 2009). Moreover, oral treatment with NADPH oxidase antagonists mitigates the clinical and pathological features of parkinsonism in the MPTP marmoset model (Philippens et al., 2013). Also, microglia play an important role in mediating rotenone-induced neuronal degeneration through NADPH oxidase (Gao et al., 2003, 2011; Pal et al., 2014). Rotenone increased microglial activation in both the SNpc and striatum of rats (Sherer et al., 2003a), activated microglia via the NF-κB signaling pathway (Gao et al., 2013) and induced neuronal death through microglial phagocytosis of neurons (Emmrich et al., 2013). Parkinson's disease-associated proteins such as α-syn, parkin, LRRK2 and DJ-1 have also been reported to activate microglia (Wilhelmus et al., 2012). Extracellular α-syn released from neuronal cells is an endogenous agonist of Toll-like receptor 2 (TLR2), which activates microglial inflammatory responses (Kim et al., 2013a). Increased numbers of activated microglia and increased levels of TNF-α mRNA and protein were detected in the striatum and in the SNpc of mice over-expressing WT human α-syn (Watson et al., 2012). Moreover, in α-syn KO mice, microglia secreted higher levels of the pro-inflammatory cytokines TNF-α and interleukin-6 (IL-6) compared with WT mice (Austin et al., 2006).
Intracerebral injection of recombinant amyloidogenic or soluble α-syn induces extensive intracellular α-syn inclusion pathology associated with robust gliosis (Sacino et al., 2014). LRRK2 increases pro-inflammatory cytokine release from activated primary microglial cells, resulting in neurotoxicity (Gillardon et al., 2012). In contrast, LRRK2 inhibition attenuates microglial inflammatory responses (Moehle et al., 2012). Additionally, lipopolysaccharide induces LRRK2 up-regulation and microglial activation in mouse brains (Li et al., 2014) while down-regulating Parkin expression via NF-κB (Tran et al., 2011). Abnormal glial function is critical in parkin mutations, increasing vulnerability to inflammation-related nigral degeneration in PD (Frank-Cannon et al., 2008), and its role increases with aging (Solano et al., 2008). DJ-1 expression is up-regulated in reactive astrocytes of PD patients (Bandopadhyay et al., 2004). DJ-1 negatively regulates inflammatory responses of astrocytes and microglia by facilitating the interaction between STAT1 and its phosphatase SHP-1 (Kim et al., 2013b). Astrocyte cultures from DJ-1 KO mice treated with lipopolysaccharide show increased NO production and up-regulation of various pro-inflammatory mediators such as COX-2 and IL-6 (Waak et al., 2009).

Conclusion

The elements that ultimately cause oxidative stress in PD are still unknown. DA metabolism, mitochondrial dysfunction and neuroinflammation all play critical roles in the etiology of this disease. Exposure to environmental factors or mutations in PD-associated genes in patients with either sporadic or familial PD may cause mitochondrial dysfunction that ultimately results in PD. All of these processes share common linkages and influence each other greatly. Limiting the early inflammatory response would further reduce both the elevated oxidative stress and the microglial activation that are key to slowing the death of neurons in the SNpc. Development of potential drugs able to delay the neurodegenerative process is crucial to ameliorating the deleterious effects of oxidative stress in neurodegenerative diseases. In the coming years, neuroprotective therapies will need to target multiple pathological pathways, such as mitochondrial dysfunction and neuroinflammation.
Thermoelectric Generator Using Polyaniline-Coated Sb2Se3/β-Cu2Se Flexible Thermoelectric Films

Herein, Sb2Se3 and β-Cu2Se nanowires are synthesized via hydrothermal reaction and water evaporation-induced self-assembly methods, respectively. The successful syntheses and morphologies of the Sb2Se3 and β-Cu2Se nanowires are confirmed via X-ray powder diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, field emission scanning electron microscopy (FE-SEM) and field emission transmission electron microscopy (FE-TEM). Sb2Se3 has a low electrical conductivity, which limits its application in thermoelectric generators. To improve the electrical conductivity of the Sb2Se3 and β-Cu2Se nanowires, polyaniline (PANI) is coated onto their surfaces, as confirmed via Fourier-transform infrared spectroscopy (FT-IR), FE-TEM and XPS analysis. After PANI coating, the electrical conductivities of the Sb2Se3/β-Cu2Se/PANI composites are increased. The thermoelectric performance of the flexible Sb2Se3/β-Cu2Se/PANI films is then measured, and the 70%-Sb2Se3/30%-β-Cu2Se/PANI film provides the highest power factor of 181.61 μW/m·K² at 473 K. In addition, a thermoelectric generator consisting of five legs of the 70%-Sb2Se3/30%-β-Cu2Se/PANI film is constructed and shown to provide an open-circuit voltage of 7.9 mV and an output power of 80.1 nW at ΔT = 30 K. This study demonstrates that the combination of inorganic thermoelectric materials and flexible polymers can generate power for wearable or portable devices.

Introduction

In recent years, thermoelectric materials have been studied for use in thermoelectric generators (TEGs) and Peltier coolers. In particular, inorganic thermoelectric materials based on Bi2Te3 [1,2], PbTe [3,4], SnSe [5,6], Cu2Se [7,8], skutterudites [9,10] and Zintl phases [11,12] have been studied during the past few decades. Although such inorganic thermoelectric materials exhibit better performance than their organic counterparts, they are difficult to use in wearable or portable devices due to their rigid (inflexible), brittle, heavy, costly and toxic properties. Conversely, organic thermoelectric materials such as the conducting polymers PEDOT:PSS [13-15], polyaniline (PANI) [16-18], polythiophene [19] and polypyrrole [20,21] are lightweight, low-cost, non-toxic and flexible but display low efficiency compared with their inorganic counterparts. To overcome the difficulties of using inorganic or organic thermoelectric materials alone, hybrid inorganic/organic thermoelectric materials have been studied in recent decades. In addition, the electrical conductivities of various hybrid materials have been further improved by various coating methods. For example, C. Meng et al. reported a promising 4-5-fold improvement in the thermoelectric performance of carbon nanotubes by enwrapping the base material in PANI to provide a size-dependent energy-filtering effect [17]. In addition, D. Park et al. reported enhanced thermoelectric properties of an Ag2Se nanowire/polyvinylidene fluoride composite film prepared via a solution-mixing method [22]. These studies show that a combination of inorganic thermoelectric materials and polymers can improve thermoelectric performance. Antimony selenide (Sb2Se3) is a chalcogenide material that is easy to synthesize in various structures such as thin films [23], nanosheets [24] and nanorods/wires [25,26].
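The device figures in the abstract permit a quick back-of-envelope check. The sketch below assumes the five legs act as an ideal series-connected generator (V_oc = N·S_eff·ΔT) and that the reported output power is the matched-load maximum (P_max = V_oc²/4R_int); neither assumption is stated in the text, so the derived values are rough estimates only.

```python
# Back-of-envelope estimates from the reported TEG figures (assumptions above).
n_legs = 5
v_oc = 7.9e-3     # open-circuit voltage, V
p_out = 80.1e-9   # output power, W
dT = 30.0         # temperature difference, K

# Effective Seebeck coefficient per leg, if V_oc = n_legs * S_eff * dT:
s_eff = v_oc / (n_legs * dT)
print(f"S_eff ~ {s_eff * 1e6:.1f} uV/K per leg")   # ~ 52.7 uV/K

# Implied internal resistance, if p_out is the matched-load maximum:
r_int = v_oc**2 / (4 * p_out)
print(f"R_int ~ {r_int:.0f} ohm")                  # ~ 195 ohm
```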
Although Sb2Se3 has a large Seebeck coefficient of 750 µV/K, its extremely low electrical conductivity of 10−4 S/m [26] limits thermoelectric applications. To address this problem, an alloy of Sb2Se3 with copper selenide (Cu2Se) is proposed herein. Also a chalcogenide, Cu2Se complements Sb2Se3 by exhibiting a high electrical conductivity along with a low Seebeck coefficient. Similarly to Sb2Se3, Cu2Se is easy to synthesize in various structures, including films [13], nanoplates [27], and nanowires [28]. In our previous work, β-Cu2Se nanowires were synthesized and combined with Sb2Se3 nanowires into a rigid disk-shaped composite to improve thermoelectric performance [29]. Building on that work, to further improve the electrical conductivity of the Sb2Se3/β-Cu2Se composite, the conducting polymer polyaniline (PANI) (with an electrical conductivity of 360 S/cm [16]) was used to coat the composite surface. Moreover, in our previous work, it was found that the rigid and brittle nature of the resulting inorganic thermoelectric composites makes them difficult to use in preparing flexible films. To address this problem, a flexible thin film based on polyvinylidene fluoride (PVDF) is developed. The flexible thin film with 70% Sb2Se3 and 30% β-Cu2Se nanowires is shown to provide a power factor of 181.61 µW/m·K². This film is then used to fabricate a thermoelectric device with an output voltage of 7.9 mV and an output power of 80.1 nW at a temperature difference of 30 K. These results demonstrate that the Sb2Se3/β-Cu2Se/PANI flexible thin film can be used as a TEG for flexible devices. Synthesis of Sb2Se3 Nanowires The selenium and antimony precursors were reduced using hydrazine monohydrate and converted to Sb2Se3 nanowires using a previously reported method [24]. In detail, potassium antimony tartrate (0.605 g) and sodium selenite (0.51 g) were completely dissolved in distilled water (100 mL) with magnetic stirring. Hydrazine monohydrate (30 mL) was then added, and the mixture was transferred to a Teflon-lined autoclave with tetrahydrofuran (40 mL). The sealed autoclave was heated at 135 °C for 9 h, then the product was centrifuged at 10,000 rpm for 1 h, washed several times with distilled water and ethanol, and dried overnight in a vacuum oven at 70 °C. Synthesis of β-Cu2Se Nanowires The β-Cu2Se nanowires were synthesized via a previously reported method [28]. In detail, a mixture of Se powder (0.45 g) and NaOH (15 g) in distilled water (60 mL) was heated at 90 °C to completely dissolve the Se powder. Then, a 0.5 M Cu(NO3)2 solution (5 mL) was added, and the mixture was heated to dryness in an oven at 140 °C for 12 h. The precipitated product was then collected using hot distilled water, washed several times with hot distilled water and ethanol, and dried overnight in a vacuum oven at 60 °C. Synthesis of Sb2Se3/β-Cu2Se/PANI Composite Films and Fabrication of a TEG Device Sodium dodecylbenzene sulfonate (SDBS, 0.06 g) was dissolved in 1 M HCl solution (10 mL), followed by sonication for 30 min to prepare a homogeneous solution. Using an ice bath to maintain a temperature of 273 K, the aniline monomer (0.02 g) was then added to the solution with steady stirring for 12 h. Then, ammonium persulfate (APS, 0.04 g) was dissolved in HCl solution (5 mL) and slowly added to the prepared solution. The product was then washed three times with distilled water and dried overnight in a vacuum oven at 60 °C to obtain the polyaniline powder.
Using the same polymerization procedure, Sb2Se3/β-Cu2Se/PANI powders were synthesized by adding various ratios of Sb2Se3 and β-Cu2Se to 1 M HCl solution (10 mL), SDBS (0.06 g), and aniline monomer (0.02 g). The obtained Sb2Se3/β-Cu2Se/PANI powders were then added to 1 M ammonia solution (10 mL) and magnetically stirred for 24 h to prepare the emeraldine base PANI product. The products were then washed several times with distilled water and dried overnight in a vacuum oven at 60 °C. To improve the electrical conductivity of the emeraldine base PANI, camphorsulfonic acid (CSA) was used as a dopant in m-cresol (10 mL) at a mole ratio of 1:2 with stirring for 24 h. The obtained powder was dried overnight in a vacuum oven at 60 °C. To synthesize the Sb2Se3/β-Cu2Se/PANI film, the Sb2Se3/β-Cu2Se/PANI powder (0.05 g) and polyvinylidene fluoride (PVDF) (0.025 g) were added to DMF solution (1 mL) at a weight ratio of 2:1 and sonicated for 1 h to generate a homogeneous mixture. The mixture was then drop-casted onto a glass substrate (18 mm × 18 mm) and dried at 60 °C for 24 h. To fabricate a TEG device, the Sb2Se3/β-Cu2Se/PANI film was cut into five strips (18 mm × 6 mm) with a thickness of 100 µm. These were then pasted onto a polyimide film and connected with copper wire. Silver (Ag) adhesive (ELCOAT P-100, CANS) was used to connect the copper wire and film. Characterization The crystalline structures of the prepared nanowire powders and de-doped PANI were examined by X-ray diffraction (XRD; D8 Advance, Bruker AXS, Billerica, MA, USA) under Cu Kα radiation (λ = 0.154056 nm) at 40 kV and 40 mA over a 2θ range of 10−80° at a scan rate of 1° s−1. The binding energies of the synthesized nanowires and Sb2Se3/β-Cu2Se/PANI powders were determined via X-ray photoelectron spectroscopy (XPS; K-Alpha, Thermo Fisher Scientific, Waltham, USA) using a 1486.6 eV Al Kα X-ray source. Fourier-transform infrared (FT-IR) spectroscopy (PerkinElmer Spectrum One) was conducted to confirm the synthesis of PANI. Raman spectra were recorded using a Raman II spectrometer (DXR2xi, Thermo Fisher Scientific, Waltham, USA) with a laser operating at 532 nm and a CCD detector. Field emission scanning electron microscopy (FE-SEM; SIGMA, Oberkochen, Germany) and field emission transmission electron microscopy (FE-TEM; JEM-F200) were used to visualize the shape and microstructures of the Sb2Se3 and Cu2Se nanowire samples. Energy-dispersive X-ray spectroscopy (EDS) was used to obtain the elemental mappings of the nanowire powders (JEM-F200, JEOL Ltd., Akishima, Japan). The thermoelectric properties, namely the Seebeck coefficients and electrical conductivities, were measured in the direction parallel to the pressing direction. A four-probe method involving a homemade device with a pair of thermocouples and a pair of voltmeters was used to quantify the electrical conductivity (σ) between room temperature (RT) and 473 K, and the Seebeck coefficient was calculated from the relationship in Equation (1): S = ΔV/ΔT (1) where ΔV is the change in the thermal electromotive force and ΔT is the temperature difference. In addition, the power factor (PF) was calculated using Equation (2): PF = S²σ (2) The properties of the generator were measured using a homemade device with thermocouples and a multimeter (SENIT, A830L). Crystalline Structure and Morphology of Sb2Se3 Nanowires The Sb2Se3 nanowires with diameters of 100−200 nm and lengths of 1−2 µm were successfully synthesized via the hydrothermal reaction, as shown in Figure 1a.
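To make the measurement arithmetic of Equations (1) and (2) concrete, the short Python sketch below computes the Seebeck coefficient and power factor from a single pair of probe readings. The numerical values are illustrative placeholders, not measured data from this study.

def seebeck_coefficient(delta_v, delta_t):
    # Equation (1): S = dV / dT, in V/K.
    return delta_v / delta_t

def power_factor(s, sigma):
    # Equation (2): PF = S^2 * sigma, in W/(m K^2).
    return s ** 2 * sigma

delta_v = 2.0e-3  # change in thermal EMF, V (assumed)
delta_t = 10.0    # temperature difference, K (assumed)
sigma = 1.2e3     # electrical conductivity, S/m (assumed)

s = seebeck_coefficient(delta_v, delta_t)  # 2.0e-4 V/K = 200 uV/K
pf = power_factor(s, sigma)                # 4.8e-5 W/(m K^2) = 48 uW/(m K^2)
print(f"S = {s * 1e6:.0f} uV/K, PF = {pf * 1e6:.0f} uW/(m K^2)")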
In addition, the FE-TEM images of the Sb2Se3 nanowires are presented in Figure 1b,c. Here, the lattice fringes of the Sb2Se3 nanowires are 0.365 nm in size, which corresponds to the (1 3 0) crystal plane [30]. The EDS mappings of the Sb2Se3 nanowires in Figure 1d,e indicate a stoichiometric atomic ratio of Sb:Se = 42.01:57.99. Moreover, the XRD pattern of the Sb2Se3 nanowires in Figure 1f reveals diffraction peaks yielding lattice constants of a = 1.168 nm, b = 1.172 nm, and c = 0.397 nm, corresponding to the orthorhombic structure (JCPDS #15-0681, a = 1.1633 nm, b = 1.1780 nm, and c = 0.3985 nm). The absence of any second-phase peaks demonstrates the high purity of the nanowires, and the strong intensities of the (h k 0) planes indicate that the Sb2Se3 particles have a 1-dimensional nanowire structure. Further, the XPS spectra of the Sb2Se3 nanowires are presented in Figure 1g−i. Here, the XPS wide-scan spectra exhibit the Sb 3d and Se 3d peaks with no O 1s peak, thus indicating that the Sb2O3 phase was not produced during the synthesis, in agreement with the above interpretation of the XRD pattern. Further, the high-resolution Sb 3d spectrum in Figure 1h exhibits the Sb 3d5/2 and Sb 3d3/2 peaks at binding energies of 529.5 and 538.8 eV, respectively, in agreement with previously reported data [24]. Similarly, the binding energies of Se 3d5/2 and Se 3d3/2 are located at 53.91 and 54.73 eV, in close agreement with previously reported data [25].
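As a quick consistency check on the fringe assignment, the interplanar spacing implied by the fitted lattice constants can be computed with the standard orthorhombic d-spacing relation, 1/d² = h²/a² + k²/b² + l²/c². The sketch below is our own check, not part of the original analysis; it reproduces the (1 3 0) spacing to within about 2% of the observed 0.365 nm fringe.

def d_orthorhombic(h, k, l, a, b, c):
    # 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2 for an orthorhombic cell.
    inv_d2 = (h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2
    return inv_d2 ** -0.5

a, b, c = 1.168, 1.172, 0.397  # lattice constants from the XRD fit, nm
print(f"d(1 3 0) = {d_orthorhombic(1, 3, 0, a, b, c):.3f} nm")  # ~0.371 nm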
The atomic ratio of Sb:Se obtained from the XPS spectra is 41.21:58.79, which is close to that obtained from the EDS mapping data and to the stoichiometric ratio. The Raman spectrum of the Sb2Se3 nanowires is provided in Figure S1 of the Supplementary Information. Here, the peaks located at 118, 188, and 208 cm−1 are consistent with the Sb2Se3 phase [31,32], whereas the peak at 252 cm−1 is consistent with the Sb2O3 phase. As this second phase was not observed in the XRD pattern and XPS spectra, the small Raman peak can be attributed to oxidation of the Sb2Se3 surface to Sb2O3 by the high-density laser (1 mW/µm²) of the Raman II instrument [31]. Crystalline Structure and Morphology of β-Cu2Se Nanowires The β-Cu2Se nanowires with diameters of 100−200 nm and lengths of 1−2 µm were obtained as shown in Figure 2a. Further, the FE-TEM images in Figure 2b,c exhibit 0.201 nm lattice fringes corresponding to the (2 2 0) crystal plane. In addition, the EDS mapping images in Figure 2d,e indicate an atomic ratio of Cu:Se = 68.14:31.86, which is close to the stoichiometric ratio. Meanwhile, the XRD pattern of β-Cu2Se in Figure 2f exhibits diffraction peaks with a lattice constant of a = 0.5692 nm, corresponding to the cubic structure (JCPDS #06-0680, a = 0.5759 nm). The absence of any second-phase peaks in the XRD pattern indicates the high purity of the β-Cu2Se nanowires. For further characterization, the XPS spectra of the β-Cu2Se nanowires are presented in Figure 2g−i. Here, the presence of the Cu+ oxidation state is indicated by the Cu 2p3/2 and Cu 2p1/2 peaks located at binding energies of 933.66 and 954.56 eV, respectively (Figure 2h). The relatively weaker peaks at 932.38 and 952.33 eV indicate the presence of the Cu2+ oxidation state, but other phases such as CuO are not observed. In addition, the Se 3d5/2 and Se 3d3/2 peaks are located at binding energies of 54.01 and 54.94 eV, respectively (Figure 2i). Further, the atomic ratio of Cu:Se is 64.76:35.24, in agreement with the EDS mapping ratio and with the stoichiometric ratio. The Raman spectrum of the β-Cu2Se nanowires is provided in Figure S2. Here, the peak at 260 cm−1 corresponds to previously reported Raman data for β-Cu2Se [33]. In contrast to the Raman spectrum of the Sb2Se3 nanowires (Figure S1), no oxidized peak is observed for the β-Cu2Se nanowires. This confirms the high purity of the synthesized β-Cu2Se nanowires. Confirmation of PANI-Coated Sb2Se3/β-Cu2Se Nanowire Powders The XRD pattern of the de-doped PANI is presented in Figure 3a and is in agreement with previously reported data [34]. In addition, the FT-IR spectrum of the de-doped PANI is presented in Figure 3b. Here, the peaks at 1587 and 1490 cm−1 are attributed to the C=C stretching vibrations of the quinoid and benzenoid rings, respectively; the peaks at 1300 and 1240 cm−1 indicate the C−N stretching of the benzenoid ring, and the peak at 1151 cm−1 indicates the N=quinoid-ring=N vibrational mode [35]. Taken together, the FT-IR and XRD results demonstrate the successful synthesis of the de-doped PANI with the emeraldine structure incorporating both the benzenoid and quinoid rings. By comparison, the FT-IR spectrum of the composite Sb2Se3/β-Cu2Se/PANI material in Figure 4a reveals the appearance of peaks at 1591, 1490, 1300, 1240, and 1151 cm−1 due to the PANI coated on the surface of the Sb2Se3 and β-Cu2Se nanowires.
Further, the FE-SEM images of the composite materials with various ratios of Sb2Se3 and β-Cu2Se nanowires in Figure S3 reveal the change in morphology and increased roughness of the Sb2Se3/β-Cu2Se/PANI nanowire surface. In addition, the FE-TEM images in Figure 4b−f indicate that the PANI is coated on the surface of the Sb2Se3/β-Cu2Se nanowires with a uniform thickness of 4−5 nm. For further characterization, the N 1s and C 1s peaks in the XPS spectra of the 70%-Sb2Se3/30%-β-Cu2Se/PANI composites are presented in Figure 4g,h. Here, the peaks at 398.15, 400.04, and 402.25 eV are respectively attributed to the −N= bonds, the −NH− bonds, and the N+ species of the emeraldine base PANI [36]. Meanwhile, the binding energies of 284.38 and 286.31 eV are attributed to the C−C/C−H bonds and the C−N bonds, respectively, of the emeraldine base PANI [36]. Taken together, the FE-TEM and XPS results demonstrate the successful formation of PANI on the surface of the nanowires. Thermoelectric Properties of Sb2Se3/β-Cu2Se/PANI Flexible Films and TEG Properties The Sb2Se3/β-Cu2Se/PANI films were synthesized from the Sb2Se3/β-Cu2Se/PANI powders with various ratios of Sb2Se3 and β-Cu2Se as described in Section 2.2.3. The flexible properties of the obtained films are indicated in Figure S4, while the Seebeck coefficients (S) and electrical conductivities (σ) of the Sb2Se3/β-Cu2Se nanowires and Sb2Se3/β-Cu2Se/PANI films are indicated in Figure 5a,b. Here, the pure Sb2Se3 and β-Cu2Se exhibit Seebeck coefficients of 400 and 3−4 µV/K, respectively, while the Seebeck coefficient of the Sb2Se3/β-Cu2Se composite decreases with an increasing proportion of β-Cu2Se. Meanwhile, the electrical conductivity of the Sb2Se3/β-Cu2Se composites increases with an increasing proportion of β-Cu2Se due to the high electrical conductivity of β-Cu2Se (45.3 S/cm). Compared to the non-coated Sb2Se3/β-Cu2Se composite, the Sb2Se3/β-Cu2Se/PANI film exhibits a lower Seebeck coefficient and a higher electrical conductivity due to the high electrical conductivity of PANI (i.e., 360 S/cm) [16]. These trends in the Seebeck coefficient and electrical conductivity can be explained by a parallel-connected model, as described in the Supporting Information. Although complicated interfacial interactions can distort the electrical conductivity curves and thus lead to inaccuracy, the parallel-connected model can be considered a useful guideline [37-39]. The results of the parallel-connected model are indicated by the dashed lines in Figure 5a,b, while the Seebeck coefficients, electrical conductivities, and power factors of the Sb2Se3/β-Cu2Se nanowires and Sb2Se3/β-Cu2Se/PANI films over the temperature range of room temperature to 473 K are presented in Figure 5c−e. In all cases, the Seebeck coefficients and electrical conductivities increase with increasing temperature. In addition, the maximum power factor (PF), as calculated using Equation (2), is 181.61 µW/m·K² for the 70%-Sb2Se3/30%-β-Cu2Se/PANI film.
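The exact formulation of the parallel-connected model is given in the paper's Supporting Information, which is not reproduced here; a commonly used form treats the two phases as conductors in parallel, weighting each phase by its volume fraction. The sketch below implements that assumed form with illustrative property values taken from the text.

def parallel_model(fractions, sigmas, seebecks):
    # Assumed parallel-connected form:
    #   sigma_c = sum(f_i * sigma_i)
    #   S_c = sum(f_i * sigma_i * S_i) / sum(f_i * sigma_i)
    weighted = [f * s for f, s in zip(fractions, sigmas)]
    sigma_c = sum(weighted)
    s_c = sum(w * sb for w, sb in zip(weighted, seebecks)) / sigma_c
    return sigma_c, s_c

f = [0.7, 0.3]              # volume fractions: 70% Sb2Se3, 30% beta-Cu2Se
sigma = [1.0e-4, 4.53e3]    # S/m: Sb2Se3 ~1e-4 S/m; beta-Cu2Se 45.3 S/cm
seebeck = [400e-6, 3.5e-6]  # V/K: ~400 uV/K and ~3-4 uV/K

sigma_c, s_c = parallel_model(f, sigma, seebeck)
print(f"sigma_c = {sigma_c:.0f} S/m, S_c = {s_c * 1e6:.1f} uV/K")
# The conductive phase dominates (sigma_c ~ 1359 S/m, S_c ~ 3.5 uV/K),
# consistent with the observed trend: more Cu2Se raises sigma and lowers S.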
The flexible film of thermoelectric materials can be used as a TEG for wearable or portable devices. Hence, the highest-performing material in the present study, namely the 70%-Sb2Se3/30%-β-Cu2Se/PANI film, was used to fabricate a TEG, as shown in Figure 6a. The open-circuit voltage (Voc) and output power of the device are indicated in Figure 6b,c. The open-circuit voltage of the fabricated TEG was measured under a temperature difference of ΔT = 30 K and reached a value of 7.9 mV. The theoretical value of the open-circuit voltage was calculated using Equation (3): Voc = N·S·ΔT (3) where N is the number of TEG legs [39]. The output power (P) was calculated using Equation (4): P = I²·Rload = (Voc/(Rin + Rload))²·Rload (4) where I, Rload, and Rin are respectively the output current, the load resistance, and the internal resistance of the homemade TEG [37]. The Rin and Rload values were both 770 Ω, and the calculated maximum output power was 80.1 nW at ΔT = 30 K. Conclusions Two nanowire materials, namely Sb2Se3 and β-Cu2Se, were synthesized via hydrothermal reaction and water evaporation-induced self-assembly methods, respectively. The conducting polymer PANI was then formed on the Sb2Se3 and β-Cu2Se nanowire surfaces in order to improve their electrical conductivities. Composite PANI-coated materials with various ratios of Sb2Se3 and β-Cu2Se were produced, and their thermoelectric properties were measured. The 70%-Sb2Se3/30%-β-Cu2Se/PANI film was shown to provide the highest power factor of 181.61 µW/m·K² at 473 K. In addition, a thermoelectric generator was fabricated from five legs of the 70%-Sb2Se3/30%-β-Cu2Se/PANI film and was found to provide an open-circuit voltage of 7.9 mV and an output power of 80.1 nW at ΔT = 30 K. This study demonstrates that the fabricated flexible TEG, which combines the high performance of inorganic thermoelectric materials with the flexibility of a polymer, has potential application as a next-generation power generator for wearable or portable devices. In addition, this study may also inform other electronic devices requiring compact power generators.
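For readers who want to reproduce the device-level arithmetic, the sketch below evaluates Equation (4) over a range of load resistances using the reported Voc = 7.9 mV and Rin = 770 Ω. Note that under the textbook matched-load convention assumed here, the maximum is Voc²/(4·Rin) ≈ 20 nW, whereas the reported 80.1 nW is closer to Voc²/Rin, so the paper may define or measure its maximum output power under a different convention.

def output_power(v_oc, r_in, r_load):
    # Equation (4): P = I^2 * R_load with I = Voc / (R_in + R_load).
    i = v_oc / (r_in + r_load)
    return i * i * r_load

v_oc = 7.9e-3  # open-circuit voltage at dT = 30 K, V (reported)
r_in = 770.0   # internal resistance, ohms (reported)

for r_load in (100.0, 385.0, 770.0, 1540.0, 3080.0):
    p = output_power(v_oc, r_in, r_load)
    print(f"R_load = {r_load:7.1f} ohm -> P = {p * 1e9:5.2f} nW")
# Under this convention, power peaks at R_load == R_in (here ~20 nW).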
6,500.2
2021-05-01T00:00:00.000
[ "Materials Science" ]
Design and Implementation of an Information System to Support Quality Management in Small and Medium Scale Enterprises The success of a quality management system in ISO 9001:2008-certified small and medium scale enterprises (SMEs) is achieved through effective practice of ISO 9001:2008 procedures at all levels. This study focuses on the advantages that ISO 9001:2008 contributes to small and medium scale enterprises. When ISO 9001:2008-certified SMEs are assisted by an information system (DSS-EII) to create awareness and to govern processes, quality improves throughout the enterprise. In this study, an information system was developed to automate the documentation of ISO forms and records, to identify bottleneck regions, and to provide solutions to overcome predicted quality issues. It was found that the information system helped the enterprise understand and achieve the maximum benefits of ISO 9001:2008. The information system provided appropriate solutions to quality problems via a Back Propagation Neural network (BPN). The developed information system was integrated with the enterprise resource planning software used in the enterprise. Inputs were given to the information system from daily records and the enterprise resource planning database. Data were transferred to all the necessary ISO forms and records with the necessary computation. Some data updates in ISO forms were accepted only when the actual value was greater than or equal to a predefined target value. The developed information system performs functions such as documentation automation and provides solutions to be followed to resolve an issue. This research work introduces a new system to facilitate QMS practices in SMEs and provides relevant solutions for frequently occurring problems using a back propagation neural network. The proposed methodology assures integrated assistance, employee training, documentation automation, and a comfortable working environment. INTRODUCTION The economic development of a country includes the contribution of SMEs, and they should be ready to face the rigours of international competition, as suggested by Charoenrat et al. (2013). Day by day, SMEs schedule and execute production plans and adhere to delivery dates. Manufacturing components is given priority, and operators are motivated to work for it. Unlike in large scale industries, updating Quality Management System (QMS) documents and following the procedures detailed in the ISO manual are given the least preference in SMEs. Floyde et al. (2013) indicated that organising and managing knowledge within an organisation is an important factor for success. The Enterprise Resources Planning (ERP) software automates documentation and provides reports related to the general records of each department. SMEs document the QMS forms and records manually. Charoenrat et al.
(2013) proposed that workers' skills can be enhanced only by educating and training them in working skills. In an enterprise, to verify the effectiveness of manual documentation practice, the management review report has to be analysed. The top management team of an SME mainly analyses the performance of the enterprise from the information present in the management review report. To study the problems and improve the efficiency of the management system, an Indian medium scale plastic manufacturing industry, Suba Plastics Private Limited, was selected, and its review analysis form was taken as the initial step for this study. Data on daily delivery adherence for a period of three months (January-March 2011) showed an 18% shortfall compared to the fixed target of the enterprise. The actual daily delivery adherence ranged from only 80.6 to 82.3% over the three-month period. The deviation was mainly due to shortages in quantities for despatch. An increased rejection rate and the non-availability of raw materials during the end period of production caused shortages in the delivery quantities. Possible reasons for the situation were identified using a scheduled questionnaire investigation. Kivijarvi (1997) insisted on a much closer relationship between management theory and DSS development for good results. Many root causes were recorded and a corrective action plan was proposed. Still, a question remained about the accuracy of the calculated percentage of actual delivery adherence. To answer this, top management agreed to automate documentation and to visualize its benefits. Heras and Arana (2010) compared the effectiveness of two models generated for the same purpose in an industry and concluded that the benefits of applying both models are similar. In any enterprise or industry, it is important to adhere to the target date fixed for delivering end products, and this should not be deviated from for any order. In the selected Indian plastic components manufacturing enterprise, however, the target was missed by 18% for the year 2011. An additional period of three months was needed to complete pending orders, which increased the financial requirement and consumed more time. This was the major problem identified in the enterprise, and the question to be answered here is how to find optimum solutions to overcome this situation in future orders without any additional resources, as well as to reduce target deviations and the additional investment needed to resolve them. Further evaluation of the cost analysis revealed that an additional 10% of the total production cost had to be spent to produce the remaining 18% of the components. For each order, a deviation occurred and the overall cost involved also increased. Additional problems, such as employees switching over to other concerns due to the increased workload, were also encountered. Hiring and training new employees was again a reason for delays in production. The responsibility and workload had to be shared by the available employees until new appointments were made. This led to stress among employees and resulted in uncomfortable working conditions. Based on the identified problems, the objective of this study was formulated: to design and develop an information system supporting ISO procedures and documentation, to implement the developed system, and to analyse the results.
METHODOLOGY The design proposed by Hernad and Gaya (2013) for the documentation of ISO forms and records was followed, which guarantees continuous improvement of the enterprise. The following steps were used in solving the identified problems: • Identification of ISO records and interrelationship evaluation • Data entry methods and ERP • Automation of documentation • Implementation • Analysis and comparison. ISO RECORDS IDENTIFICATION AND INTERRELATIONSHIP EVALUATION In the selected Indian plastic components manufacturing enterprise, Suba Plastics Private Limited, all ISO documents were collected. A total of 165 documents that were available in all departments of the enterprise are listed in Table 1. Records were categorised into groups according to their interrelationships with each other. Production holds the following documents: process sheet, component file, packing and labelling standard, work instructions, monthly production plan, daily moulding report and mould pick up sheet, mould drop sheet, mould correction report, tool history card, machine utilization chart, and rejection analysis register. The first category was based on the frequency of updating: daily moulding report and rejection analysis report data have to be entered on a daily basis. All data related to each machine should be entered. A total of 4 modules were available in the enterprise, and each module was equipped with a group of moulding machines. Out of the 17 available documents in production, daily moulding report data was used to compute four other reports, as shown in Fig. 1 (Illustration of the interrelationship between documents, a sample). Rejection quantity data from the daily moulding report assisted in finding the reason for rejections in the internal rejection analysis report, and also the quantity of production data in the department objective monitoring chart for production. Similarly, it calculates the raw material rejection quantity from the cost of poor quality work sheet. Using data from the daily moulding report, it computes the plan against actual percentage in the management information system form. Based on the above-mentioned interrelationships, all the records were charted, and a detailed mapping of the interrelationships between documents was created in a square matrix. Data entry methods and ERP: Data was recorded manually in each department. The operator in charge of a particular record filled in the corresponding observed data. Most of the available records were filled in on paper and transferred to Excel format. Data relevant to daily processes was also recorded in the available ERP software. Bose et al. (2008) integrated ERP with supply chain management to improve operations and to foster a paperless environment. Chien et al. (2007) explained that ERP systems lead to satisfactory organizational outcomes only through effective teamwork. Examination of a few reports automatically generated by the ERP indicated that the reports were in the standard prescribed ERP formats and did not support any ISO format. The three methods of data entry carried out were: manual entry in printed documents, Excel format, and through the ERP. Among the three, the ERP database supported ISO documents by issuing available data directly. A link was established to enable data transfer from the ERP database to the ISO documents. Bose et al. (2008) further confirmed that ERP, when integrated with other systems, leads to several tangible benefits. All individual Excel document entries could be interconnected via the local area network, and a backup of the electronic data was stored periodically. Automation of documentation: Erdem and Goecen (2012) developed a model and integrated it with a decision support system for enabling a fast decision-making environment (Fig. 2).
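To illustrate the square-matrix mapping of document interrelationships described above, the small Python sketch below builds an adjacency matrix in which entry [i][j] = 1 means data from document i feeds report j. The document names follow the text, but the marked dependencies are illustrative assumptions, not the enterprise's actual matrix.

docs = [
    "daily moulding report",
    "rejection analysis report",
    "dept objective monitoring chart",
    "management information system form",
    "cost of poor quality work sheet",
]
n = len(docs)
matrix = [[0] * n for _ in range(n)]

def link(src, dst):
    # Mark that data from document `src` is used to compute report `dst`.
    matrix[docs.index(src)][docs.index(dst)] = 1

link("daily moulding report", "rejection analysis report")
link("daily moulding report", "dept objective monitoring chart")
link("daily moulding report", "management information system form")
link("cost of poor quality work sheet", "rejection analysis report")

for name, row in zip(docs, matrix):
    print(f"{name:36s} {row}")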
ISO forms and records were segregated category-wise in a MySQL database according to their interrelationships. Using Hypertext Preprocessor (PHP) as the front-end application and Visual Basic for computation as the back end, an automation system named Decision Support System-ERP Integrated Information (DSS-EII) was developed. DSS-EII holds features such as data entry for ISO forms, worldwide online access to data, process support, error-free speedy calculations, report generation, warnings through generated reports, and solution generation. Petroni (2000) insisted on the application of a supporting model for an already certified company to perform still better. Each module was categorised and facilitates the provision for comparing present target and actual values, with enhanced provisions to indicate the reason for any deviation. Analytical formulas were set to calculate the necessary values for the reports. The Production Plan Vs Actual percentage (Document number: SP/PRD/R11) calculation needs the planned quantity and the actual quantity produced. The machine utilization percentage indicates the required available machine hours and actual machine utilization details: Production Vs Actual percentage = (Quantity Produced/Scheduled Quantity) × 100 (1) Machine Utilization percentage = (Machine running hours/Available hours) × 100 (2) Linking the reports with a set of formulas and the database resulted in an accurate understanding of the hourly production status of the enterprise. The next task considered was developing solutions to frequently occurring problems. A detailed study was made, and problems were categorised according to their occurrence into three types: Category I comprised problems which occur frequently, Category II included problems that occur moderately, and Category III included problems that occur very rarely. This study mainly focused on solving frequently occurring problems by providing appropriate solutions. These problems contributed more in terms of the cost of poor quality. Some of the poor-quality issues observed include: cavity damage, R/O and F/O problem, shining problem, offset problem and sleeve flash, rib flash found, R/O problem and dimension problem, offset problem, ECN correction, dimension and runner gate damage, runner gate damage, dimension problem, core pin problem, cavity problem, and flash problem. Solutions were different for each problem based on the reason for its occurrence. Hence, a detailed list of solutions for each problem was recorded, which consumed considerable time. A back propagation model was therefore developed to automatically generate solutions for each problem. Shell (2003) explained the effective functioning of back propagation in the output time prediction model developed by Chen et al. (2010), who used a multi-layer perceptron with an error back propagation method in deriving solutions for engineering models. The BPN model computes local gradients using error terms multiplied by derivatives of the activation function. The error at the jth neuron of the output layer was calculated using the following formulae, as per Sengupta (2009): e_j(n) = d_j(n) − y_j(n) where e_j(n) is the error at the jth neuron of the output layer, d_j(n) is the desired output, and y_j(n) is the actual output. The instantaneous value of the error energy is calculated by E(n) = (1/2) Σ_j e_j²(n) and the average value of the squared error energy over N training patterns is calculated by E_av = (1/N) Σ_{n=1}^{N} E(n). The partial derivative is calculated using the chain rule: ∂E(n)/∂w_ji(n) = [∂E(n)/∂e_j(n)] [∂e_j(n)/∂y_j(n)] [∂y_j(n)/∂v_j(n)] [∂v_j(n)/∂w_ji(n)] where v_j(n) is the induced local field (weighted sum of inputs) of neuron j. Chen et al. (2010) used the gradient steepest-descent method in BPN to adjust connection weights and reduce the inaccuracy of neural networks. For a three-layer architecture, when the jth neuron belongs to the output layer, the local gradient is computed by multiplying the error with the derivative of the activation function: δ_j(n) = e_j(n) φ′_j(v_j(n)) The change of weight is applied to the current weight so that Δw_ji(n) = η δ_j(n) y_i(n) and w_ji(n+1) = w_ji(n) + Δw_ji(n), where η is the learning rate and y_i(n) is the output of neuron i in the preceding layer. The BPN model is trained for the desired output, with training provided for each pair and set of input patterns. Gunasekaran et al. (2003) noted that BPN, being the most widely used neural network, needs distinct training to be imbibed for usage. Yin et al. (2011) also assigned random weights at the beginning stage of training.
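As a compact, runnable illustration of the update rules above, the following NumPy sketch trains a small three-layer network with sigmoid activations on a toy problem. The architecture, learning rate, and data are our assumptions for illustration only; they do not reproduce the DSS-EII model.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy patterns: 2 inputs -> 1 output (XOR), a classic 3-layer test case.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)  # desired outputs d_j(n)

W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights (random init)
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
eta = 0.5                                # learning rate (assumed)

for _ in range(20000):
    H = sigmoid(X @ W1)   # hidden activations y_i(n)
    Y = sigmoid(H @ W2)   # actual outputs y_j(n)
    E = D - Y             # e_j(n) = d_j(n) - y_j(n)
    # Local gradients: error times the derivative of the sigmoid.
    delta_out = E * Y * (1.0 - Y)                   # delta_j(n), output layer
    delta_hid = (delta_out @ W2.T) * H * (1.0 - H)  # back-propagated gradients
    # Weight changes: dw = eta * delta * input activation.
    W2 += eta * H.T @ delta_out
    W1 += eta * X.T @ delta_hid

print(np.round(Y.ravel(), 3))  # converges towards [0, 1, 1, 0]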
IMPLEMENTATION The developed DSS-EII model was demonstrated to the management and its benefits were explained. SME managers agree that the best benefits are assured in an enterprise through ISO procedures; Urbonavicius (2005) expressed the same opinion. DSS-EII was installed on the top management members' computers and global access was enabled. All computers in the enterprise shared data via the local area network. Training was conducted for all data entry operators on the methods of providing input, maintaining documents, report generation, inputting problem statements, viewing appropriate solutions, identifying bottleneck areas using graphic models of interpreted data, and decision making. Sample data were given, and hands-on training was conducted to input the data and verify user efficiency. All employees were instructed about providing correct data to the data entry operators. The importance of the developed system and its benefits were highlighted to employees at all levels. Initially, the data entered were verified and then stored in the database. The initial sample reports generated were compared with manual calculations. All essential data stored in the ERP was linked to the developed system to avoid multiple recordings of the same data. Batch files were generated to fetch data from the ERP to DSS-EII. Fakoya and Poll (2013) showed that ERP integration leads to better results, as indicated by a firm's financial situation. ERP data was stored in an accessible format and routed to the DSS-EII database. The ERP provided the required data and supported the improvement of DSS-EII (Decision Support System-ERP Integrated Information). The developed system provides a specification page to input data related to the number of machines and the number of components to be manufactured. Figure 3 shows the page in which the total numbers of machines and components to be manufactured can be entered. Figure 4 shows the module-wise assigned targets for each machine. Similarly, each machine operator provides data to the data entry operator or enters it from the user login. Figure 5 shows the module-wise actual components produced by each machine. This page is accessible by each operator from the user login, enabling the operator to update the quantity produced by the respective machine each hour. Figure 6 indicates the efficiency of each module and the overall efficiency of the enterprise; this calculation is based on the actual components produced and the fixed target. The user can enter the type of problem in the "Enter the identified bottleneck" field, which assists the user in obtaining automatic solutions for the problem (Fig. 7). ANALYSIS AND COMPARISON Information on the quantity of rejections before implementing DSS-EII was collected. Table 2 shows the rejection percentages from April 2010 to January 2011. Both the quantities rejected inside the enterprise and the rejections of supplied quantities were observed to be high, as per the records (Fig. 8 and Table 3). DSS-EII streamlined all records and indicated hourly deviations from target values. It suggested solutions for the problems identified. The management viewed the status from their computer systems, and follow-up actions were generated. Altogether, the developed system brought the situation within the enterprise under control to a favourable level. As a result, after a period of three months, the rejection rate was less than 0.01% and timely delivery of the goods produced was achieved.
CONCLUSION Without disturbing the existing production system, a supporting system was implemented to govern the processes and to generate reports. This resulted in an improvement of enterprise performance. Peres and Stumpo (2000) mentioned that in some countries SMEs reached productivity equivalent to that of large enterprises by following a new economic model. With minimum cost and ease of training, the medium scale enterprise's profit level was increased with minimum rejection quantities. Kengpol and Boonkanit (2011) showed that an ISO-integrated decision model supported decision making, assisted in manufacturing eco-effective new products, and reduced bias. The huge investment needed for training employees to maintain the ISO quality management system was eliminated by the developed model. Sen et al. (2009) introduced a decision support system for both qualitative and quantitative objectives of the enterprise. Similarly, the developed DSS-EII model educates and trains employees by providing current production quantities and descriptive solutions. The developed model also created the benefit of facing the unannounced ISO surveillance audits, scheduled twice a year, with all ISO documents up to date. At any moment, the enterprise can provide details about the current rejection rate and the finished products. The management can also view the production status every hour through web-based access. Fig. 2: Algorithm to demonstrate the working mechanism. Fig. 3: Number of machines available and components to be produced. Table 1: Department-wise total documents collected from Suba Plastics Private Limited. Table 2: Data on monthly internal rejections.
3,857
2015-01-01T00:00:00.000
[ "Computer Science", "Business" ]
Biodiversity conservation in climate change driven transient communities Species responding differently to climate change form 'transient communities', communities with constantly changing species composition due to colonization and extinction events. Our goal is to disentangle the mechanisms of response to climate change for terrestrial species in these transient communities and explore the consequences for biodiversity conservation. We review spatial escape and local adaptation of species dealing with climate change from evolutionary and ecological perspectives. From these we derive species vulnerability and management options to mitigate the effects of climate change. From the perspective of transient communities, conservation management should scale up static single-species approaches and focus on community dynamics and species interdependency, while considering species vulnerability and their importance for the community. Spatially explicit and frequent monitoring is vital for assessing change in communities and in the distribution of species. We review management options such as increasing connectivity and landscape resilience, assisted colonization, and species protection priority in the context of transient communities. Introduction The increase of greenhouse gas concentrations caused by human activities enhances the absorption of radiation in the atmosphere, leading to an elevation of air temperatures and changing precipitation patterns (Suggitt 2017; Felton et al. 2020). The land surface has warmed on average by 1.53 °C (1.38-1.68 °C) over approximately the last 120 years, with acute warming observed from 1980 onwards (IPCC 2019). The last several years are also the warmest ever recorded (IPCC 2019). Anthropogenic climate change has affected Earth's biota on all continental territories (Foden et al. 2019; Hoffmann et al. 2019; Visser and Gienapp 2019). These changes affect species survival and colonization both directly and indirectly. This may disrupt food webs, change competitive interactions between species, and put major stress on species survival (Bowers and Harris 1994; Davis et al. 1998; Ings 2009; Visser and Gienapp 2019). Species respond to climatic change by shifting their distributions and/or adapting and surviving in their changing habitats (Bonebrake et al. 2018; Foden et al. 2019; Hulme 2005; Parmesan 2006; Schippers et al. 2011), or else fail to adapt and become extinct (Keith 2008; Tomiolo and Ward 2018) (Fig. 1). Species may adapt to new conditions through phenotypic plasticity (i.e., behavioural, morphological or physiological responses to environmental change) or by selection for beneficial and heritable traits (Lawler 2009; Visser 2008) (Fig. 1). Alternatively, 'spatial escape', a rearrangement in space, depends on species mobility, geographical possibilities and landscape connectivity (Schippers et al. 2011) (Fig. 1). This will only be successful if potential habitat becomes suitable for the colonizing species, and if the colonizing species can deal with the community formed by both the resident and other newly colonizing species (Pearson and Dawson 2003). All in all, climate change triggers a cascade of processes inducing changes in species distribution and community composition (Fig. 2) and may cause local and even global extinction of species (Figs. 1, 2). We call these changing communities 'transient', i.e. communities with a constantly changing species composition and equilibrium.
In this paper, we disentangle the mechanisms and consequences of climate change for terrestrial species in these 'transient' communities and explore the consequences for biodiversity conservation. Direct effects of greenhouse gases Greenhouse gases are not known to have any direct effects on terrestrial animal species at the projected concentrations. For terrestrial plants, however, the increase in atmospheric carbon dioxide is expected to enhance the internal CO2 concentration in leaves, causing higher photosynthetic rates (Lloyd and Farquhar 2008; Zotz et al. 2005). This decreases the water loss per unit carbon gain and thus increases the water use efficiency, enabling plants to survive drier conditions (Battipaglia et al. 2013; Lloyd and Farquhar 2008; Soh 2019; Zotz et al. 2005). Although the destiny of these assimilates is not always clear (Jiang 2020), this CO2 fertilisation effect is expected to stimulate plant growth and may result in higher vegetation growth and cover (Drake et al. 1997; Schippers et al. 2015a). Moreover, plants differ in their response to CO2 elevation. For example, C3 and C4 plants differ with respect to their CO2 assimilation at various temperature levels; C3 plants (e.g. all woody vegetation and most other plants of the temperate and boreal zone) profit more from CO2 fertilisation than C4 species (e.g. many tropical grasses) (Poorter 1993; Wand et al. 1999). Consequently, increased atmospheric CO2 favours tree cover in savannah systems at the cost of grasses (Bond et al. 2003). This example shows that increasing CO2 concentration can directly affect vegetation structure, inducing habitat changes that affect species vital rates and ultimately their distribution and survival. Another effect of the increase in CO2 is that it changes the plant's stoichiometry, resulting in relatively low nutrient-carbon ratios of the plant's tissue (Huang et al. 2015). Because herbivores are dependent on these nutrients, the low concentrations may affect their growth and reproduction (Yuan and Chen 2015), in turn altering the food web of the community (Welti et al. 2020). Direct effects of climate change Greenhouse gases absorb radiation in the thermal infrared range and therefore increase the atmospheric temperature. The temperature rise is affecting the Earth's weather system, with changes in precipitation patterns as well as wind directions and speeds (IPCC 2014), changing species habitats. Additionally, climate change gives rise to an increase in extreme weather events (Coumou and Rahmstorf 2012; IPCC 2014, 2019; Kendon et al. 2014; Verboom et al. 2010). Temperature is a strong determinant of the spatial distribution of species (Araujo et al. 2006; Green et al. 2008; Parmesan 1999; Waldock et al. 2018). Species have a specific temperature response with respect to survival and reproductive output. Especially the length and severity of winter, in combination with species adaptations to the cold, determine the poleward distribution and upper elevational limits of most species. In contrast, extreme summer temperatures and droughts govern the lower elevational boundaries and the species distribution towards the equator (Franco 2006; Jiguet et al. 2006). It is important to note that species differ in the specific nature of their temperature tolerance and response, and consequently will respond differently to change. Models predict that precipitation regimes will change due to climatic warming (IPCC 2014, 2019).
Decreases and increases in rainfall amount and frequency are expected in large parts of the world, and recent studies suggest that the intensity of rainfall events is also changing (Felton et al. 2020; Myhre 2019). The anticipated magnitude of precipitation change varies between climate change scenarios. In the Representative Concentration Pathways (RCP) 2.6 scenario (2 °C increase in 2100), local changes in precipitation are expected between −20 and +20%. In the RCP 8.5 scenario (4.3 °C increase in 2100), precipitation changes between −30 and +60% are expected (IPCC 2014). Precipitation is the other important factor determining species distribution (Illan et al. 2014). Especially plant species will be sensitive to changes in precipitation because water uptake and transpiration determine their growth rate (Felton et al. 2020; Schippers et al. 2015a; Wu et al. 2011). In general, wetland species like amphibians will suffer from precipitation reduction (Griffiths et al. 2010), whereas precipitation increase will stimulate the abundance of wetland species. For specialists of dry ecosystems, we expect that reduced and increased precipitation both have a negative effect on their abundance, because a precipitation decrease in a dry environment induces desertification and species loss (Zhang et al. 2019, 2020). Since there is large variation in the size, behaviour, morphology and physiology of community members, we expect species to respond differently to temperature and precipitation change. This will drive community change (Williams and Jackson 2007) and induce transient communities. Although precipitation and temperature data enable researchers to predict the distribution of species (Harrison et al. 2006; Illan et al. 2014), it is not always clear whether this distribution is solely due to direct effects of precipitation and temperature, or also due to biotic conditions such as co-determining species. Moreover, the predictions are based on extrapolation of regression models with relations based on data from the past under relatively constant conditions; these equations may not be valid for transient communities, where species dynamics and interactions co-determine species presence. Climate change gives rise to an increase in the frequency, duration and intensity of extreme weather events (Coumou and Rahmstorf 2012; Harvey et al. 2020; IPCC 2014, 2019; Kendon et al. 2014; Verboom et al. 2010). Results of observational studies show that in many regions variation in precipitation is amplified, while more temperature extremes are expected (Easterling et al. 2000). These extreme weather events may affect population survival (Lawson et al. 2015). There is evidence that periods of high temperatures and drought pose the greatest risks to species survival, directly but also indirectly through forest fires (as we have seen recently in Australia in the 2019-2020 summer and more recently in the Pantanal, Brazil) and melting permafrost (Jorgenson 2010; Randerson 2006). Effects of biotic interactions As argued above, the increase of atmospheric concentrations of CO2 and attendant climatic changes affect species abundance and survival. In ecosystems, however, species are interdependent through food webs (herbivory, predation, foraging), competition for resources, pollination, and complex interactions such as symbiotic and parasite-host interactions.
In the food chain, some species are specialists (eating only a limited number of species) whereas others are generalists (able to switch between alternative food sources). Some animals moreover shape the environment for other animals, making nesting holes or building (termite) mounds, or keeping the grass short. Climate change thus not only affects species directly by changing their physical conditions, but also by changing their community structure (see Box 1 for an overview). For instance, precipitation and temperature change may alter the composition of plant species, which in turn affects herbivore species reproduction and survival, which then in turn impacts reproduction and survival of their predators (Halpin 1997; Harvey 2015; Preston et al. 2008; Schleuning 2016). These biotic changes may alter the suitability of a habitat for a species even if the abiotic conditions are not pushed beyond its tolerance levels (Brooker et al. 2007). Species respond to climate change in a unique, species-specific way with respect to adaptations and range shifts. Moreover, species are linked by interspecific interactions, so a change in abundance of one species will affect the presence and abundance of other species. Thus, direct effects of CO2 increase and changes in abiotic conditions induce biotic changes which affect community composition, thereby inducing changes in the presence and abundance of interacting species. Furthermore, newly colonizing species and species going extinct may disrupt the trophic structure and competitive relations in the community (Dunne and Williams 2009; Lurgi et al. 2012; Woodward et al. 2012). Box 1: Examples of climate change affecting species interactions
• Plant-herbivore:
- Directions in range shift of host plant and butterfly host differ (Schweiger et al. 2008)
- Phenological responses of host plant and butterfly host differ, causing a mismatch with host plant availability (Cerrato et al. 2016) or detrimental cooling of microclimate (Wallisdevries and Van Swaay 2006)
- Plant-herbivore interactions change due to changes in food quantity and quality (Zhu et al. 2015)
• Host-parasitoid:
- Temporary escape from natural enemies in a host butterfly under range expansion from a resident parasitoid (Menendez et al. 2008)
- Increased parasitism from an expanding generalist parasitoid on a resident butterfly host (Gripenberg et al. 2011)
• Predator-prey:
- Environmental change alters predator-prey interactions (Harmon et al. 2009)
- Adaptive phenological mismatches of birds and their food in a warming world (Both et al. 2006; Visser et al. 2009)
• Plant-pollinator:
Transient communities So, climate change can trigger cascading extinctions and introductions of new species, changing the community structure constantly (Alexander et al. 2015; Gilman et al. 2010). These changes in community structure can be abrupt when environmental variables pass certain tipping points and ecosystems flip to alternative states (Amigo 2020; Osland 2020; Scheffer et al. 2001). Additionally, extinction debts (Butterfield et al. 2019; Rumpf 2019; Tilman et al. 1994) and time lags may exist between changes in climate and the direct and indirect responses of species. For these communities that have constantly changing composition and structure we introduce the term 'transient communities'. We expect that most communities have become or will become transient, and that many species that live in these communities have difficulty surviving the transient dynamics.
In contrast to non-transient communities, transient communities have high extinction and colonization rates, resulting in a more dynamic species composition (Fig. 2). In the past, due to slow environmental changes or even neutral species replacement (Hubbell 2005), community composition changed slowly and was expected to be close to equilibrium. Nowadays, however, climate change together with other human-induced pressures results in fast environmental change (Travis 2003). As a result, equilibria are on the move, which makes community dynamics largely unpredictable (Cenci and Saavedra 2019). This may induce 'novel communities' sensu Hobbs et al. (2006): communities that did not exist previously and arise through human action, environmental change, and the impacts of the deliberate and inadvertent introduction of species from other regions (Hobbs et al. 2006). The concept of novel communities suggests that this new state is semi-permanent. In contrast, the concept of transient communities emphasizes that these communities are changing constantly. So, when distinguishing new community states, we should be very aware of their temporary status. Rearrangement in space Current climatic change is widely recognized as one of the main forces driving changes in the distribution of species. Mobile species from a wide range of taxa show distribution shifts resulting from climate change (Hickling et al. 2006; Hill et al. 2016; Mason et al. 2015). Conditions generally become more suitable in the poleward direction whereas they become unsuitable in the equatorial direction. Similar changes are seen over elevational gradients (Kuhn and Gegout 2019). However, rearrangement in space requires the presence of suitable habitat, connectivity, species mobility, and successful population establishment (Schippers et al. 2011). This is especially problematic in human-dominated landscapes where habitat is scarce and urban areas and infrastructure limit population expansion (Arevall et al. 2018; Opdam and Wascher 2004; Travis 2003). However, large natural barriers, such as seas and mountain ridges, can also block poleward expansion for terrestrial species (Keith et al. 2011; Robillard et al. 2015; Roratto et al. 2015). In addition, range shifts can have genetic and evolutionary consequences (Excoffier et al. 2009; Lee-Yaw et al. 2018) such as loss of genetic diversity (Cobben et al. 2011), gene surfing (Demastes et al. 2019; Travis et al. 2007) and spatial sorting (Cobben et al. 2015; Shine et al. 2011), which may hinder the possibility and flexibility to colonize new habitat patches. There might also be positive effects of this spatial escape, because when a species is more mobile than a parasite, a predator or a competing species, it is, at least temporarily, released from negative species interactions (Carrasco 2018; Menendez et al. 2008). Mountain species should move to higher altitudes to escape climatic warming. Here distances are small compared to latitudinal migration, but lack of space at higher altitudes can hinder these expansions (Essens et al. 2017). Successful rearrangements in space depend on species mobility and geographical conditions and are only effective if the new habitat is suitable for the immigrating species, including interactions in the new community, which itself consists of both novel and local species (Memmott et al. 2007; Preston et al. 2008; Tylianakis et al. 2008). So, species mobility is key for the persistence of species that are not able to adapt (Arevall et al. 2018; Bourne et al.
Given the concept of transient communities, mobility differences are a key factor driving community change. Furthermore, high mobility enables species to select not only suitable abiotic conditions but also suitable communities and biotic conditions.

Adaptation

Rapid adaptation to climatic changes is mostly associated with populations at the edge of the species range (Angert et al. 2020; Logan et al. 2019; Rehm et al. 2015). Species range expansions are expected at the poleward or uphill part of the species' distribution, where environmental constraints are relaxed. At the equatorial or downhill end of the range, habitat is expected to become less suitable. Here, species can stay and adjust to the changing environment through phenotypic plasticity, selection, epigenetic changes or evolutionary adaptation (Charmantier et al. 2008; Chown et al. 2007; Richards et al. 2017; van Asch et al. 2013). Phenotypic plasticity is the ability of an organism to change its behaviour, morphology or physiology in response to stimuli or inputs from the environment; for example, the capacity of birds to lay their eggs earlier in the year in response to higher temperatures. Phenotypic plasticity has been regarded as one of the most important mechanisms for coping with rapid climate change for many species, especially in the short term (Matesanz and Ramirez-Valiente 2019; Merila 2012; Seebacher et al. 2015). Selection for suitable genotypes that are already present is an alternative way to deal with changing conditions; it is especially effective at the core of the distribution range, where genetic diversity is high. There is growing evidence that epigenetic changes contribute to plant phenotypes, with important consequences for adaptation to novel conditions and species distributions (Richards et al. 2017). Micro-evolutionary dynamics play an important role in the adaptation to climate change, increasing the ability of species to survive (Bourne et al. 2014). Genetic changes can occur through selection on traits related to thermal performance, such as changes in the critical thermal maximum (Skelly et al. 2007). However, these evolutionary changes may not come fast enough to keep up with the rate at which global climate change is occurring (Lasky 2019; Nadeau and Urban 2019; Penuelas et al. 2008). Moreover, the rate of evolutionary responses may decline through time (Kinnison and Hendry 2001), and antagonistic genetic correlations among traits can constrain evolution (Etterson and Shaw 2001). (Micro-)evolutionary adaptations, including selection for increased phenotypic plasticity, take at least some generations, while the emergence of new traits to be selected for can be expected to take even longer. So here we face the danger of Darwinian extinctions, meaning that in many cases we cannot expect (micro-)evolutionary adaptation and selection to be fast enough to allow survival of populations (Holt 2003; Meester et al. 2018; Razgour 2019). In the context of transient communities, species with high adaptability should be tolerant, plastic and/or genetically diverse with respect to abiotic climatic parameters, but should also be able to deal with the changed community structure and new biotic conditions (Lenoir and Svenning 2015; Lindner 2010; MacLean and Beissinger 2017).

Species traits and extinction rates

From the preceding sections, we can conclude that mobility and adaptability are key traits for species to deal with both abiotic and biotic change. We expect global extinction for species with low adaptability and low mobility, because these species can neither cope nor escape. Local extinction is to be expected for species with low adaptability and high mobility (Fig. 3). We expect, however, low extinction risks for species that combine high mobility with high adaptability (Fig. 3). Species with intermediate adaptability can survive in resilient landscapes that buffer the impacts of climatic change through habitat heterogeneity (see the paragraph "Increase ecosystem resilience"), while species with intermediate mobility survive in well-connected landscapes (Fig. 3).
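These trait-based expectations can be summarized in a small classification sketch, written here in Python. It is an illustrative toy rather than part of the framework itself: the ordinal trait scores, the branch order and the example species are all invented for the example.

def extinction_risk(mobility, adaptability):
    # Ordinal traits: 0 = low, 1 = intermediate, 2 = high (cf. Fig. 3).
    if adaptability == 0 and mobility == 0:
        return "global extinction likely: can neither cope nor escape"
    if adaptability == 0 and mobility == 2:
        return "local extinction likely: can escape but not cope"
    if adaptability == 2 and mobility == 2:
        return "low extinction risk"
    if adaptability == 1:
        return "may persist in resilient landscapes (habitat heterogeneity)"
    if mobility == 1:
        return "may persist in well-connected landscapes"
    return "risk not specified by the framework for this trait combination"

for name, mob, adapt in [("flightless specialist", 0, 0),
                         ("mobile but inflexible butterfly", 2, 0),
                         ("mobile generalist", 2, 2)]:
    print(name, "->", extinction_risk(mob, adapt))

The only point of the sketch is that the two traits interact: neither mobility nor adaptability alone determines the risk class, which is why the management recommendations below address both.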
Species conservation in transient communities

Because species differ in their responses to abiotic and biotic change, these changes induce transient communities with constantly changing species composition due to colonization and extinction events. Biodiversity management recommendations for climate change mitigation mostly ignore species interactions (Bonebrake et al. 2018; Heller and Zavaleta 2009). We showed that climate change induces transient communities, communities in which CO2 increase and abiotic and biotic changes affect the complex interactions between species that determine the survival and colonization of species (Fig. 2). This may affect the success of conservation approaches that focus on single species in specific locations. These approaches mostly use a static past situation as a reference and overlook the consequences of species interactions and biotic change for species survival, as well as the spatial consequences of climate change (Brambilla 2020; Engelhardt et al. 2020; Hobbs et al. 2009). The expected outcome of these approaches may be too optimistic, because they ignore the effects of co-determining species while a static approach ignores community dynamics.

Reduce greenhouse gas emissions

Greenhouse gas emissions are the main driver of climate and community change. They enhance species movement to cooler locations (poleward, uphill), causing the colonization of new species and the loss of established species in transient communities. Therefore, the reduction of greenhouse gas emissions should have priority in every conservation management program, because it mitigates the speed of change and species loss in communities (Warren et al. 2018). This is even more important in transient communities, because there the interactions between species also determine survival. Slower climate change simply means less stress and fewer dynamics for communities, and provides time for species adaptation and species movement.

Reduce other human pressures

Climate change is the result of global emissions that cannot be managed locally. In transient communities, other human-induced pressures, like disturbances, introduction of invasive species and nutrient loads, generally exacerbate extinctions (Fig. 2), but they occur on a more local scale and are often more manageable than global emissions. Moreover, these pressures may reduce population sizes within the community, which reduces species mobility and resilience (Fernandez-Chacon et al. 2014; Schippers et al. 2011; Wilson et al. 2010). Therefore, it is important to mitigate other human-induced factors in order to reduce the total pressure on transient communities already under stress from climate change.

Increase monitoring effort

Management thinker Peter Drucker is often quoted as saying that "you can't manage what you can't measure." In transient communities we expect rapid change in species composition, abundance and distribution.
To keep track of these changes, monitoring effort should be intensified. As it is impossible to measure all processes and interactions, setting up a monitoring programme for species composition, abundance and distribution is essential (O'Connor et al. 2020). Species composition, abundance and distribution then serve as indicators of community status and as a proxy for the state of the interactions. A way of quantifying the change of species composition in transient communities is the Biological Novelty Index (Schittko 2020). This index keeps track of functional changes in communities while taking species abundance into account.
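The formula of the Biological Novelty Index is not reproduced here, so the sketch below falls back on the classical Bray-Curtis dissimilarity as a stand-in for abundance-aware change tracking between two censuses. The census data are invented, and this is explicitly not the index of Schittko (2020).

def turnover(census_a, census_b):
    # Bray-Curtis dissimilarity on abundances: 0 = identical, 1 = disjoint.
    species = set(census_a) | set(census_b)
    shared = sum(min(census_a.get(s, 0), census_b.get(s, 0)) for s in species)
    total = sum(census_a.get(s, 0) + census_b.get(s, 0) for s in species)
    return 1.0 - 2.0 * shared / total if total else 0.0

t1 = {"oak": 30, "beech": 50, "aspen": 5}    # first census
t2 = {"oak": 22, "beech": 35, "maple": 12}   # aspen lost, maple colonized
print(round(turnover(t1, t2), 2))            # 0.26: moderate turnover

Any monitoring programme that records abundance per species per census can compute such a figure as a cheap first-pass indicator of how transient a community currently is.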
We live in interesting times, and change is going to come in many expected and unexpected forms. Having monitoring schemes and action plans ready can help prevent ecosystem collapse and/or large-scale biodiversity loss due to a combination of climate change and other human-induced pressures. For example, early detection of invasive species allows management measures that stop them from spreading.

Assess species vulnerability

It is important to assess which species are particularly vulnerable in terms of mobility and adaptability (see "Species traits and extinction rates"). We expect species with low adaptability and low mobility to be likely candidates for global extinction, whereas species with low adaptability but high mobility may only become extinct locally. This can push communities to become transient through cascading extinctions and new introductions into open niches. Assessing mobility is relatively easy, as the responsible traits are often visible, such as the possession of wings or airborne propagules. Assessing species adaptability is, however, difficult, because it involves species' genetics and plasticity with respect to temperature and precipitation. A large species distribution range is likely an indicator of high adaptability and mobility, as it reflects a species' tolerance to a variety of environmental conditions and biological constraints as well as the ability to reach suitable habitats.

Study species interactions

In the light of transient communities, species protection priority is determined not only by a species' own adaptability and mobility, but also by those of interacting species. Here the food web, i.e., the competing, facilitating or otherwise interacting species, co-determines the survival of a species (Kaur and Dutta 2020; Van der Putten et al. 2010). So, knowledge about species interactions is crucial. If we consider species protection in transient communities, we should therefore also consider how important species are for the survival of other species in the community. If many community members depend strongly on the presence of a certain species, the protection of such a keystone species is of utmost importance. Clearly, keystone species that are vulnerable to climatic change are priority candidates in conservation planning.

Protect species globally, not locally

Current conservation practice is usually focused on protecting what is there (current species distribution areas, biodiversity hotspots, nature reserves). Conservation professionals, however, should be aware that communities are nowadays becoming more and more transient, meaning that some species will go extinct locally, no matter how hard we try to save them, and new species will establish (Harrison et al. 2006) (Fig. 2). Rigid local species conservation aims with respect to species composition, such as those of the European Natura 2000 network, are therefore inadequate in the long run, since species composition will inevitably change locally (Harris et al. 2019; Kovac et al. 2018). It is better to evaluate species presence and survival globally. Locally, it is useful to identify new potentially colonizing species from the equatorial direction, as well as vulnerable species living at the equatorial end of their distribution that may go extinct (Kuussaari 2009). In many cases it would be a waste of resources to focus on saving these latter populations.

Facilitate species to rearrange in space

In transient communities, the ecosystem functions of species that are disappearing can be taken over by new species. The most evident way to enhance species introductions to transient communities is to protect, restore and increase landscape connectivity (Heller and Zavaleta 2009). This can be done in many ways, but halting the expansion of large monocultures and removing obstacles, while creating habitat corridors, stepping stones and green bridges (ecoducts), seems crucial. A more advanced way to improve connectivity is to create extra habitat in the overlap between the current and the predicted future habitat of vulnerable species (Grashof-Bokdam et al. 2009; Ruter et al. 2014; Vos 2008). In the context of transient communities, the potential mobility of co-determining community members should also be considered. Therefore, stepping stones and green bridges should be suitable for sets of interacting species, taking into account the mobility traits of the weakest disperser; the rates of these poleward expansions are likely determined by the slowest species. Increasing landscape connectivity may also facilitate the expansion of invasive species using the same landscape network, and it will be a challenge to make the network selective for wanted and unwanted species (Saura et al. 2014). Models of species range shifts show that larger populations have greater potential for colonization because they produce more dispersing individuals (Schippers et al. 2011; Wilson et al. 2010). So, maintaining large population sizes is also key for species mobility. In landscapes with large and permanent obstacles, or when species are less mobile, it may be necessary to actively transport species poleward to new suitable habitats ('assisted colonization') (Heikkinen 2015; Hoegh-Guldberg et al. 2008; Richardson 2009). Richardson et al. (2009) developed a multi-criteria framework based on focal impact, collateral impact, feasibility and acceptability to evaluate the potential value and success of assisted colonization. In the context of transient communities, we can transport a group of interacting species simultaneously to avoid unsuccessful introductions due to a lack of facilitating species (e.g., prey or pollinators). Alternatively, introductions can be performed sequentially, with the lowest trophic levels first, e.g., plants, then herbivores, carnivores and parasites. Evidently, transported species might become invasive (Lunt 2013). However, assisted colonization has an advantage over connectivity improvements: the species to be transported can be selected, which makes it possible to avoid species with invasive properties (Richardson et al. 2009). More research on assisted colonization and species interactions is needed to facilitate successful poleward expansion of immobile species.
Increase ecosystem resilience

To reduce species loss in transient communities, we should improve the local resilience of populations (Cote and Darling 2010; Moritz and Agudo 2013; Prober 2012). This can be done by increasing habitat patch sizes, allowing for more robust populations (Fernandez-Chacon et al. 2014; Verboom et al. 2001) that are less vulnerable to extinction (Verboom et al. 2010) and better able to survive sub-optimal conditions. Population size also affects the potential for adaptive evolution, although here it is less clear that a larger population is always superior, because small or disjunct populations may adapt faster. Another way to preserve ecosystem resilience is to maintain or improve the spatial and topographic heterogeneity of the landscape (Lawler et al. 2015; Perovic 2015; Schippers et al. 2015b; Suggitt 2018), for example by maximizing elevational and other environmental gradients and adding blue or green infrastructure. The added value of such landscapes is that increased heterogeneity creates ecotones and edges, allowing for more species per functional group and more genetic variation, both of which increase ecosystem resilience (Anderson et al. 2014). Restoration of ecosystems should focus on creating both macro- and micro-refugia that help species survive short-term climatic extremes (Selwood and Zimmer 2020; Thakur et al. 2020). Given the concept of transient communities, it may be best to increase the resilience of the landscape with respect to keystone species that play an important role in the ecosystem.

Conclusion

Conservation efforts often target individual species in response to abiotic climate change, but it is important to acknowledge that multiple interactions in food webs and communities underpin the functioning of ecological communities (Early and Keith 2019; Naeem et al. 1999; Ponisio 2019). From the perspective of transient communities, conservation management should therefore scale up single-species approaches to focus on communities, and consider the vulnerability of species in relation to their function in the community. One of the major challenges in elucidating the effects of climate change on biodiversity is that responses invariably must focus on interactive effects rather than on individual species. Species do not exist in isolation; they interact with other species in a complex array of ways, at scales ranging from trophic chains to more diffuse effects at the level of communities and ecosystems. Ecologist Daniel Janzen (Janzen 1974) once argued that the "most insidious sort of extinction [is] the extinction of ecological interactions", a point driven home more recently by Memmott et al. (2010) and Valiente-Banuet et al. (2015). Hence, science should identify crucial trophic, competitive or facilitative species interactions and assess species interdependency and vulnerability with respect to climate change, while conservation efforts should shift from restoring the past to facilitating an unfolding, biodiversity-rich future.
7,001.4
2021-07-13T00:00:00.000
[ "Environmental Science", "Biology" ]
Microglial Intracellular Ca2+ Signaling in Synaptic Development and its Alterations in Neurodevelopmental Disorders Autism spectrum disorders (ASDs) are neurodevelopmental disorders characterized by deficits in social interaction, difficulties with language and repetitive/restricted behaviors. Microglia are resident innate immune cells which release many factors, including proinflammatory cytokines, nitric oxide (NO) and brain-derived neurotrophic factor (BDNF), when they are activated in response to immunological stimuli. Recent in vivo imaging has shown that microglia sculpt and refine the synaptic circuitry by removing excess and unwanted synapses and are involved in the development of neural circuits and synaptic plasticity, thereby maintaining brain homeostasis. BDNF, one of the neurotrophins, has various important roles in cell survival, neurite outgrowth, neuronal differentiation, synaptic plasticity and the maintenance of neural circuits in the CNS. Intracellular Ca2+ signaling is important for microglial functions including ramification, de-ramification, migration, phagocytosis and release of cytokines, NO and BDNF. BDNF induces a sustained intracellular Ca2+ elevation through the upregulation of the surface expression of canonical transient receptor potential 3 (TRPC3) channels in rodent microglia. BDNF might have an anti-inflammatory effect through the inhibition of microglial activation, and TRPC3 could play important roles not only in inflammatory processes but also in synapse formation through the modulation of microglial phagocytic activity in the brain. This review article summarizes recent findings on the emerging dual, inflammatory and non-inflammatory, roles of microglia in the brain and reinforces the importance of intracellular Ca2+ signaling for microglial functions in both normal neurodevelopment and their potential contribution to neurodevelopmental disorders such as ASDs.
INTRODUCTION

Autism spectrum disorders (ASDs) are neurodevelopmental disorders characterized by deficits in social interaction, difficulties with language, and repetitive/restricted behaviors (Lai et al., 2014). The etiology of ASDs is still largely unclear, but both immune dysfunction and abnormalities in synaptogenesis have repeatedly been implicated as contributing to the disease phenotype (Edmonson et al., 2016). Microglia are immune cells derived from progenitors that have migrated from the periphery and are of mesodermal/mesenchymal origin (Kettenmann et al., 2011). After invading the brain parenchyma, microglia transform into the "resting" ramified phenotype and become distributed throughout the whole brain. However, microglia revert to an ameboid appearance when they are activated by disturbances, including infection, trauma, ischemia, neurodegenerative diseases or any loss of brain homeostasis (Aguzzi et al., 2013; Cunningham, 2013). Microglia are the most active cytokine-producing cells in the brain and can release many factors, including pro-inflammatory cytokines (such as TNFα and IL-6), nitric oxide (NO) and neurotrophic factors (such as brain-derived neurotrophic factor, BDNF), when they are activated in response to immunological stimuli (Monji et al., 2013, 2014; Mizoguchi et al., 2014a; Smith and Dragunow, 2014). However, recent in vivo imaging has shown that microglia constantly use highly motile processes to survey their assigned brain regions and phagocytose pathogens and cellular debris even in their resting state, and are ready to transform to an "activated" state in response to injury, ischemia or autoimmune challenges in the brain (Wake et al., 2013). Microglia have also been shown to sculpt and refine the synaptic circuitry by removing excess and unwanted synapses and to be involved in the development of neural circuits and synaptic plasticity, thereby maintaining brain homeostasis (Schwartz et al., 2013; Hong et al., 2016). By extension, neurodevelopmental disorders such as ASDs might not need to involve a pathological gain in microglial function but simply a disruption of their physiological functioning in the regulation of synaptic circuits (Salter and Beggs, 2014; Ziats et al., 2015; Macht, 2016).

The pioneering work by Vargas et al. (2005) and subsequent studies revealed an active neuroinflammatory phenotype of microglia in the post-mortem brains of patients with autism (Morgan et al., 2010). Marked changes in microglial morphology, accompanied by a unique profile of pro-inflammatory cytokines, were seen in the cerebral cortex, white matter and cerebellum of patients with autism. Excessive microglial activation in young adults (age 18-31 years) affected by ASDs was also confirmed by PET using [11C]-(R)-PK11195.
In that study, ASD brain regions showing increased binding potentials of the radiotracer included the cerebellum, midbrain, pons, fusiform gyri, and the anterior cingulate and orbitofrontal cortices. The most prominent increase was observed in the cerebellum (Suzuki et al., 2013). In the cerebellum, activated microglia were observed to be intimately associated with Purkinje cells undergoing apoptosis in cerebellar organotypic cultures during normal development. This could be consistent with a role for microglia in developmentally regulated neuronal death by promoting Purkinje cell apoptosis (Marín-Teva et al., 2004), an important physiological activity that could be impaired in autism. A deficit in microglia/complement-mediated synaptic pruning might be fundamental to the cognitive effects associated with ASDs (Voineagu et al., 2011). The chemotactic/phagocytic activity of microglia could also be impaired, further aggravating the symptoms through insufficient clearance of debris (Derecki et al., 2013). The complement cascade, normally associated with removal of pathogens and cellular debris, is also crucial to microglia-mediated synaptic pruning and refinement of neuronal connectivity in the normal brain (Stephan et al., 2012). Evidence points to convergence on C3 and its microglial receptor, C3R. The initiator of the complement cascade is C1q, which induces C3 secretion via C4. The presence of C3 on unwanted synapses "tags" them for recognition and elimination by microglia. In addition, decreased C4, leading to reduced synaptic pruning in early life mediated through reduced C3 synaptic tagging, is implicated in ASD-like behaviors (Estes and McAllister, 2015). Furthermore, mice deficient in CX3CR1, a chemokine receptor expressed in the brain exclusively by microglia, have increased densities of immature synapses caused by delayed synaptic pruning, resulting in excessive and electrophysiologically immature synapses and deficits in functional connectivity (Zhan et al., 2014). Altogether, recent findings on the emerging dual, inflammatory and non-inflammatory, roles of microglia in the brain suggest that abnormal secretion of inflammatory cytokines and abnormal or exaggerated execution of normal developmental microglial functions, including incorrect synaptic pruning and failure of phagocytosis of apoptotic neurons, might be underlying mechanisms of neurodevelopmental disorders such as ASDs (Edmonson et al., 2016).

NEURONAL INTRACELLULAR Ca2+ SIGNALING MEDIATED BY VGCCs AND ASDs

The electrical activity of neurons (i.e., excitable cells) depends on a number of different types of voltage- or ligand-gated ion channels that are permeable to inorganic ions such as sodium, potassium, chloride and calcium. While the former three ions predominantly support electrogenic roles, Ca2+ is different in that it can not only alter the membrane potential but also serve as an important intracellular signaling entity in its own right. In the CNS, intracellular Ca2+ signaling regulates many different neuronal functions, such as cell proliferation, gene transcription and exocytosis at synapses (Berridge, 1998). In neurons, because prolonged elevation of the intracellular Ca2+ concentration ([Ca2+]i) is cytotoxic, [Ca2+]i is tightly regulated by intrinsic gating processes mediated by voltage-gated calcium channels (VGCCs) and NMDA receptors (NMDARs; Simms and Zamponi, 2014).
In addition, dysregulation of neuronal Ca2+ signaling has been linked to neurodevelopmental disorders including ASDs (Krey and Dolmetsch, 2007). CaV1.3 channels are a major class of L-type VGCCs that constitute an important calcium entry pathway implicated in the regulation of spine morphology and contribute to the rhythmicity of the brain (Stanika et al., 2016). In the brain, VGCCs are vital for neuronal excitation-transcription coupling, synaptic plasticity and neuronal firing, and the de novo missense mutation A760G of CaV1.3 channels has been linked to ASDs (Pinggera et al., 2015). CaV1.3 channels employ two major forms of feedback regulation, voltage-dependent inactivation (VDI) and Ca2+-dependent inactivation (CDI). Limpitikul et al. (2016) recently found that introduction of the missense mutation A760G into CaV1.3 severely suppressed the CDI but potentiated the VDI of CaV1.3 channels, suggesting that disruption of these two forms of feedback regulation may increase [Ca2+]i, thus potentially disrupting both neuronal development and synapse formation, ultimately leading to ASDs. There are many other reports showing that functional mutations in genes encoding VGCCs can lead to ASDs (Splawski et al., 2004; Li et al., 2015). In addition, disruption of the BKCa gene KCNMA1, which encodes the α-subunit of the large-conductance Ca2+-activated K+ channel (BKCa), led to haploinsufficiency and reduced BKCa activity (Laumonnier et al., 2006). Thus, the reported decrease in BKCa channel activity, together with the reduced inactivation of VGCCs in autistic patients, suggests that ASDs are caused by abnormally sustained increases in intracellular Ca2+ levels (Krey and Dolmetsch, 2007).

MICROGLIAL INTRACELLULAR Ca2+ SIGNALING AND IMPORTANCE OF TRP CHANNELS

Elevation of [Ca2+]i is also important for the activation of microglia, including proliferation, migration, ramification, de-ramification and the release of NO, proinflammatory cytokines and BDNF (Kettenmann et al., 2011). In addition, disruption of microglial Ca2+ homeostasis triggers the activation of death programs, which are regulated by the microglial activation status. Treatment of primary cultured microglial cells with thapsigargin or ionomycin induced apoptosis, whereas the same agents applied to lipopolysaccharide (LPS)-activated microglia resulted in necrotic cell death (Nagano et al., 2006). Both the apoptotic and the necrotic pathway are regulated by [Ca2+]i, because treatment of the cultures with the Ca2+ chelator BAPTA-AM reduced microglial cell death (Nagano et al., 2006). However, in microglial cells, application of high [K+]out or glutamate does not elevate [Ca2+]i, consistent with the fact that neither VGCCs nor NMDARs are expressed in microglia (Kettenmann et al., 2011). For electrically non-excitable cells, including microglia, the primary sources of intracellular Ca2+ are release from intracellular Ca2+ stores and entry through ligand-gated and/or store-operated Ca2+ channels (Möller, 2002). Microglia contain at least two types of intracellular Ca2+ stores: the endoplasmic reticulum (ER) and mitochondria. The main route for the generation of intracellular Ca2+ signaling is associated with inositol 1,4,5-trisphosphate (InsP3) receptors on the ER membrane.
Stimulation of G protein-coupled metabotropic or tyrosine kinase receptors results in the activation of phospholipase C (PLC), the production of two second messengers, diacylglycerol (DAG) and InsP3, and the release of Ca2+ from the ER. Importantly, depletion of the ER store activates store-operated Ca2+ entry (SOCE), also known as capacitative Ca2+ influx, mediated by plasmalemmal channels such as calcium release-activated Ca2+ (CRAC) channels and/or transient receptor potential (TRP) channels (Parekh and Putney, 2005). In addition, STIM1, an ER membrane protein, senses the filling state of the ER Ca2+ store and translocates to ER-plasma membrane junctions, where it directly activates Orai1/CRAC channels, thereby facilitating the re-uptake of Ca2+ into the ER through the sarco(endo)plasmic reticulum Ca2+-ATPases (SERCA). The concentration of Ca2+ in the ER is precisely controlled by SERCA. Recently, Schmunk et al. (2015) found dysregulated InsP3-mediated ER Ca2+ signaling in primary, untransformed skin fibroblasts derived from patients with Fragile X (FXS) or tuberous sclerosis syndromes. This suggests that ASDs might also affect the status of the ER Ca2+ store in microglial cells. The influx of Ca2+ through TRP channels could play important roles in many inflammatory processes, including the activation of microglia (Nilius and Szallasi, 2014). There are seven transient receptor potential canonical (TRPC) channels in mammalian species. Among them, TRPC2 is a pseudogene in humans. The remaining members of the TRPC subfamily are classified into three groups according to sequence homology: TRPC1, TRPC3/C6/C7 and TRPC4/C5. Quantitative comparison of mRNA expression by real-time RT-PCR in microglial cells cultured from rats showed that TRPM7 > TRPC6 > TRPM2 > TRPC1 > TRPC3 ≥ TRPC4 > TRPC7 > TRPC5 > TRPC2, where ">" denotes a significant difference from the preceding gene and "≥" indicates a non-significant difference (Ohana et al., 2009).
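To fix intuitions about this cascade (InsP3 releasing ER Ca2+, SOCE switching on as the store empties, SERCA refilling it), a deliberately minimal toy model follows. Every pool size and rate constant is invented for illustration; this is a sketch of the qualitative behaviour, not a quantitative model of microglial Ca2+ handling.

def simulate(seconds=600.0, dt=0.01, insp3=0.0):
    # State: cytosolic and ER free Ca2+ in arbitrary uM-like units.
    cyt, er = 0.05, 500.0
    er_max = 500.0
    # Invented rates: InsP3R release, SERCA re-uptake, SOCE conductance,
    # plasma-membrane extrusion, and a small background leak.
    k_rel, k_serca, k_soce, k_pmca, leak = 0.002, 1.0, 2.0, 0.5, 0.025
    for _ in range(int(seconds / dt)):
        release = k_rel * insp3 * er                 # ER -> cytosol via InsP3R
        serca = k_serca * cyt * (1.0 - er / er_max)  # cytosol -> ER, saturating
        soce = k_soce * max(0.0, 1.0 - er / er_max)  # entry grows as ER empties
        cyt += dt * (release - serca + soce + leak - k_pmca * cyt)
        er += dt * (serca - release)
    return round(cyt, 3), round(er, 1)

print("resting   :", simulate(insp3=0.0))  # low cytosolic Ca2+, full ER store
print("stimulated:", simulate(insp3=1.0))  # sustained elevation carried by SOCE

The qualitative point matches the experiments discussed below: once the store is depleted, the sustained component of the Ca2+ signal is carried by entry across the plasma membrane rather than by further store release.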
IMPORTANCE OF BDNF SIGNALING IN ASDs

BDNF, one of the neurotrophins, has various important roles in cell survival, neurite outgrowth, neuronal differentiation and gene expression in the brain (Thoenen, 1995; Park and Poo, 2013). BDNF is most abundantly expressed in the hippocampus and cerebral cortex and is also involved in the pathophysiology of psychiatric disorders (Sen et al., 2008). Two recent meta-analyses showed that blood BDNF levels in neonates were not associated with a later diagnosis of ASDs, while children with ASDs demonstrated significantly increased BDNF levels compared with healthy controls (Qin et al., 2016; Zheng et al., 2016). These findings suggest that peripheral BDNF levels might serve as a potential biomarker for the diagnosis of ASDs, and further studies are needed to clarify the causal relationship between the symptoms of ASDs and peripheral levels of BDNF. BDNF binds to the tropomyosin-related kinase B (TrkB) receptor and induces the activation of intracellular signaling pathways, including PLC-γ, phosphatidylinositol 3-kinase (PI3K) and mitogen-activated protein kinase-1/2 (MAPK-1/2; Patapoutian and Reichardt, 2001). BDNF rapidly activates the PLC pathway, leading to the generation of InsP3 and the mobilization of intracellular Ca2+ from the ER (Mizoguchi et al., 2003a,b). TRPC3 channels have been shown to be necessary for BDNF to increase the density of dendritic spines in rodent hippocampal CA1 pyramidal neurons (Amaral and Pozzo-Miller, 2007). Rett syndrome (RTT) is caused by loss-of-function mutations in MECP2, encoding methyl-CpG-binding protein 2 (Amir et al., 1999). TRPC3 mRNA and protein levels are lower in CA3 pyramidal neurons of symptomatic Mecp2 mutant mice, and chromatin immunoprecipitation (ChIP) identified Trpc3 as a target of MeCP2 transcriptional regulation. BDNF mRNA and protein levels are also lower in the Mecp2 mutant hippocampus and dentate gyrus granule cells, which is reflected in impaired activity-dependent release of endogenous BDNF. These results identify the gene encoding TRPC3 channels as a MeCP2 target and suggest a potential therapeutic strategy to boost impaired BDNF signaling in RTT (Li et al., 2012).

POSSIBLE INVOLVEMENT OF MICROGLIAL INTRACELLULAR Ca2+ SIGNALING MODULATED BY BDNF IN ASDs

In the rodent brain, microglial cells express BDNF mRNA (Elkabes et al., 1996) and secrete BDNF following stimulation with LPS (Nakajima et al., 2001). BDNF released from activated microglia induces the sprouting of nigrostriatal dopaminergic neurons (Batchelor et al., 1999), causes a shift in the neuronal anion gradient (Coull et al., 2005), and promotes the proliferation and survival of microglia themselves (Zhang et al., 2003). In addition, Parkhurst et al. (2013) showed that Cre-dependent removal of BDNF from microglia induces deficits in multiple learning tasks, mediated by a reduction in learning-dependent spine elimination/formation. These findings suggest that microglia serve important physiological functions in learning and memory by promoting learning-related synapse formation through BDNF signaling. We have reported that BDNF induces a sustained increase in [Ca2+]i through binding to the truncated tropomyosin-related kinase B receptor (TrkB-T1), resulting in activation of the PLC pathway and SOCE in rodent microglial cells. Sustained activation of SOCE occurred after a brief BDNF application and contributed to the maintenance of the sustained [Ca2+]i elevation. Pretreatment with BDNF significantly suppressed the release of NO from activated microglia. Additionally, pretreatment with BDNF suppressed the IFN-γ-induced increase in [Ca2+]i while raising basal [Ca2+]i levels in rodent microglial cells (Mizoguchi et al., 2009). Thereafter, we observed that TRPC3 channels contribute to the maintenance of the BDNF-induced sustained intracellular Ca2+ elevation. Immunocytochemistry and flow cytometry also revealed that BDNF rapidly up-regulates the surface expression of TRPC3 channels in rodent microglial cells. The BDNF-induced up-regulation of the surface expression of TRPC3 channels also depends on activation of the PLC pathway, as previously shown by others (van Rossum et al., 2005). In addition, pretreatment with BDNF suppressed the production of NO induced by TNFα, which was prevented by co-administration of a selective TRPC3 inhibitor, Pyr3. These findings suggest that TRPC3 channels could be important for the BDNF-induced suppression of NO production in activated microglia. We provided the first direct evidence that rodent microglial cells are able to respond to BDNF and that TRPC3 channels could play important roles in microglial functions. Hall et al. (2009) had previously demonstrated the involvement of the basal level of [Ca2+]i in the activation of rodent microglia, including NO production.
The BDNF-induced elevation of basal [Ca2+]i levels could regulate microglial intracellular signal transduction so as to suppress the release of NO induced by IFN-γ (Hoffmann et al., 2003; Mizoguchi et al., 2009). We observed that pretreatment with BDNF also suppressed the production of NO in murine microglial cells activated by TNFα, which was prevented by co-administration of Pyr3. We also found that pretreatment with both BDNF and Pyr3 did not elevate basal [Ca2+]i in rodent microglial cells. These findings suggest that the BDNF-induced elevation of basal [Ca2+]i levels mediated by TRPC3 channels could be important for the BDNF-induced suppression of NO production in rodent microglial cells. Although the mechanism underlying the activation of TRPCs via PLC stimulation is still not completely resolved, TRPC3, like TRPC6 and TRPC7, can be activated directly by DAG. The trafficking of TRPC3 channels to the plasma membrane depends on interactions with Cav-1, Homer1, PLC-γ, VAMP2 and RNF24 (de Souza and Ambudkar, 2014). In addition, phagocytic activity is suppressed by pharmacological inhibitors of SOCE in murine microglial cells (Heo et al., 2015). Altogether, these findings suggest that BDNF might have an anti-inflammatory effect through the inhibition of microglial activation, and that TRPC3 could play important roles not only in inflammatory processes but also in synapse formation through the modulation of microglial phagocytic activity in the brain. Additional studies are needed to identify the molecular mechanisms that determine the trafficking and activity of TRPC3 channels, and to clarify how BDNF up-regulates surface TRPC3 channels within these mechanisms (Mizoguchi et al., 2014a,b).

FIGURE 1 | Schematic illustration representing microglial intracellular Ca2+ signaling mediated by canonical transient receptor potential 3 (TRPC3) channels and the tripartite synapse. In microglia, brain-derived neurotrophic factor (BDNF) induces a sustained increase in [Ca2+]i through binding of the truncated TrkB receptors (TrkB-T), resulting in activation of the phospholipase C (PLC) pathway. Up-regulation of cell-surface TRPC3 channels occurs after a brief treatment with BDNF and contributes to the maintenance of the BDNF-induced sustained intracellular Ca2+ elevation. The BDNF-induced elevation of basal [Ca2+]i levels mediated by TRPC3 channels could be important for the BDNF-induced suppression of nitric oxide (NO) production induced by TNFα or IFN-γ. Microglial intracellular Ca2+ signaling is also important for microglial functions such as phagocytosis in the brain. The tripartite synapse consists of the presynaptic (glutamatergic) terminal, postsynaptic terminal, astrocytes and microglia. Dysregulation of normal microglial functions, including incorrect synaptic pruning, failure of phagocytosis of apoptotic neurons and abnormal secretion of inflammatory cytokines, might be underlying mechanisms of neurodevelopmental disorders such as autism spectrum disorders (ASDs). On the other hand, the effects of proBDNF on microglial functions are not fully understood. Further work will be needed to elucidate the role of proBDNF in microglial cells by focusing on intracellular Ca2+ signaling mediated by TRPC channels.

Using multiple models, including patients' dental pulp cells (DPCs), neural cells derived from induced pluripotent stem cells (iPSCs) and mouse models, Griesi-Oliveira et al.
(2015) recently reported that loss-of-function mutations of TRPC6 are a novel predisposing factor for ASDs, suggesting that dysfunction of Ca2+ signaling mediated by TRPC6 contributes to altered neuronal development, neuronal morphology and synaptic function in ASDs. It is not well known whether microglial TRPC6 channels also serve important physiological roles in the alteration of synaptic function in ASDs.

FUTURE PROSPECTS

Elevation of intracellular Ca2+ is important for the activation of microglial cell functions, including proliferation and the release of NO, cytokines and BDNF. It has been shown that alteration of intracellular Ca2+ signaling underlies the pathophysiology of neurodevelopmental disorders including ASDs. BDNF induces a sustained intracellular Ca2+ elevation through the up-regulation of the surface expression of TRPC3 channels in rodent microglial cells. Microglial cells are able to respond to BDNF, which may be important for the regulation of inflammatory responses and may also be involved in the normal development of the CNS. BDNF is first synthesized as the precursor protein proBDNF, which is then proteolytically cleaved, either intracellularly or by extracellular proteases such as metalloproteinases and plasmin, to mature BDNF. Interestingly, interaction of mature neurotrophins with Trk receptors leads to cell survival, whereas binding of proBDNF to p75NTR leads to apoptosis. In addition, mature BDNF and proBDNF facilitate long-term potentiation (LTP) and long-term depression (LTD), respectively, at hippocampal CA1 synapses. Thus, Trk and p75NTR preferentially bind mature and pro-neurotrophins, respectively, to elicit opposing biological responses in the CNS (Greenberg et al., 2009). Indeed, a recently published report shows that pruning of spines promoted by proBDNF is mediated by p75NTR-RhoA signaling, while maturation of spines induced by mature BDNF occurs through stimulation of TrkB-Rac1 signaling (Orefice et al., 2016). However, the effects of proBDNF on microglial cells are not fully understood. Thus, further work will be needed to elucidate the role of proBDNF in microglial cells by focusing on TRPC channels (Figure 1).

Oxytocin (OT) is a pituitary neuropeptide hormone synthesized in the paraventricular and supraoptic nuclei of the hypothalamus. Like other neuropeptides, OT can modulate a wide range of neurotransmitter and neuromodulator activities. OT is secreted into the systemic circulation to act as a hormone, thereby influencing several body functions. OT plays a pivotal role in parturition, milk let-down and maternal behavior, and has been demonstrated to be important in the formation of pair bonds between mothers and infants as well as in mating pairs. Furthermore, OT has been shown to play a key role in the regulation of several behaviors associated with neuropsychiatric disorders, including social interactions, social memory, responses to social stimuli, decision-making in the context of social interactions, feeding behavior and emotional reactivity. An increasing body of evidence suggests that dysregulation of the oxytocinergic system might be involved in the pathophysiology of neurodevelopmental disorders such as ASDs (Romano et al., 2016). In a functional magnetic resonance imaging study, single-dose intranasal administration of OT was shown to increase the frequency of nonverbal-information-based judgments, shorten response times, and enhance brain activity in the medial prefrontal cortex in participants with ASDs (Watanabe et al., 2014).
Thus, there is significant potential for OT to ameliorate some aspects of the persistent and debilitating social impairments in individuals with ASDs (Alvares et al., 2016). Although there is no clinical use of minocycline in ASDs, prenatal minocycline treatment can alter the expression of PSD-95 and ameliorate abnormal mother-infant communication in oxytocin receptor (Oxtr)-deficient mice (Miyazaki et al., 2016). This finding suggests that minocycline has therapeutic potential for ASD-like phenotypes mediated by OT/Oxtr signaling (Nakagawa and Chiba, 2016). In addition, OT suppressed both the mRNA expression of TNFα, IL-1β, COX-2 and iNOS and the elevation of [Ca2+]i in LPS-stimulated microglial cells (Yuan et al., 2016). These findings suggest that OT could be a potential therapeutic agent for alleviating neuroinflammatory processes in ASDs. However, the effects of OT on microglial intracellular Ca2+ signaling are not fully understood. Thus, it will be important to study the effects of OT on microglial cells, especially by focusing on TRPC channels.

CONCLUSIONS

There is increasing evidence suggesting that the pathophysiology of neurodevelopmental disorders is related to inflammatory responses mediated by microglial cells. In addition, recent advances in the understanding of microglial functions suggest an important role for these cells in the normal development of the CNS, in addition to their traditional role as immune cells of the brain. Dysregulation of normal microglial functions, such as the regulation of programmed cell death and/or synaptic pruning, is increasingly implicated in ASDs associated with cognitive deficits. These findings have resulted in a new model of the synapse as "tripartite," recognizing the important role of not just neurons and astrocytes but also microglia in the normal physiological function of the brain. We now need to explore the emerging dual, inflammatory and non-inflammatory, roles of microglia in the brain, and recent findings reinforce the importance of intracellular Ca2+ signaling for microglial functions both in normal neurodevelopment and in their potential contribution to neurodevelopmental disorders such as ASDs.

AUTHOR CONTRIBUTIONS

YM and AM wrote this article.
5,987.8
2017-03-17T00:00:00.000
[ "Biology", "Medicine" ]
Power Aware Mobility Management of M2M for IoT Communications The Machine-to-Machine (M2M) communications framework is evolving to sustain faster networks with the potential to connect millions of devices in the coming years. M2M is one of the essential capabilities for implementing the Internet of Things (IoT). Therefore, various organizations are now focusing on incorporating improvements into their standards to support M2M communications. The Heterogeneous Mobile Ad Hoc Network (HetMANET) can be considered an appropriate setting for M2M challenges. These challenges arise when a mobile node (MN) selects a target network through energy-efficient scanning for an efficient handover. To cope with these constraints, we propose a vertical handover scheme for handover triggering and the selection of an appropriate network. The proposed scheme is composed of two phases. Firstly, the MNs perform handover triggering based on the optimization of the Received Signal Strength (RSS) from an access point/base station (AP/BS). Secondly, the network selection process is performed by considering the cost and energy consumption of a particular application during handover. Moreover, if more networks are available, the MN selects the one providing the highest quality of service (QoS). The decision regarding the selection of available networks is based on three metrics: cost, energy, and data rate. Furthermore, the selection of an AP/BS within the selected network is based on five parameters: delay, jitter, Bit Error Rate (BER), communication cost, and response time. The numerical and experimental results are compared in terms of the energy consumption of an MN, traffic management on an AP/BS, and the QoS of the available networks. The proposed scheme efficiently optimizes the handover-related parameters and shows significant improvement over existing models used for similar purposes.

1. Introduction

Adaptation to heterogeneous access networks and efficient use of available resources are among the key challenges for the next generation of mobile communication. M2M is practiced in miscellaneous settings such as the electronic smart grid, connected cars, body area networks, and Android-based communication using vehicular technologies [1]. M2M communication is an important enabling technology for the Internet of Things (IoT), and connecting such an outsized number of devices gives rise to a large number of research challenges [2]. In 3GPP terminology, M2M communication is usually referred to as Machine-Type Communication (MTC) [3-5].

Over the last few years, the number of heterogeneous networks available at a particular location has increased noticeably [6]. Different communication networks have inherent characteristics in terms of handover failure, energy consumption, and cost, which determine their communication performance. To highlight the eventual role of M2M communication in developing wide-ranging connections among various devices, the potential of HetMANETs cannot be neglected.

A handover process starts when a machine experiences a weak RSS from its serving BS/AP. When the RSS reaches a predefined threshold, the machine (MN) starts to search for available networks. The handover time mainly depends on the scanning delay of the available networks. Furthermore, an optimal network can be selected for an effective handover among the available networks on the basis of price, security, transmission rate, and quality of service (QoS).
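As a concrete illustration of this trigger-then-scan logic, the sketch below combines an RSS threshold with a hysteresis margin to damp ping-pong handovers. The dBm values and the margin are assumed for the example; they are not parameters taken from this paper.

RSS_THRESHOLD = -85.0   # dBm; assumed trigger level
HYSTERESIS = 5.0        # dB; target must beat the serving RSS by this margin

def should_scan(serving_rss):
    # A weak serving signal triggers the search for candidate networks.
    return serving_rss < RSS_THRESHOLD

def should_handover(serving_rss, target_rss):
    # Commit only if the target clearly beats the serving AP/BS.
    return target_rss > serving_rss + HYSTERESIS

for rss in (-70.0, -84.0, -88.0):       # serving RSS as the MN moves away
    if should_scan(rss):
        target = -78.0                  # best candidate found by the scan
        print(rss, "dBm -> scan; handover:", should_handover(rss, target))
    else:
        print(rss, "dBm -> stay on the serving AP/BS")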
Employing the available technologies for MTC leads to various challenges, including the selection of the best network for handover, incompatibility among different networks, and handover delay. To address these challenges in a HetMANET, an efficient, organized handover management scheme is required that can switch communication data from one network to another with minimal packet loss and delay. When a device moves from one BS/AP to another, it executes a discovery mechanism to search for nearby BSs/APs and then establishes a connection with the one offering the higher QoS. The selection of an inappropriate network introduces a long handover time and delay into the handover process. This handover delay can be minimized by taking into account factors such as RSS, data rate, available bandwidth, and the Signal to Interference and Noise Ratio (SINR) from a BS [7, 8].

In 2008, the International Telecommunication Union Radiocommunication Sector (ITU-R) defined new specifications for the 4G standard, called International Mobile Telecommunications-Advanced (IMT-Advanced). IMT-Advanced supports 100 Mbit/s for high-mobility connections and 1 Gbit/s for low-mobility connections [9]. With the increase in data rates, technologies such as WiFi and WiMAX participate actively in new technology development. Such development is urgently needed, since these networks face key compatibility constraints.

IEEE 802.21, published in 2008, is the Media Independent Handover (MIH) standard for seamless handover between networks of the same or different types [10]. Recently, much research has been performed to improve the currently available MIH standard [11-13]. The MIH standard still faces several challenges: (i) a long handover time is required when the Media Independent Information Service (MIIS) server is located many hops away, (ii) the time available for the handover process is very short when handovers are frequent in a handover region, and (iii) the failure of a hop requires alternate routes to reach the MIIS server, which can increase handover time. In the MIH standard, the MN initiates handover upon receiving an RSS below the predefined threshold. The farther away the MIIS server is located, the longer the MN needs to obtain the information about the available networks. If, after a route failure, the alternate route to the MIIS server consists of several hops, the time required for handover will increase further, which in the worst case may break the connection during the handover process. The MIH standard utilizes RSS for handover initiation, and RSS-based triggering suffers from problems such as wrong network selection and too-early or too-late handover, as shown in Figure 1.

Therefore, to address the challenges above, this paper introduces a QoS-based efficient handover scheme in which a BS collects information about the available networks in advance on behalf of the MN. In this scheme, the MN does not wait long in a handover region to collect information about the available networks; whenever the MN needs this information, it is available one hop away at the BS. This straightforward scheme has many advantages, including the following: (i) the MN's search for nearby BSs/APs is performed efficiently, (ii) the proposed scheme is not affected by the number of incoming connections, and (iii) the MN does not consume excess energy when scanning for nearby networks.
The remainder of this paper is organized as follows. Section 2 gives a brief background on existing schemes and the shortcomings of handover for M2M communications in HetMANETs. Section 3 proposes an energy-efficient vertical handover scheme for M2M communication. Section 4 discusses a detailed analytical and simulation analysis. Finally, Section 5 offers a conclusion.

2. Related Work

In the last decade, various schemes have been proposed for the improvement of handover management in HetMANETs. Most of these schemes are based on the optimization of different parameters necessary for handover; optimizing these parameters reduces the handover time and latency. Over time, the number of new access networks has increased rapidly, producing signaling overhead and other issues related to the handover phenomenon. Similarly, new access technologies such as LTE-Advanced and Bluetooth 4.0 Low Energy were introduced to save communication time and energy. All recent technologies try to provide their customers with the best QoS, and the QoS experienced can be enhanced if a consumer is provided with a continuous connection across different networks.

To improve the QoS of a network, a scheme based on optimizing parameters such as Bit Error Rate, delay, jitter, and data rate is proposed in [14]. The handover decision is made using fuzzy logic and analytic hierarchy approaches. The scheme uses context information, such as network-related information, user preferences, and service requirements, for an efficient handover process. MNs periodically check the RSS level from the current AP/BS; if the RSS drops below a particular level, the MN initiates the network selection phase. A network quality scoring function is defined to evaluate the QoS of a network, and the network with the highest QoS is selected for the handover. However, because the scheme makes the handover initiation decision solely on the basis of the available RSS, the number of false handover indications increases considerably [15].
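The network quality scoring idea in [14] can be caricatured with a plain weighted sum over normalized metrics. The actual scheme uses fuzzy logic and the analytic hierarchy process; the weights and metric values below are invented for the example.

# All metrics are assumed normalized to [0, 1] across the candidate set;
# lower-is-better metrics are inverted so a higher score means a better network.
WEIGHTS = {"delay": 0.3, "jitter": 0.2, "ber": 0.2, "cost": 0.15, "rate": 0.15}

def score(m):
    return (WEIGHTS["delay"] * (1 - m["delay"])
            + WEIGHTS["jitter"] * (1 - m["jitter"])
            + WEIGHTS["ber"] * (1 - m["ber"])
            + WEIGHTS["cost"] * (1 - m["cost"])
            + WEIGHTS["rate"] * m["rate"])

candidates = {
    "WiFi": {"delay": 0.2, "jitter": 0.3, "ber": 0.10, "cost": 0.1, "rate": 0.6},
    "LTE":  {"delay": 0.4, "jitter": 0.2, "ber": 0.05, "cost": 0.7, "rate": 0.9},
}
best = max(candidates, key=lambda name: score(candidates[name]))
print("handover target:", best)   # WiFi wins here, mainly on cost and delay

A weighted sum makes the trade-off explicit: a cheap network can beat a faster one once the cost weight is non-trivial, which is the kind of balance the fuzzy-logic formulation in [14] encodes less directly.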
In next-generation networks, the MN will be provided with multiple optimized routes to send data from one end to another. During a handover process, selecting an optimized route for data transfer is a challenging task. A scheme has been proposed to provide the MN with an optimized route after the handover has been processed [12]; it efficiently reduces the handover latency and achieves fast recovery of the optimized path. A similar scheme has been proposed in [13], which refines route optimization for tunnel establishment to buffer packets during handover. That scheme efficiently solves the buffer overflow problem in proactive handover techniques: packet loss and handover delay due to buffer overflow are significantly minimized. However, many issues remain open regarding the selection of the optimal network during handover. Route optimization not only helps balance traffic on a particular AP/BS, but also maximizes the probability of new connections on an optimized route.

The energy consumed during network selection is a major factor in a handover process in HetMANET, and the energy consumption of an MN directly depends on the application running during the handover. An energy-efficient handover scheme for multimedia-based applications is proposed, which utilizes the concept of adapt-or-handover for balancing multimedia traffic during a handover process [17]. The proposed scheme saves energy at the price of an insignificant degradation in QoS. A single-objective handover management cannot be adopted as a generic solution. However, energy could also be conserved through real-time power management schemes in M2M communications, mobility management, probabilistic modeling, and graph-based M2M communications [18-21]. Furthermore, similar gains are achieved by optimizing data transmission in device-to-device communication and WSNs based on an advanced clustering scheme [22,23]; such schemes are based on the received signal strength of the sensor nodes.

The selection of a less expensive network with the best QoS during handover leads to a smooth transition of an ongoing session from one network to another. Therefore, cost optimization must be considered during a handover process. A cost-aware handover decision scheme is proposed which uses two cost functions, namely triggering and priority decision [24]. Both functions are optimized for the best values of signal transmission quality, handover signaling cost, handover latency, and estimated interference. The proposed scheme efficiently transfers an ongoing session from one cell to another after checking the cost of the adjacent cells. However, some parameters, such as data rates and data-rate-based costs, are still not addressed in the current schemes. Therefore, we propose a solution that considers multiple parameters that affect the quality of a handover process in M2M.

Proposed Scheme

This section presents our proposed handover scheme for M2M in detail. Figure 2 delineates the architecture of the handover scheme that M2M practices. Multiple BSs and APs are deployed in a large geographic area comprising different networks (HetMANET), in which an MN moves from one network to another, performing handovers.

Assumptions and Definitions. In this section, we present the assumptions made during the design of our network and simulation model. Some scenario-related definitions are also given.
Assumption 1 (heterogeneous devices). All the MNs have different configurations; that is, their battery requirements differ from each other.

Assumption 2 (communication radius model). A BS b has a communication radius r centered at b. It can be defined as CR(b, r) = {n ∈ N : d(n − b) ≤ r}, where CR represents the communication radius, N represents the set of deployed nodes, and d(n − b) is the distance between the BS b and node n in the M2M network.

Definition 3 (Medium Scale Network). If all the MNs have direct communication access to the BS/AP, then the network is considered a Medium Scale Network (MSN). Suppose that, in any environment, an M2M network comprising 100 MNs deployed in an area of 100 m × 100 m is considered an MSN. This definition can be modeled as ∀n ∈ N, |d(n − BS)| < r, where n is an MN among the set N of deployed MNs, d(n − BS) is the distance between any deployed network node n and the BS, and r is the communication radius of node n.

Definition 4 (Large Scale Network). If any deployed MN does not have direct communication access to the BS/AP, then the network is considered a Large Scale Network (LSN). Suppose that, in any environment, an M2M network comprising 100 MNs deployed in an area of 200 m × 200 m is considered an LSN. This definition can be modeled as ∃n ∈ N : |d(n − BS)| > r, where n is a node among the set N of deployed nodes, d(n − BS) is the distance between any deployed network node n and the base station, and r is the communication radius of node n.
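To make Definitions 3 and 4 concrete, the following Python sketch classifies a deployment as MSN or LSN by testing whether every MN lies within the BS's communication radius. The coordinate representation and function names are assumptions made for illustration, not part of the scheme itself:

```python
import math

def classify_network(nodes, bs, radius):
    """Classify a deployment per Definitions 3 and 4: MSN if every MN lies
    within the BS's communication radius r, LSN otherwise.

    nodes  -- list of (x, y) MN positions (hypothetical representation)
    bs     -- (x, y) position of the base station
    radius -- communication radius r of the BS
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # MSN: for all n in N, d(n - BS) < r; LSN: some node is out of range.
    return "MSN" if all(dist(n, bs) < radius for n in nodes) else "LSN"

# Example: a 100 m x 100 m area with a centrally placed BS of radius 80 m.
print(classify_network([(10, 10), (90, 90)], (50, 50), 80))      # MSN
print(classify_network([(10, 10), (190, 190)], (100, 100), 80))  # LSN
```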
3.2. Overview. An MN can perform a handover from one Access Network Operator (ANO) to another upon weak link connectivity. The MN obtains the cost and data rate information of the available networks from the MIIS server during handover to select the target network. The MIIS server stores the geographical locations of the points of attachment (PoAs) of an ANO. Every ANO needs to send information regarding cost packages and data rates to the MIIS server; if an ANO updates either its cost model or its data rate information, it also updates this information in the MIIS server. Figure 2 delineates the fundamental idea of the proposed scheme, which consists of three phases: (1) handover triggering, (2) network selection, and (3) handover execution.

Handover Triggering Phase. In the proposed scheme, we use a threshold mechanism for handover triggering. This means that handover is triggered if the RSS from the current network drops below a predefined threshold. An optimal threshold mechanism reduces the number of false handover indications as well as the number of handover failures to a network with overloaded APs/BSs. We set the RSS threshold level on the boundary of the coverage area.

Let R represent the radius of the coverage area of the AP or BS. According to the signal propagation model [25], the threshold should be set based on the distance (1 − α) × R from the AP or BS, where α represents the fluctuation produced by variation in network data rate dynamics; the value of α is taken between 0 and 1. The threshold is given by the following equation:

RSS_th = K1 − 10 · K2 · log10((1 − α) · R),    (1)

where K1 represents the antenna gain and signal wavelength and K2 represents the path loss factor. Most traditional approaches trigger handover directly on the raw RSS, which leads to false handover indications. To avoid these problems, we optimize the RSS value using (1), which considerably reduces false handover indications, as shown in Figure 3. To elaborate the proposed handover triggering phase, we consider a reference example based on Figure 3 [26]. In 3GPP, various handover measurement techniques that support mobility are defined [27,28]. However, handover triggering based on RSS and Time-to-Trigger (TTT) is usually used for horizontal handover in LTE systems, since its simplicity and efficiency make it easy to implement [29]. As shown in Figure 3, the MN periodically measures the RSS of the neighboring APs/BSs. If the RSS of a candidate AP/BS is greater than the RSS of the currently attached one, the MN sets a timer to TTT seconds and starts to observe RSS_candidate and RSS_attached. If RSS_candidate > RSS_attached holds continuously during the TTT seconds, the MN performs handover to the candidate AP/BS (case 1). However, if RSS_candidate < RSS_attached occurs during the TTT seconds, the MN stops the observation and returns to its initial state (case 2).
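The following Python sketch illustrates the TTT-based triggering logic of cases 1 and 2. The sampling scheme and names are assumptions for illustration, not the paper's code:

```python
def should_trigger_handover(rss_attached, rss_candidate, ttt_samples):
    """Hysteresis-style triggering sketch: handover is triggered only if
    the candidate AP/BS outperforms the attached one for every sample
    observed during the Time-to-Trigger (TTT) window (case 1); otherwise
    the MN returns to its initial state (case 2).

    rss_attached, rss_candidate -- RSS samples (dBm) taken at the same
    instants; the window length ttt_samples stands in for TTT seconds.
    """
    window = zip(rss_attached[-ttt_samples:], rss_candidate[-ttt_samples:])
    return all(cand > att for att, cand in window)

# Candidate stays stronger for the whole window -> trigger (case 1).
print(should_trigger_handover([-80, -82, -85], [-75, -74, -73], 3))  # True
# Candidate dips below the attached network -> no trigger (case 2).
print(should_trigger_handover([-80, -82, -85], [-75, -86, -73], 3))  # False
```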
Network Selection. The network selection phase is further divided into the following subsections.

(i) Cost. To select the new network for an optimal handover, we need to consider the cost of the applications used by the MN. The target network assigns the same cost as the old network to continue the movement of the MN in HetMANET. If a target network does not provide a cost equal to that of the old network, the MN selects a new cost from the target network that is acceptable to it; otherwise, the MN experiences a long delay and even connection breakage during handover. The MN normally runs different types of applications; multimedia applications require more cost compared to elastic applications. We assign a particular cost weight to each category of application, as listed in Table 1. Assuming that there are n applications, the total weight W of all the applications running on the MN is

W = Σ_{i=1}^{n} w_i,    (2)

where w_i represents the weight of application i. The value of each weight is taken from 0 to 1, depending on the priority of the application; an application with the highest priority is assigned the largest weight. For instance, if the MN's device is running a real-time streaming application, it will be assigned the highest weight, since a streaming application can tolerate a handover delay of only 150 ms and a packet loss of 3% [30]. Therefore, the MN selects a network with the lowest possible cost that has the potential to run a particular application while switching from one network to another.

(ii) Energy. In HetMANET, the MN consumes a significant amount of energy scanning the available PoAs. In particular, an application with high priority needs more energy, since it requires fast scanning. Depending on the density of the medium (APs and BSs) of the network, the interface for a particular network is periodically switched between sleep and active states. The energy E required by the MN for scanning a particular PoA of a network is given by

E = P × T,    (3)

where P is the power required by the MN for scanning a PoA of an access network and T represents the time taken for the interface scan.

The energy required for scanning during handover depends on the application used by the MN. If the request has a high priority, the MN performs fast scanning; in this case, the energy required for scanning will be high. In traditional approaches, the scanning procedure is uniform throughout and depends mainly on the RSS from an AP/BS, which leads to a high packet loss. Thus, we restrict the scanning energy consumption depending on the applications run by the MN during scanning. The proposed energy-efficient scanning procedure significantly reduces the energy consumption of the MN during handover.

(iii) QoS Computation. The network selection phase is an important factor in a handover management scheme. When the MN is moving across the HetMANET, it performs handovers, switching from one network to another. For an efficient handoff, we should choose the target network that provides acceptable cost and a sufficient data rate for the applications running on the MN; we also need to minimize the energy spent scanning a PoA for the handoff. When the MN is moving away from the current ANO and the RSS drops below a predefined threshold, it selects the target network for handoff using three metrics: cost, data rate, and energy. In our scheme, we introduce a QoS function that integrates the three metrics to select an optimal network:

QoS = w_C · ln(1/C) + w_D · ln(D) + w_E · ln(1/E),    (4)

where C, D, and E represent cost, data rate, and energy, respectively, and w_C, w_D, and w_E are their weights. The MN obtains the cost and data rate values of the available networks from the MIIS server, while the energy consumed by an interface for a particular application is computed by the MN itself. The weights are assigned depending on the priority of the application. This means that an application with a high data rate requires extra cost compared to an application with less data. Similarly, every application requires a different data rate depending on its nature; for instance, a real-time application requires a higher data rate than an elastic application. Therefore, a target network is selected on the basis of the cost, data rate, and energy required by an interface of the MN during handover, using (4). Among the n available networks, the network j with the maximum QoS_j, where j = 1, …, n, is selected as the target.
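A minimal Python sketch of this selection step follows. It assumes the logarithmic form of (4) as reconstructed above, with reciprocals for the cost and energy metrics since they are to be minimized; the candidate values and weights are hypothetical:

```python
import math

def qos_score(cost, data_rate, energy, w_cost, w_rate, w_energy):
    """QoS function in the spirit of (4): data rate raises the score,
    while cost and energy lower it (hence the reciprocals)."""
    return (w_cost * math.log(1.0 / cost)
            + w_rate * math.log(data_rate)
            + w_energy * math.log(1.0 / energy))

# Hypothetical candidates: (cost per MB, data rate in Mbit/s, scan energy in J).
candidates = {"WiFi": (0.01, 54, 0.8), "WiMAX": (0.05, 70, 1.2), "UMTS": (0.10, 7.2, 1.5)}
weights = (0.3, 0.5, 0.2)  # a data-rate-heavy profile, e.g. for streaming
best = max(candidates, key=lambda n: qos_score(*candidates[n], *weights))
print(best)  # the network with the maximum QoS under these weights
```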
(iv) Optimal AP/BS Selection. Once a particular network is selected for handover, the next step is to select an appropriate AP/BS. Competition among different networks is increasing every day: each network tries to provide the best QoS with low cost and high data rate to its users, and every network attempts to deploy APs/BSs everywhere to provide the MN with "always best connected" functionality. To achieve similar functionality in our scheme, we embed a handover decision-making model that provides the MN with the best AP/BS in a HetMANET environment. Several decision-making schemes are available in the literature, and the TOPSIS decision model has remarkable applications in handover management [31-33]. Therefore, we use the TOPSIS decision-making scheme to select one of the APs/BSs for handover. Two types of criteria are available for the selection of an AP/BS: the first type directly degrades the performance of an AP/BS, and the second type increases its performance. To minimize the imbalance between these two kinds of parameters, we choose only those parameters that directly affect the performance of an AP/BS. We choose five criteria for the selection: delay (d), jitter (j), Bit Error Rate (BER) (b), communication cost (c), and response time (t). The decision-making matrix D = [f_ij] is formed with one row per candidate AP/BS and one column per criterion.

The maximum and minimum values of criterion j over the m candidates are represented by f_j* = max_{1≤i≤m}(f_ij) and f_j° = min_{1≤i≤m}(f_ij), respectively. It is also important to normalize the decision-making matrix; therefore, we perform linear scaling based on the distance of each criterion value from the minimum:

r_ij = (f_ij − f_j°) / (f_j* − f_j°).

The superscript (*) is used to represent criteria after normalization.

The proposed approach is a purely user-based handover decision scheme. Therefore, we give the user the option to assign each criterion a particular weight w_j. These weights help in calculating the negative and positive ideal situations of a network; in particular, an AP/BS closer to the positive ideal situation is preferred for handover. The weighted normalized matrix V is given by

v_ij = w_j · r_ij.

After calculating the weighted normalized decision matrix, the next step is to compute the ideal situations. Since we chose only parameters that directly degrade the performance of an AP/BS, the maximum and minimum values in each column of the matrix represent the negative (A−) and positive (A+) ideal situations, respectively:

A+ = {min_i v_ij},    A− = {max_i v_ij}.

To check whether these ideal situations fulfil the requirements of an appropriate AP/BS, we compare them with the reference ideal situation. Similarly, TOPSIS ranks the available APs/BSs by comparing the ideal situations with reference situations. Therefore, we compute the distance of each candidate from A+ and A− using the following relations:

S_i+ = sqrt(Σ_j (v_ij − A_j+)²),    S_i− = sqrt(Σ_j (v_ij − A_j−)²),

where S_i+ and S_i− represent the degrees of the positive and negative ideal situations, respectively. The structure of these situations is illustrated in Figure 4. Finally, the optimal AP/BS is selected by computing the relative closeness degree C_i of each AP/BS:

C_i = S_i− / (S_i+ + S_i−).

If multiple APs/BSs are available in a HetMANET environment, one can compute the degree of each AP/BS and sort them to select the one with the highest degree. In general, the working of the TOPSIS decision model is summarized in Algorithm 6.
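Complementing Algorithm 6 below, here is a self-contained Python sketch of this step. It uses the linear min-max scaling described above rather than any particular library's TOPSIS, and the criterion values are hypothetical; it assumes all five criteria are cost-type, so smaller raw values are better:

```python
import math

def topsis_rank(matrix, weights):
    """Minimal TOPSIS sketch for AP/BS selection over cost-type criteria.

    matrix  -- list of rows, one per candidate AP/BS
    weights -- user-assigned weight per criterion
    Returns the index of the candidate with the highest relative closeness.
    """
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    # Linear scaling against the per-criterion minimum and maximum.
    norm = [[(cols[j][i] - min(cols[j])) / (max(cols[j]) - min(cols[j]) or 1)
             for j in range(n)] for i in range(m)]
    v = [[weights[j] * norm[i][j] for j in range(n)] for i in range(m)]
    vcols = list(zip(*v))
    a_pos = [min(c) for c in vcols]  # positive ideal: smallest cost values
    a_neg = [max(c) for c in vcols]  # negative ideal: largest cost values

    def dist(row, ref):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(row, ref)))

    closeness = [dist(v[i], a_neg) / (dist(v[i], a_pos) + dist(v[i], a_neg) or 1)
                 for i in range(m)]
    return max(range(m), key=lambda i: closeness[i])

# Three hypothetical APs: [delay ms, jitter ms, BER, cost, response ms].
aps = [[30, 5, 1e-4, 0.05, 40], [10, 2, 1e-5, 0.08, 25], [60, 9, 1e-3, 0.02, 70]]
print(topsis_rank(aps, [0.3, 0.1, 0.2, 0.2, 0.2]))  # index of the best AP/BS
```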
Algorithm 6 (TOPSIS decision model): (1) construct the decision-making matrix; (2) normalize the matrix and apply the user-assigned weights; (3) determine the ideal situations (positive and negative); (4) compute the separation measure of each situation; (5) compute the relative closeness of each criterion to the ideal situation.

Handover Execution. The MN performs handover execution after selecting the network with the highest QoS. The MN requests the serving AP/BS to connect to the network, and the AP/BS forwards this request to the MIIS server. The new network sends a connection response to the MN, and the MN performs handover to the new AP/BS. The MN then releases the resources and terminates the connection with the old network.

Performance Evaluation

In this section, we present the simulation results that highlight the benefits of the proposed handover triggering and network selection scheme. First, we show the advantages of the QoS-aware network selection scheme. Second, we perform experiments to check the handover decision model in dense and low-coverage HetMANET environments. Furthermore, we evaluate the working of the proposed scheme in the C programming language. The proposed approach is tested on three different networks, that is, WiFi, UMTS, and WiMAX. Different numbers of mobile nodes are tested in the proposed scenario with speeds ranging from 10 to 100 km/h, and applications are assigned randomly to each MN during initialization. In Figure 5, we show only two BSs of the WiMAX network and three BSs of the cellular network because of space limitations; in the actual simulation scenario, we used around 15 WiMAX and 20 cellular BSs, respectively. The simulation time is set according to the number of nodes: we test four sets of nodes, that is, 25, 50, 75, and 100, with simulation times of 30, 60, 90, and 120 minutes, respectively. The MIH implementation in NS 2.29 V3 does not include the MIIS server; therefore, we implemented the MIIS server to store the cost and data rate information of the available networks. Moreover, the proposed scheme is tested for a longer duration of time to check its performance and quality in high-speed and congested scenarios. The proposed M2M communication scenario in a HetMANET is shown in Figure 5.

In Figure 5, the MN is initially connected to AP1. After moving away from AP1, the MN finds three different types of networks, that is, BSc, BSw, and AP3, and must decide on a handover to one of them. Our proposed approach enables the MN to scan the available networks and compute the QoS of each network. The MN finds that BSw (WiMAX) provides the highest QoS; therefore, it chooses the WiMAX network for handover. The MN also uses the proposed decision model to check the available BSs of the WiMAX network; it finds BSw1 to have the highest degree and therefore performs handover to it. The MN then continues its movement in the proposed scenario. The handover is shown on the label attached to the MN in Figure 5.

The cost of the UMTS network is fixed, and the costs of the WiFi and WiMAX networks are generated randomly. We used two types of cost values for each network, that is, cost per minute and cost per data volume. The values of the selection criteria for the network as well as the AP and BS are given in Table 2. The energy consumption values for each interface are taken randomly from the ranges present in Table 2.
Similarly, the RSS values are generated depending on the data rate. During the simulation, we periodically check the relation between data rate and RSS: when the data rate increases, the RSS decreases, as they are inversely proportional to each other.

We used five different parameters for the selection of the AP/BS of the target network. As previously discussed, these parameters are inversely related to the performance of an AP/BS. The values of all of these parameters depend on the distance between the MN and the AP/BS: if the MN is far from the AP/BS, the values are high, and as the MN moves closer to the AP/BS, the values decrease. Initially, we do not set any particular values for these parameters; the values change with the distance of the MN from the target AP/BS. Therefore, to model the variation of these parameters properly, we implement a location management system using coordinate geometry. The location management is simulated, and we obtain remarkable results. Finally, we do not simulate the handover execution phase and leave this to the network operator.

An interface requires high energy if the MN is running an application that requires a higher data rate. For example, a streaming application requires a higher data rate than an elastic application; therefore, a streaming application consumes more energy. Similarly, the MN consumes more energy on scanning if the number of available networks is high. Sometimes, the MN consumes unnecessary energy scanning available networks that are far away from it. Therefore, in the proposed approach, we perform a dynamic form of scanning based on the density (number of APs and BSs) of the medium. The results obtained from simulating the density-based scanning are shown in Figure 6. The results show that the energy consumption is significantly optimized by scanning only a particular set of APs and BSs, and the energy consumed on unnecessary scanning is greatly reduced. The performance of the proposed scheme shows that the energy previously consumed on scanning all of the available APs/BSs of all networks is now reduced to scanning one particular network.
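As a rough illustration of the density-based scanning idea, the following Python sketch restricts scanning to the PoAs of the already-selected target network and accounts for energy with the E = P × T model of (3). All names, values, and the optional cap are assumptions for the sketch, not the paper's simulator:

```python
def scanning_energy(poas, target_network=None, cap=None):
    """Total scanning energy when the MN scans only the PoAs of a chosen
    network (density-restricted scan) instead of every PoA of every network.

    poas -- list of (network, power_w, scan_time_s) tuples, one per PoA
    """
    candidates = [p for p in poas if target_network is None or p[0] == target_network]
    if cap is not None:             # optionally cap the scan in dense media
        candidates = candidates[:cap]
    return sum(power * t for _, power, t in candidates)  # E = sum of P x T

poas = [("WiMAX", 1.2, 0.4), ("WiMAX", 1.2, 0.4), ("WiFi", 0.8, 0.3),
        ("UMTS", 1.5, 0.5), ("WiFi", 0.8, 0.3)]
print(scanning_energy(poas))                          # scan everything
print(scanning_energy(poas, target_network="WiMAX"))  # density-restricted scan
```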
We compute the device lifetime by running the simulation for a longer duration with different applications running on the MN's device. The MN is periodically assigned various applications and performs numerous handovers, ranging from hundreds to thousands. The device lifetime is recorded both with and without the proposed scheme. The efficient selection of the target network and AP/BS greatly reduces the energy required for scanning; therefore, the device lifetime is also increased. As shown in Figure 7, the device consumes two types of energy: (1) scanning and (2) running different applications.

We evaluate the energy consumption of the MN against the number of scanned APs/BSs. The MN scans only a particular set of APs/BSs and therefore requires less energy. In Figure 8, we compare the performance of the proposed energy optimization with the scheme presented in [34]. We compute the scanning time of an interface against the number of APs/BSs. Furthermore, we compute the average energy consumption of an application using all three interfaces; the results are calculated and drawn in Figure 8. If an application used by the MN has high priority, the MN needs fast scanning, which requires more energy than an application with lower priority. Therefore, we assign the MN the highest weight if the applications running on it have high priority, and vice versa; the range of energy weights is taken from 0 to 1.0. The proposed energy optimization requires less energy than the existing scheme due to the new power-aware interface management scheme.

An AP/BS can provide services to a limited number of MNs; as the number of MNs on an AP/BS increases, the QoS decreases. Therefore, we need traffic management that can efficiently balance the number of connections on a particular AP/BS. If the number of connections on an AP/BS exceeds a given threshold, the AP/BS should not accept any more connections; if an MN requests a connection to an overloaded AP/BS, the AP/BS rejects the connection request for that MN. To address this issue, we model the traffic on an AP/BS. Let C be the number of connections on an AP/BS, computed as

C = C_e + C_n,

where C_e represents the number of connections already present on the AP/BS and C_n represents new connections arriving at the AP/BS. If an AP/BS reaches the close state, it blocks any incoming connections. The probability of blocking a new connection can be represented as follows [24,35]:

P_b = 1 − s · (1 − p),

where p is the probability that a channel is busy rather than available and s is the state of the AP/BS, indicating whether it is in the open or closed state. We restrict s to either 1 (open state) or 0 (close state), so that P_b = p for an open AP/BS and P_b = 1 for a closed one. The computation of the blocking probability on an AP/BS is summarized in Algorithm 7.

We performed theoretical and experimental analyses of the proposed handover blocking probability, with the results in Figures 9 and 10. As shown in Figure 9, the theoretical and experimental results are almost identical, demonstrating that increasing the number of new connections consequently increases the blocking probability. Similarly, in Figure 10, the theoretical and experimental analyses give similar results: an increase in the mean number of connections raises the total blocking probability. The experimental and theoretical results are very close to each other, which shows the accuracy of the proposed approach. Employing the blocking probability approach effectively models the traffic on an AP/BS. Moreover, the MN does not scan those APs/BSs that are already in the close state. In this way, the proposed approach always provides the MN with only those APs/BSs that have channels available for new connections.

Similarly, we test every possible probability of an incoming connection on an AP/BS. The probabilities are divided into three data sets: the first set is 0.1-0.3, the second set is 0.4-0.7, and the third set contains 0.8-1.0, with a high probability of the close state in data set 3, an average close-state probability in data set 2, and a low probability in data set 1. On each channel of an AP/BS, we test all the possible probabilities from set 1.
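For intuition, here is a small Monte Carlo sketch of the open/close-state blocking behaviour. It is an illustrative simulation under an independent-channel assumption, not the paper's Algorithm 7 or its analytical model:

```python
import random

def blocking_probability(num_channels, arrivals, busy_prob, trials=10_000):
    """Estimate the blocking probability of an AP/BS: the AP/BS is open
    (s = 1) while at least one channel is free and closed (s = 0) otherwise.
    New connections count as blocked when the AP/BS is closed or has fewer
    free channels than arrivals. busy_prob is the per-channel occupancy
    probability; channel independence is an assumption of this sketch."""
    blocked = 0
    for _ in range(trials):
        free = sum(1 for _ in range(num_channels) if random.random() >= busy_prob)
        state_open = free > 0
        if not state_open or arrivals > free:
            blocked += 1
    return blocked / trials

# Higher channel occupancy (cf. data set 3, 0.8-1.0) blocks far more often.
for p in (0.2, 0.55, 0.9):
    print(p, blocking_probability(num_channels=8, arrivals=2, busy_prob=p))
```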
Furthermore, we compute the average blocking probability of an AP/BS when no particular channel is available for an incoming connection or all channels are already in use. In Figure 11, we show that the blocking probability is high for data set 3, since most of the channels are already occupied. For data sets 1 and 2, the close-state probability is considerably smaller than for data set 3. It is also shown that an increase in the number of incoming connections increases the blocking probability. In fact, upon the arrival of new connections at the AP/BS, the channels become occupied and the AP/BS switches to the close state. The close-state probability is evaluated as a function of p, h, and a, where p, h, and a are the probabilities of a channel being available or unavailable, the holding time distribution, and the probability that a new call attempt has been accepted, respectively.

Moreover, we investigated the handover initiation process at different distances from a target network. We gradually increased the velocity of the MN to check the variation in data rate dynamics. Each application is assigned a particular weight based on its data rate requirements; the range of weights is taken from 0 to 1. An application that requires a high data rate is assigned a high weight (nearly equal to 1.0), and an application that requires a low data rate is assigned a lower weight (nearly equal to 0). Table 2 shows the different values used in the performance evaluation of the data rate optimization phase.

The velocity of the MN is checked against the application's weight. The proposed scheme performs efficiently as both the velocity of the MN and the weight of the application increase, owing to the optimization of the parameters listed in Table 3. In Figure 12, the data rate with the proposed optimization is compared against the data rate without optimization. The proposed optimization efficiently addresses the problem of high MN velocity: as the velocity of the MN gradually increases, we observe that applications requiring a high data rate are shifted to the optimized data rate. Our proposed solution significantly mitigates the data rate problem for the MN during the handover process.
Finally, we compute the quality of each network available in the vicinity of the MN's current AP/BS. The MN computes the QoS of each network using (4), selects the network with the highest QoS, and proceeds to connect to it. Figure 13 delineates the performance of optimal network selection using different weights of data rate, cost, and energy, ranging from 0 to 1. The values obtained in the performance evaluations of Sections 4.1, 4.2, and 4.3 are classified into three data sets on the basis of the weights assigned to data rate, cost, and energy. In data set 1, the weight of the data rate is smaller than those of cost and energy; in data set 2, the weight of energy is smaller than those of data rate and cost; and in data set 3, the weight of cost is smaller than those of data rate and energy. The QoS of a network is tested against the user preference (random selection of applications from Table 1) in terms of the cost, data rate, and energy required during a handover process. The network QoS selection achieves notably good results for data set 3, which shows that most users prefer data rate over cost and energy. Moreover, we also compute the QoS of each network after each handover; the comparison of the technologies against the QoS is illustrated in Figure 14. The proposed scheme selects the available networks on the basis of the applications running on the MN's device.

Conclusion

In this paper, we proposed a QoS-based vertical handover scheme for M2M communications in HetMANET, which represents a multi-parameter optimization technique for the handover process. The proposed scheme efficiently obtains the communication cost information of all the available networks. The scanning of the available networks is performed based on the density of APs/BSs. Moreover, the MN optimizes the energy required by an interface for scanning and connecting to the new network, and the traffic on each AP/BS is optimized to provide the best connectivity and QoS to users. The handover initiation phase is triggered using the proposed optimal threshold scheme, due to which the number of failed handovers is significantly reduced. The parameter optimizations above are quantified in a network QoS function, which returns the suitable network for the application used by the MN during handover. The quantitative analysis shows the accuracy and strength of the proposed scheme. For future work, we plan to develop an optimization technique based on decision modeling as well as fuzzy logic.

Figure 10: Mean number of connections.
Figure 11: Performance analysis of traffic management.
Figure 12: Performance analysis of data rate optimization.
Figure 13: Optimization of QoS of a network.
Figure 14: Comparison of the technologies against the QoS.
Table 1: Application weight table.
Algorithm (proposed handover scheme, outline). (1) Handover triggering: for the n applications running on the MN, assign a weight to each application and compute W = Σ_{i=1}^{n} w_i; energy computation: with P the power required by the interface for scanning and T the scanning slot, E = Σ_{i=1}^{n} P_i × T_i; QoS computation: QoS = w_C · ln(1/C) + w_D · ln(D) + w_E · ln(1/E). (2) Network selection; cost computation: MN → BS → MIIS server.

In the traffic model, C_e represents the number of connections already present on an AP/BS and C_n is a new connection arriving at the AP/BS. We define a two-state Markov chain model for the AP/BS. In the first state, the AP/BS accepts new connections, since it has vacant channels available; in the second state, the AP/BS does not accept new connections, since no channels are available for incoming connections. We call the first state the open state and the second the close state. The probabilities of the close and open states depend on the leaving and joining of MNs, respectively.

Table 3: Simulation parameters used in data rate optimization.
9,058
2015-10-11T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
A previously uncharacterized Factor Associated with Metabolism and Energy (FAME/C14orf105/CCDC198/1700011H14Rik) is related to evolutionary adaptation, energy balance, and kidney physiology

In this study we use comparative genomics to uncover a gene with uncharacterized function (1700011H14Rik/C14orf105/CCDC198), which we hereby name FAME (Factor Associated with Metabolism and Energy). We observe that FAME shows an unusually high evolutionary divergence in birds and mammals. Through the comparison of single nucleotide polymorphisms, we identify gene flow of FAME from Neandertals into modern humans. We conduct knockout experiments on animals and observe altered body weight and decreased energy expenditure in Fame knockout animals, corresponding to genome-wide association studies linking FAME with higher body mass index in humans. Gene expression and subcellular localization analyses reveal that FAME is a membrane-bound protein enriched in the kidneys. Although the gene knockout results in structurally normal kidneys, we detect higher albumin in urine and lowered ferritin in the blood. Through experimental validation, we confirm interactions between FAME and ferritin and show co-localization in vesicular and plasma membranes.

Reporting summary (Nature Portfolio, March 2021)

Data analysis: Library preparation was performed using a 10x controller (10x Genomics) with the Single Cell 3' v3 chemistry; sequencing was performed using a HiSeq 3000 (Illumina). LC-MS/MS analyses of the peptide mixture were done using an Ultimate 3000 RSLCnano system connected to an Orbitrap Elite hybrid spectrometer (Thermo Fisher Scientific). For 3D visualization, the segmentation was done by an operator using a combination of Avizo 2022.2 (Thermo Fisher Scientific) and VG Studio MAX 3.4 (Volume Graphics GmbH, Germany). Statistical analysis was done using GraphPad Prism 9 software.
Data availability: The quantitative data are provided in the Source Data file. All custom-made scripts used in the analysis are available at https://github.com/ipoverennaya/RIK_paper. Two knockout mouse strains were generated for this manuscript; they will be available upon reasonable request and will also be deposited at the Jackson Laboratory. All other relevant data supporting the key findings of this study are available within the article and its Supplementary Information files or from the corresponding author upon reasonable request. Source data are provided with this paper.

Sample size: In our study, we did not perform a formal sample size calculation. Instead, we followed accepted practices in the mouse facilities where the research was conducted to determine the sample size. We initially conducted preliminary experiments to estimate the number of animals needed to achieve adequate statistical power for our tests. Furthermore, the number of animals involved in this research was chosen to reduce unnecessary animal use and to reach statistical significance with a two-sided Student's t-test. For the mass spectrometry experiments, we analyzed six independent replicates to ensure the reliability and reproducibility of our findings. For immunofluorescence, single-cell sequencing, and transmission electron microscopy, we used three independent samples each. These sample sizes were selected based on the same approach of balancing statistical power and accepted practices in the research community to yield robust effects.

Validation: To check whether the DE genes between wildtype and knockout are not sex-specific, we compared them with the list of the corresponding DE genes between female and male proximal tubule (PT) samples from Ransick et al. (2019). Genes whose adjusted p-values were less than 0.01 were excluded from the comparative analysis.

Data exclusions: For plotted graphs we did not exclude any data.

Replication: In vitro experiments were repeated a minimum of three times; all attempts were successful. Parameters related to mice were tested once in several animals (minimum of four); all attempts were successful.

Randomization: Laboratory animals were allocated to the experimental and control groups based on their genotype and sex; the same applied to the tissues harvested from these mice. In vitro experiments were allocated based on their treatment (control vs. inhibitor) or transfection condition.

Blinding: Blinding was not applicable to the study because the allocation of laboratory animals to experimental and control groups was based on their genotype and sex, and the tissues harvested from these animals were allocated accordingly. Additionally, the in vitro experiments were allocated based on treatment (control vs. inhibitor) or transfection condition.

Lotus tetragonolobus lectin: Lotus tetragonolobus lectin (LTL) encompasses a family of closely related glycoproteins with similar specificities toward α-linked L-fucose-containing oligosaccharides. Although many of the binding properties of Lotus lectin are similar to those of Ulex europaeus lectin I (UEL I), the binding affinities and some specificities for oligosaccharides differ significantly between these fucose-specific lectins. This fluorescein-labeled LTL features a ratio of fluorophores to lectin protein that provides optimal staining (excitation 495 nm, emission 515 nm). Supplied as a solution essentially free of unconjugated fluorophores, it is preserved with sodium azide.
The recommended inhibiting/eluting sugar is 50-100 mM L-fucose. https://doi.org/10.1016/j.celrep.2022.110473

Anti-VANGL1: The Vangl1 antibody is a mouse monoclonal IgG1 antibody provided at 200 µg/ml, specific for an epitope mapping between amino acids 281-298 within a cytoplasmic domain of Vangl1 of human origin. The Vangl1 antibody is recommended for detection of Vangl1 of mouse, rat, and human origin by WB, IP, IF, and ELISA; it is also reactive with additional species, including equine, canine, bovine, and porcine. https://doi.org/10.1371/journal.pgen.1007840

Anti-GFP: Chicken polyclonal antibody to GFP with over 2500 references: https://www.abcam.com/products/primary-antibodies/gfp-antibody-ab13970.html?productWallTab=ShowAll
1,520.4
2023-05-29T00:00:00.000
[ "Biology", "Medicine" ]
An EFSM-Based Test Data Generation Approach in Model-Based Testing

Testing is an integral part of software development. Current fast-paced system developments have rendered traditional testing techniques obsolete. Therefore, automated testing techniques are needed to keep pace with the speed of such system developments. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It was identified that the test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches was still manual. An automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach in MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the domain input classifications, which were done manually in the existing approach. The results showed that the proposed approach was able to detect 6.62 percent more faults than the conventional MB-TCG but at the same time generated 43 more tests. The proposed approach effectively detects faults, but further treatment of the generated tests, such as test case prioritization, should be applied to increase the effectiveness and efficiency of testing.

Introduction

In short, the International Software Testing Qualification Board (ISTQB) defines testing as the planning, preparation, and evaluation of a component or system to ensure that it adheres to specified requirements, prove that it is fit for purpose, and identify faults. Testing determines whether the system under test (SUT) fulfils the specified requirements agreed upon during requirements elicitation. At the same time, testing also aims to identify faults in the SUT caused by errors made in the code during software development or maintenance. One of the essential artefacts in testing is the test cases. Test case generation (TCG) is where test cases are created from the available test basis.

The necessity of effective and efficient testing is becoming more significant in current fast-moving system developments. In general, being effective means adequately achieving an intended purpose, while being efficient means accomplishing the intended purpose in the best way possible while saving time and effort. Therefore, based on the definition of testing given earlier, effective testing can be measured by how many faults can be detected with a given testing strategy [1], while efficient testing can be measured by the time or effort needed to detect faults [2]. According to a statistic by Memon et al. [3], every day at Google, 150 million tests execute on more than 13 thousand projects that require 800 thousand builds. In large-scale and fast-paced development projects, manually creating tests is time-consuming and susceptible to human errors. Therefore, the transition toward automatic TCG to make testing more effective and efficient is imperative, even more so in the current Industry 4.0 revolution. Some of the well-known automatic TCG techniques are symbolic execution, model-based testing (MBT), combinatorial testing (CT), adaptive random testing, and search-based testing [4]. Model-based TCG (MB-TCG) is based on MBT [5], which utilizes an automation process and SUT models to generate test cases.
By exploiting the system models, test cases representing the expected behaviours of the SUT can be developed promptly with little to no human effort. This study uses the extended finite state machine (EFSM) model because it has formal definitions that can ease the automatic processes in MBT that require consistency and accuracy. One of the advantages of using MBT is that the testing activity can commence in the early phases of software development. In addition, with MBT, it is possible to start the actual testing of the system under test (SUT) earlier. Another advantage of MBT is that contradictions and ambiguities in the requirements specification and the design documents can be identified and fixed during the design and specification phase. All in all, it has been shown that MBT can decrease the associated development cost while increasing software quality [6].

Although MB-TCG can increase the effectiveness and efficiency of testing, existing approaches in the literature exhibit a significant limitation, particularly regarding test data generation (TDG). TDG is one of the main aspects of MBT [7]; however, it is still one of the main challenges in automating MBT [8]. Manual TDG is still common in existing MB-TCG studies even though it is costly, complicated, and laborious. Automating TDG has high potential for reducing testing cost, since it can decrease human effort. The selection of test data is also crucial because it can affect the number of faults detected during testing. A study by Ahmad et al. [9] that reviewed MBT studies using activity diagrams discovered that more than half of the selected studies did not explicitly specify their TDG methods. This finding conveys that TDG in MBT is still undervalued, despite its importance. Therefore, more research is needed to improve the TDG in MB-TCG. More discussion regarding the TDG in existing MB-TCG approaches is presented later in Section 3.

CT is a technique that tests the SUT using a covering array test suite that contains all the possible t-way combinations of parameter values [10]. It is based on the idea that instead of exhaustively testing all possible parameter combinations of the SUT, only a subset of them is used, which satisfies some predefined combination strategies. Although only a subset is used, this technique's effectiveness in detecting faults is on par with exhaustive testing, which is guaranteed to detect all faults. This similarity in effectiveness is achievable because faults seem to result from interactions of only a few variables, so tests that cover all such few-variable interactions can be very effective [11]. CT is a suitable technique to address the TDG issue in MB-TCG because its implementation can be automated, so it can be added on top of the automation in MBT. In addition, the CT technique was chosen because MBT and CT complement each other very well [12]. CT can address the TDG issue in MBT because it deals with the interaction between input parameters. Meanwhile, MBT can address the issue in CT of having no model and no paths to guide the generation of structured and effective tests that ensure proper coverage of the SUT. MBT addresses this limitation by automatically generating paths that act as test cases, which the CT technique can use to guide the TDG. Last but not least, the CT technique was chosen because it can provide fewer test data for the proposed approach without significantly reducing the fault detection capability.
Motivated by the limitation in the existing MB-TCG approaches and the importance of TDG in MBT, this study proposes an automated TDG approach in MB-TCG using the EFSM model and the CT technique. The proposed approach adopts and modifies the approach proposed by Nguyen et al. [12], which also combined MBT and CT. However, the domain input classifications were manually specified in their study. This study automates the domain input classification step by taking advantage of the information available in an EFSM model. This improvement can further reduce the human intervention required in automating the TDG in MB-TCG while detecting more faults. The contributions of this study are twofold. First, an automated TDG approach in MB-TCG using the CT technique and the EFSM model is proposed. Second, an experiment that assesses the effectiveness and efficiency of the proposed approach is presented.

The remainder of this paper is organized as follows. Section 2 gives a brief background concerning MBT, EFSM, and CT. Section 3 discusses studies with the TDG issue and related works similar to this proposed approach. Next, Section 4 presents the explanation regarding the proposed approach. Section 5 shows the experiment done to assess the proposed approach's effectiveness and efficiency. Lastly, Section 6 discusses the conclusion and future works of this study.

Model-Based Testing

MBT is a branch of testing under black-box testing or functional testing [5]. It relies on SUT models that visualize the expected behaviours of the SUT in performing testing. Due to its black-box nature, the SUT source code is not required, and the MBT process can be initiated as early as the design phase of software development. In brief, MBT comprises the steps for the automatic generation of abstract tests from the SUT models, the generation of concrete tests from the abstract tests, and the manual or automatic execution of the concrete tests. Approaches in MB-TCG generate tests using procedures similar to the MBT steps; therefore, a good comprehension of how MBT is done is crucial. A brief description of the steps is presented next; a detailed description of MBT can be found in the study by Utting et al. [5].

The first step in MBT is to build one or more test models. EFSM is one of the common models used in MBT [13,14]. Test models are usually created from informal requirements or specification documents; in some cases, design models are used as test models. Next, one or more test selection criteria are decided to drive the automatic abstract test generation. For example, the test selection criteria can be related to the test model's structure, such as state coverage or transition coverage. In the third step, the criteria are transformed into test case specifications (TCSs) that concretize the notion of the test selection criteria. In the fourth step, abstract test cases are generated to satisfy all of the TCSs. In this step, automation tools are usually used to generate the abstract tests given the model and the TCSs [15]. Lastly, the abstract tests are concretized and executed against the SUT in the fifth step. Generating the test data for each abstract test is one of the processes in concretizing the tests; this part is the focus of this study. The test execution process can be done manually or using a test execution tool that executes the tests and records their verdicts automatically.
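To illustrate steps two to four, the following Python sketch derives one abstract test path per transition, satisfying a transition-coverage criterion over a toy state machine. The FSM encoding and the naive shortest-path strategy are assumptions for illustration, not any specific MBT tool:

```python
from collections import deque

def transition_coverage_paths(transitions, start, exit_state):
    """For each transition, build a Start-to-Exit abstract test path that
    traverses it: shortest prefix to the transition's source state, the
    transition itself, then a shortest suffix to the Exit state.

    transitions -- dict mapping (state, event) -> next state
    """
    def shortest_path(src, dst):
        queue, seen = deque([(src, [])]), {src}
        while queue:
            state, path = queue.popleft()
            if state == dst:
                return path
            for (s, ev), nxt in transitions.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(s, ev, nxt)]))
        return None  # dst unreachable from src

    paths = []
    for (s, ev), nxt in transitions.items():
        prefix = shortest_path(start, s)
        suffix = shortest_path(nxt, exit_state)
        if prefix is not None and suffix is not None:
            paths.append(prefix + [(s, ev, nxt)] + suffix)
    return paths

fsm = {("Start", "T1"): "S1", ("S1", "T2"): "S2", ("S2", "T3"): "Exit"}
for p in transition_coverage_paths(fsm, "Start", "Exit"):
    print(p)
```

The resulting paths are abstract test cases; they still lack concrete test data, which is exactly the gap the proposed approach addresses.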
As explained earlier, the models that describe the expected behaviours of the SUT are created from informal requirements or specification documents, which are usually assumed correct. Hence, the models can be used as the test oracle: comparing them with the actual behaviour during testing decides the test verdicts.

Extended Finite State Machine

An EFSM comprises states and transitions between states [16]. Transitions in an EFSM consist of events, conditions, and a sequence of actions. A particular transition is executed when the transition's specified event is triggered and all the transition's conditions evaluate to true; then, the actions associated with that transition are executed. A state can be interpreted as the current values of a set of variables that the SUT has [17]. These variables, which dictate which state the SUT is currently in, are usually called context variables or internal variables [13,14], in contrast to user variables, which hold the user inputs. When a transition is executed, the context variables of the SUT can change, as instructed by the actions, which then leads to a state change.

Combinatorial Testing

Combinatorial testing (CT) is a technique that tests a SUT using a covering array test suite that contains all the possible t-way combinations of parameter values [10]. In a covering array, the columns represent the parameters while the rows represent the tests. Assume an example application is to be tested to determine whether it works correctly on a computer that uses the Windows or Linux OS, an Intel or AMD processor, and the IPv4 or IPv6 protocol. These parameters (OS, processor, and protocol) and their values (the possible choices for each parameter) require 2 × 2 × 2 = 8 tests to check each component interacting with every other component at least once if exhaustive testing is used. With the pairwise testing (t = 2) technique, only four tests are required to cover all possible pairs (two components) of combinations. An empirically derived rule, called the interaction rule, claims that most failures come from single-factor faults or the interaction of two factors, with fewer failures induced by interactions between three or more factors. This rule is why CT can still be effective even though fewer tests are used than in exhaustive testing.

A covering array CA(N, n, s, t) is a form of N × n matrix where N is the number of rows (array size), n is the number of columns (parameters), s is the level (number of possible values for parameters), and t is the interaction strength. Algorithms to generate a covering array are primarily categorized into computational and algebraic methods. In brief, computational methods work by directly listing and covering every t-way combination, while algebraic methods operate according to predefined rules, in contrast to computational methods. Automatic efficient test generator (AETG) and in-parameter-order-general (IPOG) are the two most used algorithms for generating covering arrays [11], and both are computational methods. However, AETG builds a complete test at a time while IPOG covers one parameter at a time. More details regarding CT can be found in the study by Kuhn et al. [11].
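As an illustration of the pairwise idea, here is a small greedy Python sketch in the one-test-at-a-time spirit of AETG. It is not the AETG or IPOG algorithm itself, and the greedy result is not guaranteed minimal in general, although it does produce four tests for the example above:

```python
from itertools import combinations, product

def covered_pairs(names, test):
    """All parameter-value pairs exercised by a single test."""
    vals = dict(zip(names, test))
    return {((a, vals[a]), (b, vals[b])) for a, b in combinations(names, 2)}

def pairwise_suite(parameters):
    """Greedy pairwise (t = 2) generation: repeatedly pick the candidate
    test that covers the most still-uncovered value pairs."""
    names = list(parameters)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    suite = []
    while uncovered:
        best = max(product(*parameters.values()),
                   key=lambda t: len(covered_pairs(names, t) & uncovered))
        suite.append(best)
        uncovered -= covered_pairs(names, best)
    return suite

params = {"OS": ["Windows", "Linux"], "CPU": ["Intel", "AMD"],
          "Protocol": ["IPv4", "IPv6"]}
for t in pairwise_suite(params):
    print(t)  # 4 tests cover all 12 value pairs vs. 8 exhaustive tests
```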
Related Work

Kamath et al. [18] proposed an MB-TCG approach using the activity diagram model. The test data in their approach were assumed to be provided manually by a tester, because only abstract test data were presented and no further detail was given. Andrews et al. [19] proposed an MB-TCG approach for testing Urban Search and Rescue (USAR) robots using the class diagram and the Petri net model. All Combination Coverage (ACC) and Each Choice Coverage (ECC) approaches were used for selecting test data; both the ACC and ECC implementations were considered manual based on how test data were created and selected in their approach. Majeed et al. [20] proposed an MB-TCG approach for event-driven software using the FSM model. The TDG in their approach was considered semi-automated because, in the first phase, manual test generation and execution must be done before automated test generation and execution can be achieved in the second phase. This means that the test data were manually generated during the first phase. Singi et al. [21] proposed an MB-TCG approach for testing from visual requirement specifications, focusing specifically on prototyping. The TDG in this approach was considered a manual process because the test cases only provided templates for the tester to enter appropriate test data. Gutiérrez et al. [22] proposed an MB-TCG approach to automatically generate functional test cases from functional requirements. For the TDG, the equivalence partitioning (EP) method was used after formalizing the information obtained by applying the Category-Partition method to the functional requirements; they stated that this process was manual. Sarmiento et al. [23] proposed an MB-TCG approach to generate test cases from natural language requirements specifications using the activity diagram model. The generation of test data in this approach was identified as manual; the authors stated that this process required human intervention and did not address it further.

These studies show that many existing MB-TCG approaches still use manual TDG, and several still require test data to be provided manually by the tester. This manual method is inefficient and can result in human error, especially in large-scale testing. Most of these studies were published within the last decade, implying that manual TDG is still practised in recent approaches. To make matters worse, TDG still has not been given much attention when proposing MB-TCG approaches, as discovered by the latest review study conducted by Ahmad et al. [9]. These are the reasons why this study proposes an automated TDG approach in MB-TCG.

The closest work to this current study is probably that of Nguyen et al. [12], because their proposed approach was adapted and modified in this study. They combined MBT and CT, where test sequences were derived using the FSM model of the SUT and complemented with selective test input combinations. One limitation of their approach was that the domain input classifications were manually specified to transform paths into classification trees. This study enhances the existing approach by automating the domain input classifications to further increase the automation level of the whole test data generation process. Another similar work is that of Kansomkeat et al. [24], who proposed an MB-TCG approach using the activity diagram. The input domains were classified based on the guard conditions in the decision points of the activity diagram. Nonetheless, the test suite generated using their approach consisted of all possible combinations of input classes, similar to exhaustive testing.
Also, they did not use any CT tool for generating the combinations. In this current study, the conditions in the EFSM model transitions are used to classify the input domains. In addition, the generated test suite consists of the interaction between several input classes only, which decreases the total number of tests generated. Furthermore, this current study uses an existing CT tool to increase the efficiency of generating combinations.

Generation of Test Paths

A test path is a combination of the SUT states and transitions in the EFSM form. It can be generated from the EFSM using various strategies like state coverage and transition coverage. A complete test path has the start and end states, ⟨Start, T1, …, Exit⟩. In MBT, this part corresponds to steps two through four. A complete test path is also an abstract test case that cannot yet be used for testing, because it does not have all the necessary information to execute against the SUT.

Transformation to Classification Trees

The classification tree method is used to interpret and analyze the input partitions and the test input combinations that will be generated later. The complete test paths generated previously are transformed into classification trees, where each tree represents a path. After the transformation, each tree contains (1) a root node that acts as an identifier for the transformed test path, (2) child nodes of the root node that represent the sequence of transitions of the transformed test path, from left to right, (3) child nodes of each of the upper child nodes that represent the parameters for each transition of the transformed test path, and lastly (4) leaf nodes for each of the upper child nodes that represent the input classifications for each parameter. Only transitions are transformed into child nodes in the second item. The states are excluded because they deal with the verification of certain SUT states, whereas the transitions deal with the users' inputs to the SUT, which are the required information in this approach. Also, only transitions that take user inputs are taken into consideration. To explain this step further, consider an example of a test path, t1 = ⟨Start, T1, S1, T2, S2, T3, Exit⟩. Tab. 1 shows the input parameters and classifications of the transitions traversed by t1. After the transformation, the classification tree shown in the upper part of Fig. 2 is produced.

As mentioned earlier in the introduction and the related work, Nguyen et al. [12] manually specified the domain input classifications. This step is where this research approach differs from theirs. Since this approach uses the EFSM model, the enabling conditions available in each transition are used and transformed into input classifications, together with the boundary value analysis (BVA) strategy. The general rule for the input classification in this approach is that, for an enabling condition comparing an input variable with a constant value or a context variable value, the input should be classified into the boundary value and the highest or lowest possible value. Tab. 2 shows example input classifications for each type of comparison operator in an enabling condition, where x is the input variable. For example, suppose an enabling condition of a transition is c_i = (x ≥ 10). The inputs will then be classified into a: x = 10 and b: x > 10. Assuming that x is an integer-type variable, the highest possible value is 2,147,483,647, so that value will be used for classification b. The input class for x < 10 is not covered, because other transitions will usually cover the range of input values not covered by a particular transition. This also prevents incorrect transitions from being executed once a test path has been specified, which would cause the test to be misinterpreted as failed. The same rule applies to enabling conditions with logical operators that combine two or more conditions. For example, if an enabling condition of a transition is c_i = (x ≥ 10 AND x ≤ 20), then the inputs will be classified into a: x = 10 and b: x = 20. Note that only enabling conditions related to user input variables are transformed, because this step concerns input classification; other types of enabling conditions unrelated to user inputs, such as checking a context variable, are not considered.
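A small Python sketch of this classification rule follows. The classes for operators other than ≥ are assumptions extrapolated from the x ≥ 10 example, since Tab. 2 is not reproduced here:

```python
INT_MAX, INT_MIN = 2_147_483_647, -2_147_483_648  # 32-bit integer bounds

def classify_input(op, k):
    """BVA-based classification for a single enabling condition `x <op> k`
    on an integer input: the boundary value plus the extreme interior
    value of the satisfying range, one labeled class per value."""
    rules = {
        ">=": [("a", k), ("b", INT_MAX)],       # x = k and x > k
        ">":  [("a", k + 1), ("b", INT_MAX)],
        "<=": [("a", k), ("b", INT_MIN)],       # x = k and x < k
        "<":  [("a", k - 1), ("b", INT_MIN)],
        "==": [("a", k)],
        "!=": [("a", k - 1), ("b", k + 1)],     # one class on each side
    }
    return rules[op]

def classify_range(lo, hi):
    """For a compound condition (x >= lo AND x <= hi), the classes are
    the two boundary values, as in the text's example."""
    return [("a", lo), ("b", hi)]

print(classify_input(">=", 10))   # [('a', 10), ('b', 2147483647)]
print(classify_range(10, 20))     # [('a', 10), ('b', 20)]
```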
The input class for x < 10 will not be covered because other transitions will usually cover the range of input values not covered by a particular transition. This rule also prevents incorrect transitions from being executed once a test path has been specified, which would cause the test to be misinterpreted as failed. The same rule applies to enabling conditions with logical operators that combine two or more conditions. For example, if an enabling condition of a transition is ci = (x ≥ 10 AND x ≤ 20), then the inputs will be classified into a: x = 10 and b: x = 20. Note that only enabling conditions related to user input variables are transformed, because this part concerns input classification only. Other types of enabling conditions unrelated to user inputs, such as checks on a context variable, are not considered. Generation of Test Combinations After the classification tree for each test path is completed, test combinations are generated. In generating the test combinations, t-way combinations of any desirable interaction strength, such as 2-way (pairwise), 3-way, and so on, can be utilized, depending on the required t-way coverage. The lower part of Fig. 2 shows the generated combinations for the earlier example. It consists of the classification tree and the generated covering array, which here is for pairwise combination. The covering array in Fig. 2 was generated using the CTWedge tool [25], which uses the ACTS tool [26], based on the IPOG algorithm, as the test generator. Removal of Duplicates Some generated test paths traverse the same transitions or states. Consequently, some generated test cases will possess identical input parameter combinations. If these test cases, which contain only combinations identical to those of other test cases and no unique combination, are not removed, test execution becomes inefficient, because the redundant test cases add execution cost without adding coverage. The method from Nguyen et al. [12] is followed to identify and discard these redundant tests. The objective is to discard unnecessary combinations while maintaining the t-way combination coverage across all the generated paths at the global level. Tab. 4 lists all the generated combinations and the pairwise combinations that each combination uniquely covers (in this example, it is assumed that testing needs to fulfil pairwise coverage). After the uniquely covered combinations for each test combination have been identified, one test combination that does not cover any unique combination is removed. Then, the uniquely covered combinations for all remaining test combinations are recomputed, because they depend on the remaining test combinations. From the fourth column of Tab. 4, it can be observed that test combination four from path t1 and test combination four from path t2 both contain no unique pairwise combination; therefore, one of them is removed at random. Assuming that test combination four from path t1 is removed, the fifth column of Tab. 4 lists the remaining test combinations and the new pairwise combinations that each uniquely covers. After this removal, every remaining test combination covers at least one unique pairwise combination, so no further test combination can be removed without compromising the t-way combination coverage. A sketch of this removal procedure is given below.
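The removal step can be sketched as follows, assuming each test combination is a tuple of input-class labels and that pairwise (t = 2) coverage must be preserved; the helper names are illustrative, and the greedy recomputation mirrors the Nguyen et al.-style procedure only in outline.

```python
from itertools import combinations

def pairs(test):
    """All (position, value) pairs covered by one test combination."""
    indexed = list(enumerate(test))
    return {frozenset(p) for p in combinations(indexed, 2)}

def remove_duplicates(tests):
    """Greedily drop test combinations that cover no unique pair,
    recomputing unique coverage after every removal, so that global
    pairwise coverage is preserved."""
    tests = list(tests)
    while True:
        covered = [pairs(t) for t in tests]
        removable = None
        for i, cov in enumerate(covered):
            others = set().union(*(c for j, c in enumerate(covered) if j != i))
            if cov <= others:   # test i covers nothing unique
                removable = i
                break
        if removable is None:
            return tests        # every remaining test covers a unique pair
        tests.pop(removable)

suite = [("a", "x", "1"), ("a", "y", "1"), ("b", "x", "2"), ("a", "x", "2")]
print(remove_duplicates(suite))
```

A combination is dropped only when every pair it covers is also covered by some other remaining combination, which is exactly the condition used in the Tab. 4 walkthrough above.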
Execution of Tests and Incremental Refinement of Constraints In this step, the remaining test combinations can be prepared for testing the SUT. As mentioned in Section 4.1, a complete test path is an abstract test case that is not yet usable for testing. The generated test combinations, which subsume the complete test paths, are also considered abstract tests. To make them concrete tests, test data representing the input data classifications must be generated, and the transitions must be connected with the SUT. This step closely resembles the fifth step of the MBT process explained in Section 2.1. The difference is that in a typical MBT process the test data are usually created manually by the tester, whereas in this approach the test data that adhere to the input data classifications and the t-way combination coverage are generated automatically. Regarding the execution, some test paths may be infeasible. Possible reasons include conflicting conditions in the path [27], conflicts between context variables and the enabling function [28], or a reached state in which the subsequent available action/event is not accepted by the SUT [12]. These situations can arise because of (1) faults in the specifications/models, (2) faults in the SUT, or (3) missing dependencies between inputs. For the first case, the specifications/models themselves can be faulty; however, it is usually assumed that the specifications/models have been validated and considered accurate before test execution. The second case is an occurrence in which a fault in the SUT has been detected. For the third case, it means that one or more dependencies between input classifications have been missed during the generation of test combinations. To prevent this, path constraints that keep track of the dependency relationships among input classifications are specified to ensure that invalid test combinations are not generated. This step requires human intervention and domain knowledge, so it is done manually. After all the missing constraints have been specified, the test combinations are generated again and the subsequent steps repeated. Empirical Study This section discusses the experiment conducted to assess the effectiveness and efficiency of the proposed TDG approach. The research questions (RQs) for this experiment are: RQ1. What are the fault detection capability and code coverage of the proposed TDG approach compared to the conventional MB-TCG? RQ2. What is the resulting test suite size of the proposed TDG approach compared to the conventional MB-TCG? RQ1 was designed for the effectiveness assessment, determining whether the test data generated using the proposed TDG approach can detect more faults and achieve higher code coverage than the test data from the conventional MB-TCG. RQ2 was designed for the efficiency assessment, comparing the resulting test suite sizes of the proposed TDG approach and the conventional MB-TCG. To an extent, executing fewer test cases means less testing time and effort, and thus higher efficiency; however, this is not always accurate [29], since the generation time and execution time of each test also play significant parts in determining testing time. These aspects will be considered in the future work of this study. Fig. 3 illustrates the framework of the experiment in general.
The parts for the conventional MB-TCG and the proposed approach have been discussed in more detail in Section 2.1 and Section 4, respectively. The remaining parts are explained below. Figure 3: Experiment framework The TestOptimal tool was used in the experiment to perform the MBT part [30]. It was responsible for implementing the conventional MB-TCG and the proposed approach parts in Fig. 3. TestOptimal was chosen because the end state can be included in the model; this feature was essential for resetting the path generation when the end state is reached and for generating several test paths. Another reason for choosing TestOptimal was its support for data-driven testing (DDT), which made it easier to use test data combinations alongside the generated test cases. Another tool utilized in this experiment was the muJava tool [31], responsible for implementing the mutation analysis part in Fig. 3. It was used to seed faults into the SUT to measure the quality of a generated test suite using the mutation score metric. This tool was chosen because the mutant generation is done at the source code level, which is preferable because such mutants can closely imitate the types of errors programmers might make and the injected mutants can be clearly described and understood by testers. Lastly, the EclEmma tool was used to obtain the code coverage information. It was chosen because it generates code coverage information for programs in Java, the language used to implement the SUT. In addition, it is an open-source tool and was still up to date as of 2021. The use of open-source and up-to-date tools is recommended because it eases future research that utilizes or extends them [32]. Case Study System models for real-world commercial software, together with their respective software systems, are not freely available [33]. Consequently, this research uses the lift system case study from Kalaji et al. [34]. The case study's specifications and details are represented as the "Requirements" in Fig. 3. In the context of the EFSM model, the lift system has six states Q = {Start, Floor0, Floor1, Floor2, Stop, End}, four context variables and five input variables, with V = {doorOpened, inputUpdated, floor, weightload, liftPosition, preferredFloor, tempload, temperature, smoke} listing the variables of both kinds, and 36 transitions. The SUT for the lift system case study was implemented as a Java program using the Eclipse IDE based on these specifications; it is represented as the "SUT" in Fig. 3. Benchmark Approach For this experiment, the test data sequences generated by the proposed approach were compared with those from the conventional MB-TCG. The approach by Nguyen et al. [12] was not included in this experiment because it requires manually specifying the domain input classifications. As this step is crucial to the approach, performing it manually would introduce bias affecting the obtained results. In addition, their study did not provide a systematic way of specifying the domain input classifications, which would introduce variability whenever the approach is replicated and thus affect the obtained results. These issues were the reasons why their approach was not included in the comparison. The test data sequences for the conventional MB-TCG were arbitrarily or randomly created as long as they adhered to the test case flow. This process can be observed in Fig.
3, where the manual test data are combined with the abstract test cases to produce a concrete test suite for the conventional MB-TCG. In the proposed approach, the difference is that the combinatorial testing approach and the information available in the EFSM model are utilized to generate better test data sequences for fault detection. This process can be observed in Fig. 3, where the EFSM model and the abstract test cases are used to produce a concrete test suite for the proposed approach. Evaluation Metrics Mutation analysis was used in the experiment to assess the fault detection performance of an MB-TCG approach. A fault is introduced by making a syntactic change to the original SUT; one change injected into a copy of the SUT results in one faulty version called a mutant. A mutant is "killed", or detected, by a test suite if the result of running the suite on the mutant differs from the result on the original SUT. This process is also illustrated in the upper-right part of Fig. 3. The mutation score is the outcome of mutation analysis. The calculation of the mutation score (MS), taken from De Souza et al. [35], is as follows: MS(P, T) = KM(P, T) / (TM(P) − EM(P)), where P is the SUT being mutated, T is the test suite being used, KM(P, T) is the number of killed mutants, TM(P) is the total number of mutants, and EM(P) is the number of equivalent mutants. Equivalent mutants are mutants that cannot be killed; they are syntactically different from but functionally equivalent to the original SUT. Because of this characteristic, automatically distinguishing all of them from the non-equivalent mutants is nearly impossible [36], and it is often done manually. Mutation analysis was used as a metric for the effectiveness assessment because it provides a quantitative measure of how well a test suite detects faults; a higher mutation score reflects more detected faults, which shows that the test suite is good. Furthermore, mutation analysis resembles real faults typically made by programmers: empirical results showed that 85 percent of errors caused by mutants were also caused by real faults [36]. Given the scarcity of publicly available real faults from industry, mutation analysis is an alternative that alleviates the threat to external validity regarding fault seeding during experimentation. In addition, its implementation can be automated using mutation tools. Code coverage was used as another metric for the effectiveness assessment because higher code coverage means more confidence that the software will not fail [37]; it is also an effective metric for predicting test suite quality in terms of fault detection [38]. Test suite size was used as the metric for the efficiency assessment because it generally affects the time and effort required to complete test execution; it is also one of the most common and simplest metrics for measuring the performance of an MB-TCG approach [32]. Faults Seeding Mutants were seeded into the lift system Java program using the muJava tool. All method-level and class-level operators supported by muJava were used. In total, 767 method-level and four class-level mutants were generated. This study used the mutant sampling method to maximize efficiency without significantly deteriorating the effectiveness of the mutation analysis [36]: approximately a quarter (25 percent) of the total mutants were randomly picked, as the sketch below illustrates.
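A small sketch of the mutation score formula above together with the random sampling step; the mutant ids and the sampling call are illustrative, while the counts in the final line are those reported later in this experiment.

```python
import random

def mutation_score(killed: int, total: int, equivalent: int) -> float:
    """MS(P, T) = KM(P, T) / (TM(P) - EM(P)), expressed in percent."""
    return 100.0 * killed / (total - equivalent)

# Mutant sampling: keep roughly a quarter of all generated mutants.
all_mutants = list(range(767 + 4))  # method-level + class-level mutant ids
sample = random.sample(all_mutants, k=round(0.25 * len(all_mutants)))

# Counts reported in this experiment: 179 sampled mutants, 43 of them
# equivalent, 106 killed by the proposed approach's test suite.
print(f"{mutation_score(106, 179, 43):.2f}%")  # -> 77.94%
```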
Empirical results suggested that a random selection of 10 percent of mutants is only 16 percent less effective, in terms of mutation score, than utilizing the complete set of generated mutants [36]. Sampling is therefore an acceptable trade-off between a modest loss of mutation-score effectiveness and the amount of work saved. After a subset of mutants was selected, equivalent mutants were identified and discarded. One hundred seventy-nine mutants were selected, of which 43 were equivalent mutants and 136 were non-equivalent mutants. Generated Test Suites Abstract tests were generated by simply executing the constructed model using the optimal sequencer provided by the TestOptimal tool. This sequencer generates the least number of abstract tests necessary to cover every transition in the model; in total, ten abstract tests were generated. For the conventional MB-TCG, random test data sequences were used to concretize the ten abstract test cases. The concrete test cases were then executed against every selected mutant discussed earlier to determine the test verdicts. For the proposed MB-TCG approach, the ten abstract test cases were further manipulated before test data sequences were generated, as explained in Section 4. In total, 74 test data sequences were initially generated for the ten abstract tests after the proposed approach was implemented; after duplicate removal, 53 test data sequences remained. The test data sequences generated by the proposed approach were then used to concretize the ten abstract tests. Each abstract test is concretized by several test data sequences generated by the proposed approach, unlike the conventional MB-TCG, where each abstract test case has only one test data sequence. The concrete test cases were then executed against every selected mutant to determine the test verdicts. The process explained in this subsection is also illustrated in Fig. 3, where the test suites from the conventional MB-TCG and the proposed approach, the original SUT, and the faulty versions of the SUT are used during test execution. Result Analysis and Discussion This subsection is represented as the "Effectiveness and efficiency evaluation" in Fig. 3. Tab. 5 tabulates the summary of the experimental results. The results show that the number of mutants detected by the proposed approach's test suite was 106, nine more than the conventional MB-TCG. This gave the proposed approach a mutation score of 77.94 percent, 6.62 percentage points higher than the conventional MB-TCG. The code coverage achieved by both test suites was the same. However, in terms of the number of test data sequences, the conventional MB-TCG outperformed the proposed approach with ten sequences against 53. To further analyze the effectiveness of the conventional MB-TCG and the proposed approach, the types of mutants they detected were investigated. Fig. 4 illustrates the types of mutants detected. It can be observed that the proposed approach outperforms the conventional MB-TCG by detecting more arithmetic operator insertion (AOI) and relational operator replacement (ROR) mutants. The test suite generated by the proposed approach detected more mutants than the conventional MB-TCG because the proposed approach improved the test data used for each test case: by integrating the BVA strategy and the transitions' enabling conditions into the data classification step, faults that lie near or at boundary conditions can be detected.
The conventional MB-TCG was unable to detect these faults because it mainly used random test data within the acceptable ranges. In terms of code coverage, both test suites achieved identical scores. First, this is because the test data from the proposed approach and the conventional MB-TCG exercised the same test cases. Second, the test data from the proposed approach tested the boundary of a condition without exceeding the condition's range of possible data, while the conventional MB-TCG test data were taken randomly within the condition's range. Therefore, the execution path was the same for both approaches, which resulted in identical code coverage. Figure 4: Types of mutants detected by the conventional MB-TCG and the proposed approach Inspecting the mutants detected by both approaches shows that the proposed approach detects AOI and ROR mutants better than the conventional MB-TCG. For AOI, this is logical because an increment or decrement operator might be accidentally applied to a variable in a condition statement, causing the value compared against a constant to increase or decrease. So, if the test data exercise the maximum or minimum boundary of the condition statement, the fault can be detected. It is also sensible for ROR mutants to be detected by the proposed approach: programmers sometimes confuse the "greater/less than" and the "greater/less than or equal to" relational operators in a condition statement, thus creating an incorrect maximum or minimum boundary. This kind of error can indeed be detected by testing the maximum or minimum boundary. The conventional MB-TCG produced a smaller test suite because of how its test data sequences are generated. This result was expected, because the proposed approach takes an abstract test case and splits it into several possible test data sequences to accommodate the combinations of input classifications. This finding is in line with the results obtained by Nguyen et al. [12]. The initial test suite size for the proposed approach was even larger, with 74 tests; the removal of duplicates reduced the number of test data sequences by 21. Still, given how its tests are generated, it is improbable that the proposed approach could produce a test suite of the same size as the conventional MB-TCG or smaller. This finding motivated this study to propose a model-based test case prioritization (MB-TCP) approach in future work: the generated test suite can be prioritized to further improve its effectiveness and efficiency in terms of fault detection. Conclusion and Future Work This study proposed an automated TDG approach for MB-TCG using the EFSM model, motivated by the limitations of existing MB-TCG approaches and the importance of TDG in MBT. The approach by Nguyen et al. [12] was adopted and modified by automating the domain input classification step, taking advantage of the information available in an EFSM model. The experiment showed that the proposed approach is 6.62 percentage points more effective in fault detection than the conventional MB-TCG based on their mutation scores. However, the conventional MB-TCG is more efficient, with 43 fewer tests than the proposed approach. For future work, an MB-TCP approach using the EFSM model will be proposed as a continuation of this study.
This new MB-TCP approach will be applied to the generated tests to further increase testing effectiveness and efficiency. It will also serve as an effort to address the efficiency issue faced by the proposed TDG approach. Furthermore, more metrics that represent testing effectiveness and efficiency, such as test generation time and execution time, will be used. Last but not least, more case studies will be utilized to increase the confidence in the effectiveness of the proposed TDG approach. Nontrivial case studies will also be used to better reflect real-world systems, making the obtained results more generalizable.
Neuroprotective Effects of a New Derivative of Chlojaponilactone B against Oxidative Damage Induced by Hydrogen Peroxide in PC12 Cells A new sesquiterpenoid (1) was obtained by hydrogenating chlojaponilactone B. The structure of 1 was elucidated from a combination of NMR, HRESIMS, and NOE data. Treatment of a PC12 cell model with H2O2 was used to evaluate the antioxidant activity of 1. An MTT assay showed that 1 had no cytotoxicity to PC12 cells and rescued cell viability from the oxidative damage caused by H2O2. Treatment with 1 stabilized the mitochondrial membrane potential (MMP), decreased the intracellular ROS level, and reduced cell apoptosis in the oxidative stress model. The activities of the antioxidant enzymes superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px) and the content of intracellular glutathione (GSH) were significantly enhanced after treatment with 1. In addition, qRT-PCR results showed that treatment with 1 minimized the cell injury caused by H2O2 via up-regulation of the expression of nuclear factor erythroid 2 (Nrf2) and its downstream enzymes heme oxygenase 1 (HO-1), glutamate cysteine ligase-modifier subunit (GCLm), and NAD(P)H quinone dehydrogenase 1 (Nqo1). Based on the antioxidant activity of 1, we speculate that it has potential as a therapeutic agent for diseases induced by oxidative damage. Introduction Oxidative stress refers to a state in which the body generates a large number of reactive oxygen species after being stimulated by harmful stimuli, leading to an imbalance between the oxidative and antioxidant states and resulting in pathological changes in tissues and cells [1]. When the body is subjected to various stimuli, excessive amounts of highly reactive molecules, such as reactive oxygen species (ROS), are produced, upsetting the balance of oxidative and antioxidant states and, in turn, resulting in tissue oxidative damage. Excessive free radicals produced by oxidative stress can directly or indirectly damage DNA and oxidize proteins, reducing their biological activity through structural and functional defects [2]. Studies have shown that oxidative stress is closely related to neurological diseases, including Parkinson's disease and Alzheimer's disease [3,4], cardiovascular and prostatic diseases [5], inflammation, and cancers [6]. Reducing oxidative stress-induced damage to the body has become key in treating these clinical diseases, and treatment strategies based on the antioxidant amelioration of ROS appear able to delay the progression of these diseases. Natural products are rich in resources and diverse in structure. Numerous studies have shown that antioxidants from natural plant sources have neuroprotective effects that reduce the probability of human diseases [7][8][9]. Demethylenetetrahydroberberine was reported to protect dopaminergic neurons and alleviate the behavioral disorder in a mouse model of Parkinson's disease through anti-apoptotic, anti-inflammatory, and antioxidant effects [10]. Chen et al. found that GTS40, an active fraction of Gou Teng-San, helped prevent and treat oxidative stress-mediated neurodegenerative disorders [11]. Antioxidant peptides are another example [12], and a combination of natural antioxidants (vitamin E, quercetin, and basil oil) is a potential innovation against Alzheimer's disease [13]. Therefore, research on new natural compounds with neuroprotective activity is urgent and necessary.
Finding lead compounds with novel structures and significant activity from natural products has become an effective way to develop new drugs. However, traditional compounds obtained by separation and extraction are limited in yield and cannot satisfy the needs of research and development. Chemical modification or biotransformation of compounds has therefore become an important method to improve pharmacological activity, reduce side effects, and increase drug stability. Plants of the genus Chloranthus are widely used in Traditional Chinese Medicine to treat bruises, rheumatic arthralgia, pain, soreness, and furunculosis [14]. Many terpenoids, including diterpenoids, sesquiterpenoid dimers, and sesquiterpene lactones, have been reported in phytochemical investigations of Chloranthus plants. Pharmacological research has shown that Chloranthus plants have anti-inflammatory, anti-tumor, antiviral, and antifungal activities [15]. Sesquiterpenoids are a diverse group of compounds with abundant pharmacological activity that have attracted the attention of scholars in recent years. By modifying the structure of sesquiterpenoids toward lower toxicity and higher activity, they can be used in new drugs, food, and cosmetics. Our previous studies showed that chlojaponilactone B, a sesquiterpenoid isolated from the genus Chloranthus, has anti-inflammatory effects that rely on the C-6 acetyl group and the C-8-C-9 double bond [16,17]. In this study, we perhydrogenated chlojaponilactone B to explore its structure-activity relationship. The sesquiterpene compound chlojaponilactone B was modified to reduce the three double bonds in the structure and open the cyclopropane ring to obtain a derivative, termed compound 1. Surprisingly, we obtained a new compound (1) with strong antioxidant activities and inhibition of nitric oxide (NO) production (the IC50 value is shown in the Supplementary Materials). The aim of this study was to investigate the neuroprotective effect of compound 1 against oxidative damage induced by H2O2 in PC12 cells. Chemical Modification and Structure Elucidation Our previous study speculated that the anti-inflammatory effects of chlojaponilactone B depend on the C-6 acetyl group and the C-8-C-9 double bond [17]. To test this hypothesis, we perhydrogenated chlojaponilactone B into compound 1. Compound 1 appeared as a white powder with a C17 molecular formula. Its NMR data indicated sp3 hybridized methylene groups, seven sp3 hybridized methine groups (two of which contain oxygens), and one sp3 hybridized quaternary carbon. Among the five unsaturated positions, one is occupied by an ester group, one by an acetyl group, and the remaining three are presumed to lie in the tricyclic structure of 1 (see Table 1). The structure of compound 1 was analyzed and identified using various 2D NMR spectroscopic techniques. The heteronuclear multiple bond correlations (HMBCs) of H2-2/C-1, C-3, and C-10, H3-15/C-3, C-4, and C-5, and H3-14/C-1, C-5, C-9, and C-10 indicated that C-1, C-3, C-4, C-5, and C-10 formed the five-membered ring and that the C-1, C-4, and C-10 positions were each connected to one methyl group, which was supported by the 1H-1H correlated spectroscopy (COSY) correlations of H3-2/H-1/H2-3/H-4/H-5 and H-4/H3-15.
The HMBCs of H3-14/C-5, C-9, and C-10, combined with the 1H-1H COSY correlations of H-5/H-6/H-7/H-8, suggested that the six-membered ring was formed by C-5, C-6, C-7, C-8, C-9, and C-10, with a C-5-C-10 bridge connecting it to the five-membered ring. The HMBC correlations of H3-13/C-7, C-11, and C-12, combined with the 1H-1H COSY correlations of H-7/H-11/H3-13, verified that C-7, C-8, C-11, and C-12 formed the furan lactone ring, with a C-7-C-8 bridge connecting it to the six-membered ring, and the C-11 position connected to one methyl group. The HMBC correlation of H-6/COOCH3 indicated that the C-6 position was modified with an acetyl group. Therefore, the planar structure of 1 was identified as a 2,3-ring guaianese sesquiterpene. We analyzed its nuclear Overhauser effect (NOE) spectrum and compared its configuration with that of the known compound chlojaponilactone B (Figure 1), which determined the absolute configuration of 1. Compound 1 is the perhydrogenated and cyclopropyl ring-opened product of chlojaponilactone B. After opening of the cyclopropyl moiety, C-1 maintains the R configuration, and C-5, C-6, and C-10 keep their S, R, and S configurations, respectively. The NOE spectrum indicated that H-6 correlates with H3-13/H3-14/H3-15, H3-14 correlates with H3-15, and H-4 correlates with H-5, which determined C-4 to be in the S configuration. The correlations between H-8 and H-7/H-11 and between H-7 and H-11, together with the coupling constant of H-7 and H-8 (1.52), suggested that H-7 and H-8 are on the same side; H-6 and H3-13 are also correlated in the binding spectrum. Therefore, the absolute configurations of C-7, C-8, and C-11 were determined as R, R, and S, respectively. Thus, the structure of compound 1 was established as depicted in Figure 1, and the compound was named Perhydrochlojaponilactone B. Neuroprotective Effect of Compound 1 against PC12 Cell Injury Induced by H2O2 PC12 cells were cloned from a rat adrenal pheochromocytoma and can be differentiated into sympathetic-neuron-like cells by nerve growth factor (NGF) stimulation; they have been widely used in studies of neurological diseases [18]. Treating PC12 cells with H2O2 is a common model of oxidative damage, which causes cell membrane and nuclear damage, loss of mitochondrial membrane potential (MMP), decreased activities of the antioxidant enzymes glutathione peroxidase (GSH-Px) and superoxide dismutase (SOD), and decreased cellular glutathione (GSH) content [19][20][21]. PC12 cells were exposed to different concentrations of compound 1 to determine its cytotoxicity, as assessed using an MTT-based colorimetric test. As shown in Figure 2A, cell viability approached 100% at concentrations of 40, 20, 10, 5, and 2.5 μM, suggesting that 1 had no cytotoxicity to PC12 cells. To choose a proper concentration of H2O2, cells were treated with varying concentrations for 24 h. With increasing H2O2 concentration, the cell viability decreased in a linear manner (53.94% at 750 μM) (Figure 2B). Therefore, we selected 750 μM H2O2 to induce oxidative damage in at least half of the viable cells. The H2O2-induced decrease in cell viability was ameliorated dramatically after treatment with 1 in a dose-dependent manner. Cell viability after treatment with 1 at 40 μM approached that of cells treated with Vitamin C (VC, 10 μM) and was dramatically enhanced compared to the H2O2-induced group (p < 0.05). In particular, 1 displayed a strong anti-oxidative effect even at a lower concentration (2.5 μM) (Figure 2C). These results confirmed the non-cytotoxicity and antioxidant activity of 1 in PC12 cells. Effects of 1 on ROS Generation in H2O2-Induced PC12 Cells Oxidative stress leads to neutrophil infiltration and increased secretion of nucleic acids [22], ultimately causing various chronic diseases. ROS are derivatives of free radicals and include hydrogen peroxide, singlet oxygen, and ozone. ROS in the body have certain functions, such as participating in immune and signal transduction processes. In the normal physiological state, the production and clearance of free radicals in the body maintain a dynamic balance, and the production and clearance of ROS is an important marker of redox homeostasis. Under normal physiological conditions, cells eliminate the accumulated ROS by generating antioxidants [23]. When the body or immune cells (macrophages and neutrophils) are subjected to harmful stimuli, ROS clearance is reduced, causing oxidative damage and even cell death [24]; excess ROS thus exhibits destructive behavior. As shown in Figure 3, intracellular ROS showed a burst increase in H2O2-treated PC12 cells. However, compound 1 treatment decreased ROS levels in PC12 cells after 24 h. These findings suggested that compound 1 could effectively antagonize the ROS accumulation induced by H2O2 in PC12 cells. Effects of 1 on the Recovery of the Loss of MMP in H2O2-Induced PC12 Cells Mitochondria are important in many biological processes, including ROS generation, apoptosis, the cell cycle, and cell growth. However, when the body is stimulated by endotoxins or alcohol, the antioxidant system is damaged, and ROS clearance is blocked. The accumulated ROS leads to mitochondrial membrane damage and MMP reduction [25]. To investigate H2O2-induced mitochondrial dysfunction in PC12 cells, JC-1 kits were used for MMP detection, and apoptosis was quantified using flow cytometry. As shown in Figure 4, after exposure to H2O2, PC12 cells displayed a dramatic increase in cell apoptosis (p < 0.001). By contrast, treatment with 1 dose-dependently decreased the number of apoptotic cells. These results suggested that 1 could restore the decrease in cellular MMP and attenuate oxidative stress-induced cell apoptosis, thus exerting a neuroprotective effect. Effects of 1 on SOD and GSH-Px Activities, and GSH Levels in H2O2-Induced PC12 Cells GSH-Px is an important peroxide-decomposing enzyme that catalyzes GSH to generate glutathione disulfide and reduces toxic H2O2 to non-toxic hydroxyl compounds. GSH-Px and SOD are important oxygen free radical scavengers in cells. GSH is a natural tripeptide composed of glutamate, cysteine, and glycine; as a sulfhydryl compound it contributes to the reductive catalysis of thiol and disulfide bonds and plays an important role in maintaining redox homeostasis. The reduction in glutathione in the brain is associated with Parkinson's disease and aging [26]. The antioxidant system comprising GSH, GSH-Px, and SOD maintains the body's redox homeostasis under physiological conditions. To further explore the effect of 1 on antioxidants in H2O2-stimulated PC12 cells, we used ELISA to detect SOD and GSH-Px activities and GSH levels. The SOD activity in the H2O2 stimulation group decreased significantly compared to that in the control group (p < 0.05), whereas SOD activity was enhanced after treatment with 1 in a dose-dependent manner. GSH-Px activity also decreased significantly (p < 0.05) in the H2O2 stimulation group, while treatment with 1 enhanced GSH-Px activity, with the highest level at 10 μM. The GSH content in the H2O2 group decreased compared to the control group (p < 0.05); however, the GSH content increased dose-dependently after treatment with 1 (Figure 5). These findings suggested that 1 could increase antioxidant levels, thereby exerting an antioxidant effect. Effects of 1 on the mRNA Expression Levels of Antioxidant Proteins in PC12 Cells Induced with H2O2 Furthermore, to preliminarily explore the mechanism of the protective effects of 1 against H2O2-induced damage in PC12 cells, qRT-PCR was performed to detect the expression levels of nuclear factor erythroid 2 (Nrf2), glutamate cysteine ligase-modifier subunit (GCLm), heme oxygenase 1 (HO-1), and NAD(P)H quinone dehydrogenase 1 (Nqo1). Nrf2 regulates detoxification and the expression of downstream antioxidant enzyme genes, including Nqo1, GCLm, and HO-1; it also regulates SOD and GSH-Px activities and the GSH level [27][28][29][30]. As shown in Figure 6, exposure to H2O2 downregulated the transcription of Nrf2, GCLm, HO-1, and Nqo1, whereas compound 1 treatment dose-dependently increased the mRNA expression levels of these antioxidant proteins. Reagents Methyl thiazolyl tetrazolium (MTT), the JC-1 assay kit, and 2′,7′-dichlorofluorescein diacetate (DCFH-DA) fluorescent dye were purchased from the Beyotime Institute of Biotechnology (Shanghai, China). The SOD, GSH-Px, and GSH immunosorbent assay kits were purchased from Beijing Solarbio Technology Co., Ltd. (Beijing, China). All primers were purchased from Sangon Biotech Co., Ltd. (Shanghai, China). Other chemicals and solvents used in the present study were of analytical or biological grade. ROS Measurement Intracellular ROS contents were determined using a DCFH-DA assay. PC12 cells were seeded in a 6-well plate at 1 × 10^6 cells/mL, cultured for 24 h, and then treated with H2O2 (750 μM) as the model group. The blank group was treated with 0.5% DMSO, and the treatment group comprised cells treated with 1 at different concentrations (20, 10, and 5 μM) combined with H2O2 (750 μM) (both for 24 h). Next, 10 μM DCFH-DA was added to the cells and incubated for 30 min. Flow cytometry (Beckman Coulter, Indianapolis, IN, USA) was used to observe the fluorescence of intracellular ROS. Mitochondrial Membrane Potential A JC-1 kit was used to assess changes in the mitochondrial membrane potential, a commonly used marker of early apoptosis. PC12 cells were added to the wells of a 12-well plate at 1 × 10^5 cells/mL and incubated for 24 h, after which they were treated with 1 at different concentrations (20, 10, and 5 μM) with H2O2 (750 μM) for another 24 h. Wells with no test compound that received only H2O2 (750 μM) served as controls; wells with neither test compound nor H2O2 served as blank controls. We harvested the cells, rinsed them with PBS, and then subjected them to flow cytometry to analyze fluorescence. Measurement of Intracellular Antioxidant Activity The intracellular SOD and GSH-Px activities and the level of GSH in PC12 cells were measured using enzyme-linked immunosorbent assay (ELISA) kits (Solarbio, Beijing, China). Briefly, the cells were treated as in Section 3.5. After 24 h of stimulation, cell lysis was achieved by incubation on ice, the cell lysate was collected, and the proteins were obtained by centrifugation for 10 min at 12,000× g and 4 °C. A Pierce™ BCA Protein Assay Kit (Thermo Fisher, San Diego, CA, USA) was used to quantify the total proteins in the samples. The absorbances for GSH-Px, SOD, and GSH were detected at 412, 560, and 412 nm, respectively. Analysis of Antioxidant Gene Expression by Quantitative Real-Time Reverse Transcription PCR (qRT-PCR) PC12 cells were added to 6-well plates at 1 × 10^6 cells/mL and incubated for 24 h. Subsequently, the model group comprised cells exposed to H2O2 (750 μM), the blank group comprised cells treated with 0.5% DMSO, and the treatment group comprised cells treated with different concentrations of 1 (2.5, 5, and 10 μM) combined with H2O2 (750 μM) (all groups were treated for 24 h). Trizol reagent (Invitrogen, Grand Island, NY, USA) was used to extract total RNA from the PC12 cells, and a Nanodrop 2000 ultramicro spectrophotometer (Thermo Fisher Scientific, Sacramento, CA, USA) was used to determine the RNA concentration.
A HiScript II Q RT SuperMix for qPCR (Vazyme, Nanjing, China) was used to reverse-transcribe the mRNA into first-strand cDNA. Next, Hieff™ qPCR SYBR® Green Master Mix (Yisheng, Shanghai, China) in a LightCycler 96 Real-Time PCR System (Roche, Basle, Switzerland) was used to perform quantitative real-time PCR (qPCR) assays using the cDNA as the template. The qPCR reaction conditions comprised preincubation at 95 °C for 5 min, followed by 40 cycles of denaturation at 95 °C for 10 s, annealing at 55 °C for 20 s, and elongation at 72 °C for 20 s. Table 2 shows the gene-specific oligonucleotide primers employed in qPCR. The reference gene was GAPDH (glyceraldehyde 3-phosphate dehydrogenase). All experiments were carried out three times. Statistical Analysis Values are presented as the means ± SD of triplicate experiments. One-way analysis of variance was used to carry out the statistical analyses in SPSS 18.0 (IBM Corp., Armonk, NY, USA). Statistically significant differences were accepted at a p-value less than 0.05. Conclusions In this study, the sesquiterpene compound chlojaponilactone B was modified to reduce the three double bonds in the structure and open the cyclopropane ring, yielding a new derivative named compound 1. Extensive activity screening found that 1 has strong antioxidant activities. Compound 1 could significantly reverse the oxidative damage caused by H2O2 in the PC12 oxidative stress model. Further study showed that ROS production in oxidatively damaged cells was inhibited significantly by the application of 1. Flow cytometry showed that after H2O2 (750 μM) stimulation, ROS levels in PC12 cells increased significantly; after intervention with 1, however, ROS levels decreased clearly. Compound 1 increased the cellular MMP and attenuated oxidative stress-induced cell apoptosis. Moreover, compound 1 markedly enhanced SOD and GSH-Px activities and GSH levels; it also dose-dependently increased the mRNA expression levels of Nrf2, GCLm, HO-1, and Nqo1, proving that 1 has a strong antioxidant effect. In conclusion, compound 1 is a potentially promising therapeutic agent for treating diseases induced by oxidative damage.
Bayes Estimation of Two-Phase Linear Regression Model Let the regression model be Yi = β1Xi + εi, where the εi are i.i.d. N(0, σ2) random errors with variance σ2 > 0, but suppose it is later found that there was a change in the system at some point of time m, reflected in the sequence after Xm by a change in the slope, the regression parameter β2. The problem of study is when and where this change started occurring; this is called the change point inference problem. The estimators of m, β1, and β2 are derived under asymmetric loss functions, namely the Linex and General Entropy loss functions. The effects of correct and wrong prior information on the Bayes estimates are studied. Introduction Regression analysis is an important statistical technique for analyzing data in the social, medical, and engineering sciences. Quite often, in practice, the regression coefficients are assumed constant. In many real-life problems, however, theoretical or empirical deliberations suggest models in which one or more of the parameters change occasionally. The main parameter of interest in such regression analyses is the shift point parameter, which indexes when or where the unknown change occurred. A variety of problems, such as switching straight lines [1], shifts of level or slope in linear time series models [2], detection of ovulation time in women [3], and many others, have been studied during the last two decades. Holbert [4], while reviewing Bayesian developments in structural change from 1968 onward, gives a variety of interesting examples from economics and biology. The monograph by Broemeling and Tsurumi [5] provides a complete study of structural change in the linear model from the Bayesian viewpoint. Bayesian inference of the shift point parameter assumes availability of the prior distribution of the changing model parameters. Bansal and Chakravarty [6] proposed studying the effect of an ESD prior for the changed slope parameter of the two-phase linear regression (TPLR) model on the Bayes estimates of the shift point and also on the posterior odds ratio (POR) for detecting a change in the simple regression model. In this paper, we study a TPLR model. In Section 2, we give the TPLR change point model. In Sections 3.1 and 3.2, we obtain posterior densities of m considering σ2 unknown and of β1, β2, and m considering σ2 known, respectively. We derive Bayes estimators of β1, β2, and m under symmetric loss functions in Section 4 and asymmetric loss functions in Section 5. We study the sensitivity of the Bayes estimators of m when prior specifications deviate from the true values in Section 6. In Section 7, we present a numerical study illustrating the above technique on generated observations; we generate observations from the proposed model and compute the Bayes estimates of m and of the other parameters. Section 8 concludes the paper. Two-Phase Linear Regression Model The TPLR model is one of the many models that exhibit structural change. Holbert [4] used a Bayesian approach based on the TPLR model to reexamine the McGee and Kotz [7] data on stock market sales volume and reached the same conclusion that the abolition of split-ups did hurt the regional exchanges. The TPLR model is defined as yt = α1 + β1 xt + εt, t = 1, . . . , m; yt = α2 + β2 xt + εt, t = m + 1, . . . , n, (1) where the εt's are i.i.d. N(0, σ2) random errors with variance σ2 > 0, xt is a nonstochastic explanatory variable, and the regression parameters (α1, β1) ≠ (α2, β2).
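Before turning to the priors, a minimal simulation sketch of model (1) may help fix ideas; it assumes the intercept-and-slope form given above, the slopes 3.2 and 3.5 and the values m = 4, n = 15 follow the paper's simulation study, and the uniform range for xt (nonstochastic in the paper, drawn here for convenience) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tplr(n, m, a1, b1, a2, b2, sigma=1.0):
    """Generate (x_t, y_t), t = 1..n, from a two-phase linear
    regression with a single shift point at t = m."""
    x = rng.uniform(0.0, 10.0, size=n)
    eps = rng.normal(0.0, sigma, size=n)
    first_phase = np.arange(1, n + 1) <= m
    y = np.where(first_phase, a1 + b1 * x, a2 + b2 * x) + eps
    return x, y

x, y = simulate_tplr(n=15, m=4, a1=0.0, b1=3.2, a2=0.0, b2=3.5)
```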
The shift point m is such that if m = n there is no shift, but when m = 1, 2, . . . , n − 1, exactly one shift has occurred. Bayes Estimation The ML method, as well as other classical approaches, is based only on the empirical information provided by the data. However, when some technical knowledge about the parameters of the distribution is available, a Bayes procedure is an attractive inferential method. The Bayes procedure is based on a posterior density, say g(β1, β2, σ−2, m | z), which is proportional to the product of the likelihood function L(β1, β2, σ−2, m | z) of β1, β2, σ−2, and m given the data z and a joint prior density, say g(β1, β2, σ−2, m), representing uncertainty about the parameter values. 3.1. Using a Gamma Prior on 1/σ2 and Conditional Informative Priors on β1, β2 with σ−2 Unknown. We consider the TPLR model (1) with unknown σ−2. As in Broemeling and Tsurumi [5], we suppose that the shift point m is a priori uniformly distributed over the set {1, 2, . . . , n − 1} and is independent of β1 and β2. We also suppose that some information on β1 and β2 is available that can be expressed in terms of conditional prior probability densities on β1 and β2; the conditional prior density of each of β1 and β2 given σ2 is taken to be N(0, σ2). We further suppose that some information on 1/σ2 is available and that this technical knowledge can be given in terms of a prior mean μ and coefficient of variation ∅. We take the marginal prior distribution of 1/σ2 to be a gamma(c, d) distribution with mean μ and density g(σ−2) = (d^c / Γ(c)) (σ−2)^(c−1) exp(−d σ−2), σ−2 > 0, where Γ(c) is the gamma function explained in (8). The gamma function (Euler's integral of the second kind), Γ(z), Re z > 0 (Gradshteyn and Ryzhik [8, page 933]), is defined as Γ(z) = ∫_0^∞ t^(z−1) e^(−t) dt. (8) If the prior information is given in terms of the prior mean μ and coefficient of variation ∅, then the parameters c and d can be obtained by solving μ = c/d and ∅ = 1/√c, that is, c = 1/∅2 and d = 1/(μ∅2). Hence, the joint prior pdf of β1, β2, σ−2, and m, say g1(β1, β2, σ−2, m), is the product of these prior components. The joint posterior density of β1, β2, σ−2, and m then follows from the likelihood, where Sm3, Sm4, Sm1, Sm2, and A are as given in (4), and h1(z) is the marginal density of z, with k2 as given in (13) and x denoting Sm1 + 1 · Sn1 − Sm1 + 1; Sm3, Sm4, Sm1, Sm2, Sn1, and A are as given in (4), and the gamma function is as explained in (8). The marginal posterior density of the change point m, say g(m | z), is proportional to T1(m), as given in (15), with h2(z) the marginal density of z; using the integrals (23) and (24) in (22), it reduces to an expression with k4 as given in (21), G1m and G2m as given in (23) and (24), and Sm3, Sm4, Sm1, and Sm2 as given in (4). Bayes Estimates under Symmetric Loss Functions The Bayes estimator of a generic parameter (or function thereof) α based on the squared error loss (SEL) function L1(α, d) = (d − α)2, where d is a decision rule to estimate α, is the posterior mean. For an integer parameter, however, the Bayesian estimate under the SEL function L1(m, v) is no longer the posterior mean and can be obtained by numerically minimizing the corresponding posterior loss. Generally, such a Bayesian estimate is equal to the nearest integer to the posterior mean, so we take the nearest integer to the posterior mean as the Bayes estimate. The Bayes estimator of m under SEL is therefore computed from T1(m) and T2(m), as given in (15) and (25); a sketch of this nearest-integer rule follows below.
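A minimal sketch of the nearest-integer rule, assuming the (possibly unnormalized) marginal posterior of m is available as an array of weights over m = 1, . . . , n − 1; the function name and the example weights are illustrative.

```python
import numpy as np

def bayes_estimate_m_sel(posterior):
    """Nearest integer to the posterior mean of the shift point m,
    given an (unnormalized) posterior pmf over m = 1, ..., n-1."""
    m_values = np.arange(1, len(posterior) + 1)
    weights = np.asarray(posterior, dtype=float)
    weights = weights / weights.sum()   # normalize the pmf
    return int(round(float(m_values @ weights)))

# Example: posterior mass peaked near m = 4 (here n = 10).
post = [1, 3, 10, 50, 12, 4, 2, 1, 1]
print(bayes_estimate_m_sel(post))  # -> 4
```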
Asymmetric Loss Functions

The loss function L(α, d) provides a measure of the financial consequences arising from a wrong decision rule d to estimate an unknown quantity (a generic parameter or function thereof) α. The choice of the appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. The use of a symmetric loss function was found to be generally inappropriate, since, for example, an overestimation of the reliability function is usually much more serious than an underestimation. A useful asymmetric loss, known as the Linex loss function, was introduced by Varian [9]. Under the assumption that the minimal loss occurs at d = α, the Linex loss function can be expressed as

L₄(α, d) = exp[q₁(d − α)] − q₁(d − α) − 1, q₁ ≠ 0.

The sign of the shape parameter q₁ reflects the direction of the asymmetry (q₁ > 0 if overestimation is more serious than underestimation, and vice versa), and the magnitude of q₁ reflects the degree of asymmetry. The posterior expectation of the Linex loss function is

E_α[L₄(α, d)] = exp(q₁d) E_α[exp(−q₁α)] − q₁(d − E_α[α]) − 1,

which is minimized at the Bayes estimate

α*_L = −(1/q₁) ln E_α[exp(−q₁α)],

provided that E_α[exp(−q₁α)] exists and is finite. Another loss function, called the general entropy (GE) loss function, proposed by Calabria and Pulcini [10], is given by

L₅(α, d) = (d/α)^{q₃} − q₃ ln(d/α) − 1.

The Bayes estimate α*_E is the value of d that minimizes E_α[L₅(α, d)]:

α*_E = [E_α(α^{−q₃})]^{−1/q₃},

provided that E_α(α^{−q₃}) exists and is finite. Combining the general entropy loss with the posterior density (17), we obtain the Bayes estimate m*_E of m as the nearest integer to the value in (40), where T₁(m) is as given in (15). Similarly, minimizing E[L₅(m, d)] with respect to the posterior density g₂(m | z) gives the Bayes estimate m*_E as the nearest integer to the value in (44), where T₂(m) is as given in (25).

Note 1. The confluent hypergeometric function of the first kind ₁F₁(a, b; x) [11] is a degenerate form of the hypergeometric function ₂F₁(a, b, c; x), arising as a solution of the confluent hypergeometric differential equation. It is also known as Kummer's function of the first kind and is defined as

₁F₁(a, b; x) = Σ_{m=0}^{∞} [(a, m)/(b, m)] x^m / m!,

with Pochhammer coefficients (a, m) = Γ(a + m)/Γ(a) for m ≥ 1 and (a, 0) = 1 [12, page 755]. It also has the integral representation

₁F₁(a, b; x) = [Γ(b) / (Γ(a) Γ(b − a))] ∫₀¹ e^{xt} t^{a−1} (1 − t)^{b−a−1} dt,

the symbols Γ and B denoting the usual gamma and beta functions, respectively. When a and b are both integers, some special results are obtained: if a < 0 and either b > 0 or b < a, the series yields a polynomial with a finite number of terms; if b is an integer ≤ 0, the function is undefined. In many special cases the hypergeometric function ₚF_q is automatically converted to other functions. For p = q + 1, ₚF_q(a list, b list; z) has a branch cut discontinuity in the complex z plane running from 1 to ∞. The regularized hypergeometric ₚF_q is finite for all finite values of its argument so long as p ≤ q.

Note 3. B(x, y) is the beta function (Euler's integral of the first kind), defined as (Gradshteyn and Ryzhik [8, pages 948, 950])

B(x, y) = ∫₀¹ t^{x−1} (1 − t)^{y−1} dt = Γ(x)Γ(y)/Γ(x + y),

with the gamma function as explained in (8).
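Given the discrete posterior over m, the Linex and general entropy estimates reduce to one-line expectations. The sketch below applies the two formulas above to the posterior computed earlier and rounds to the nearest integer, as the text prescribes; the values of q₁ and q₃ are placeholders.

```python
# Sketch: Bayes estimates of the shift point under the Linex and the
# general entropy losses, from a discrete posterior over m = 1..n-1.
import numpy as np

def bayes_linex(m_vals, post, q1):
    # d* = -(1/q1) * ln E[exp(-q1 m)]
    return -np.log(np.sum(post * np.exp(-q1 * m_vals))) / q1

def bayes_entropy(m_vals, post, q3):
    # d* = (E[m^(-q3)])^(-1/q3)
    m_vals = np.asarray(m_vals, dtype=float)
    return np.sum(post * m_vals ** (-q3)) ** (-1.0 / q3)

m_vals = np.arange(1, x.size)                  # candidates 1..n-1
m_L = round(bayes_linex(m_vals, post_m, q1=0.1))
m_E = round(bayes_entropy(m_vals, post_m, q3=0.1))
```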
Illustration. Let us consider the two-phase regression model

y_t = β1 x_t + ε_t, t = 1, . . . , m;  y_t = β2 x_t + ε_t, t = m + 1, . . . , n,

where the ε_t are i.i.d. N(0, 1) random errors. We take the first 15 values of x_t and ε_t from Table 4.1 of Zellner [13] to generate 15 sample values (x_t, y_t), t = 1, 2, . . . , 15. The generated sample values are given in Table 1. The values of β1, β2, and σ² were themselves random observations: β1 and β2 were drawn from the standard normal distribution, and the precision 1/σ² was drawn from the gamma distribution with μ = 1 and coefficient of variation ∅ = 1.4, resulting in c = 0.5 and d = 0.5. We calculated the posterior mean, posterior median, and posterior mode of m; the results are shown in Table 2. We also computed the Bayes estimators m*_E of m, using (40) for unknown σ² and (44) for known σ², and m*_L, using (37) for unknown σ² and (41) for known σ², for the data given in Table 1. The results are shown in Table 3.

Table 3 shows that for small values of |q₁|, q₁ = 0.9, 0.5, 0.2, 0.1, the Linex loss function is almost symmetric and nearly quadratic, and the Bayes estimates under such a loss are not far from the posterior mean. Table 3 also shows that for q₁ = q₃ = 1.5, 1.2, the Bayes estimates are less than the actual value m = 4. It can be seen from Table 3 that a positive shape parameter of the loss functions reflects that overestimation is more serious than underestimation; thus, the problem of overestimation can be addressed by taking the shape parameters of the Linex and general entropy loss functions positive and large. For q₁ = q₃ = −1, −2, the Bayes estimates are considerably larger than the actual value m = 4. It can be seen from Table 3 that a negative shape parameter reflects that underestimation is more serious than overestimation; thus, the problem of underestimation can be addressed by taking the shape parameters of the Linex and general entropy loss functions negative. We obtained the Bayes estimators β**₁L, β**₂L, β**₁E, and β**₂E of β1 and β2 using (42), (43), (49), and (51), respectively, for the data given in Table 1 and for different values of the shape parameters q₁ and q₃. The results are shown in Table 4. Tables 3 and 4 show that as the values of the shape parameters of the Linex and general entropy loss functions increase, the values of the Bayes estimates decrease.

Sensitivity of the Bayes Estimates

In this section, we study the sensitivity of the Bayes estimators obtained in Sections 4 and 5 with respect to changes in the priors of the parameters. The mean μ of the gamma prior on σ⁻² was used as the prior information in computing the parameters c and d of the prior. We computed the posterior mean m* using (31) and m** using (32) for the data given in Table 1, considering different sets of values of μ. Following Calabria and Pulcini [10], we assume the prior information to be correct if the true value of σ⁻² is close to the prior mean μ, and wrong if σ⁻² is far from μ. We observed that the posterior mode m* appears to be robust with respect to both a correct and a wrong choice of the prior density of σ⁻², as can be seen from Table 5.
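A sketch of this sensitivity check follows: the prior mean μ of 1/σ² is varied while the coefficient of variation is held at ∅ = 1.4, with c = 1/∅² and d = 1/(μ∅²) as derived in Section 3.1 (so μ = 1 reproduces c = d ≈ 0.5). The μ grid below is a placeholder, not the paper's Table 5 settings.

```python
# Sketch of the Section 6 sensitivity experiment, reusing posterior_m above.
import numpy as np

phi = 1.4                                      # prior coefficient of variation
for mu in (0.25, 0.5, 1.0, 2.0, 4.0):          # "correct" and "wrong" prior means
    c, d = 1.0 / phi**2, 1.0 / (mu * phi**2)
    post = posterior_m(x, y, c=c, d=d)
    m_vals = np.arange(1, x.size)
    mode = int(m_vals[np.argmax(post)])        # posterior mode
    mean = float(np.sum(m_vals * post))        # posterior mean
    print(f"mu={mu:4.2f}  mode={mode}  mean={mean:.2f}")
```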
Simulation Study

In Sections 4 and 5, we obtained the Bayes estimates of m on the basis of the generated data given in Table 1 for given values of the parameters. To corroborate these results, we generated 10,000 different random samples with m = 4, n = 15, β1 = 3.2, 3.3, 3.4, and β2 = 3.5, 3.6, 3.7 and obtained the frequency distributions of the posterior mean and median of m, and of m*_L and m*_E, under the correct prior; the results are shown in Tables 2 and 3. The shape parameter of the general entropy and Linex losses used in the simulation study for the shift point was taken as 0.1. We also simulated several standard normal samples. For each combination of β1, β2, m, and n, 1000 pseudorandom samples from the two-phase linear regression model discussed in Section 2 were simulated, and the Bayes estimators of the change point m were computed using q₃ = 0.9 and different prior means μ; the results are shown in Table 6.

Conclusions

In this study we discussed Bayes estimators of the shift point. Since the shift point is an integer parameter, the posterior mean is less appealing; the posterior median and posterior mode appear to be better estimators, as they are always integers. Our numerical study showed that the posterior mode of m is robust with respect to both correct and wrong choices of the prior specifications on σ⁻², whereas the posterior mean and posterior median are sensitive when the prior specifications on 1/σ² deviate simultaneously from the true values. Here we discussed a regression model with one change point; in practice, models with two or more change points may arise. These models can be applied to econometric data, such as poverty and irrigation data.
Steric ploy for alternating donor–acceptor co-assembly and cooperative supramolecular polymerization

The presence of a bulky peripheral wedge destabilizes the homo-assembly of an amide-functionalized acceptor monomer and thereby enables alternating supramolecular copolymerization with an amide-appended donor monomer via the synergistic effect of H-bonding and the charge-transfer interaction.

… (0.15 mmol) were taken together with 50 mL of dry DMF, and the reaction mixture was stirred at 100 °C for 48 h under an N₂ atmosphere. The reaction was stopped, cooled to rt, and poured into 100 mL of water, and the product was extracted with ethyl acetate (3 × 30 mL). The combined organic layer was washed with water (3 × 10 mL) followed by brine (1 × 10 mL) and dried over anhydrous Na₂SO₄. Excess solvent was evaporated to give the crude product as a brown oil, which was further purified by column chromatography using basic alumina as the stationary phase and 2% … as the eluent, and washed with HCl (1 N) solution (3 × 10 mL) followed by brine (1 × 10 mL) and dried over anhydrous Na₂SO₄. Excess solvent was then evaporated to give the crude product, which was further purified by column chromatography using silica gel as the stationary phase and 30% ethyl acetate-hexane as the eluent to give the pure product (4).

NDI-2-EH: Compound 5 (350 mg, 0.63 mmol) and 1,4,5,8-naphthalenetetracarboxylic dianhydride (86 mg, 0.31 mmol) were dissolved in dry DMF (10 mL) and refluxed at 140 °C for 24 h under an N₂ atmosphere. The reaction mixture was then allowed to cool to rt and placed in a refrigerator for 2 h, whereupon the crude product separated as an orange solid; this was filtered, and the obtained solid was washed with MeOH several times. The product was further purified by column chromatography using silica gel as the stationary phase and 1% MeOH in CHCl₃ as the eluent to obtain the pure product as a light yellow solid. Yield: 280 mg (66%).

Description of physical studies

Gelation study: Stock solutions of all components were made in CHCl₃ at 10 mM concentration. A measured volume of an aliquot was transferred to a screw-capped vial, and the solvent was evaporated by air blowing. A measured amount of methylcyclohexane was added to the vial to make the solute concentration 10 mM; the solutions were then heated to make them homogeneous and allowed to rest at rt. After approximately 5-10 min, NDI-2 and DAN-4 formed a gel, as tested by the stable-to-inversion method, whereas NDI-2-EH or DAN-4 + NDI-2-EH (1:1) remained a free-flowing solution.

Alternating supramolecular copolymerization: Equal volumes of aliquots of DAN-4 and NDI-2-EH in CHCl₃ were mixed in a screw-capped vial, the solvent was evaporated, and the red solid obtained was redissolved in a measured volume of methylcyclohexane by heating; upon cooling to rt this produced a red solution. The solution was allowed to equilibrate at rt for 2 h prior to carrying out any experiment. For estimation of the association constant, the red solution was gradually diluted with a known volume of methylcyclohexane, and the CT band (λmax = 510 nm) was monitored as a function of concentration (10-5.5 mM). After each addition of solvent, the solution was made homogeneous and allowed to settle for 5 min before the absorption spectrum was recorded. K_a was estimated by fitting the experimental data to the equation of [5], where c, A, l, and ε denote concentration, absorbance, optical path length, and extinction coefficient, respectively.
FT-IR study: Stock solutions (5 mM) of NDI-2, NDI-2-EH, DAN-4, and NDI-2-EH + DAN-4 were prepared in both chloroform and methylcyclohexane, and spectra were recorded at room temperature.

Variable-temperature UV-Vis studies: A solution of a given self-assembled chromophore(s) in methylcyclohexane (0.1 mM) was heated from 20 °C to 90 °C using a Peltier unit attached to the UV/Vis spectrometer, and spectra were recorded at regular intervals. From the temperature-dependent absorption spectra, the mole fraction of aggregate at a given temperature T, α_Agg(T), was estimated using the equation

α_Agg(T) = [A(T) − A_Mon] / [A_Agg − A_Mon],

where A_Mon, A(T), and A_Agg are the absorbances at a particular wavelength (326 nm for DAN-4, 377 nm for NDI-2 or NDI-2-EH, and 505 nm for DAN-4 + NDI-2-EH) for the monomer (taken from the absorption spectrum of the solution in CHCl₃), for the solution in methylcyclohexane at temperature T, and for the fully aggregated state (lowest-temperature spectrum), respectively. α_Agg(T) was plotted as a function of T in each case to generate the melting curves shown in Figure 2c.

Supramolecular polymerization by the nucleation-elongation pathway: Stock solutions of NDI-2-EH and DAN-4 in CHCl₃ were mixed in appropriate ratios so that the NDI-2-EH concentration remained fixed while that of DAN-4 varied from 7-15% w.r.t. NDI-2-EH. The solvent was evaporated, and a measured amount of n-decane was added to the solid so that [NDI-2-EH] = 0.1 mM in each case. Absorption at a single wavelength (395 nm) was monitored as a function of temperature (368 K to 290 K at a 1 K/min cooling rate) using the Perkin Templab software connected to the UV/Vis spectrometer. The cooling curves were fitted to either the isodesmic or the cooperative model [5]. For the isodesmic model, we used the Boltzmann sigmoidal growth function in the Origin 8.0 software. Fitting the data for NDI-2-EH alone gave a correlation coefficient of 0.9998, whereas the cooling curves of NDI-2-EH in the presence of different percentages of DAN-4 failed to fit the isodesmic model. We therefore fitted those data to the cooperative model using the GNU software: the elongation and nucleation regimes were first separated, and the data were fitted with the two known equations as per the reported literature [6,7]. Following a similar procedure, the growth of the supramolecular polymer was also monitored by temperature-dependent (363-293 K, 5 K/min) DLS studies, which showed a gradual increase in particle size. From these data the diffusion coefficient D at each temperature was calculated using the Stokes-Einstein formula

D = kT / (6πηR),

where D, k, T, π, η, and R stand for the diffusion coefficient, the Boltzmann constant, the temperature, the constant pi, the viscosity of the solvent, and the hydrodynamic radius of the aggregates (measured by DLS), respectively. 1/D³ was plotted against α_agg at each temperature (estimated from the cooling curve in the UV/Vis experiment).
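The two post-processing steps above are easily scripted. The following Python sketch (ours, not the authors') computes the aggregate fraction α_Agg(T) from absorbances and converts a DLS hydrodynamic radius into a diffusion coefficient via the Stokes-Einstein relation; all numbers, including the solvent viscosity, are placeholder values.

```python
# Sketch of the melting-curve and Stokes-Einstein calculations.
import numpy as np

kB = 1.380649e-23                        # Boltzmann constant, J/K

def alpha_agg(A_T, A_mon, A_agg):
    """Aggregate mole fraction from absorbance at a single wavelength."""
    return (A_T - A_mon) / (A_agg - A_mon)

def stokes_einstein_D(T, R_h, eta):
    """Diffusion coefficient D = kT/(6*pi*eta*R_h), all in SI units."""
    return kB * T / (6.0 * np.pi * eta * R_h)

A = np.array([0.95, 0.80, 0.40, 0.12])   # absorbances at, e.g., 505 nm vs T
a_agg = alpha_agg(A, A_mon=0.10, A_agg=0.95)
D = stokes_einstein_D(T=293.0, R_h=50e-9, eta=0.73e-3)  # assumed viscosity
inv_D3 = 1.0 / D**3                      # quantity plotted against alpha_agg
```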
Automatic Relative Radiometric Normalization of Bi-Temporal Satellite Images Using Coarse-to-Fine Pseudo-Invariant Feature Selection and Fuzzy Integral Fusion Strategies

Relative radiometric normalization (RRN) is important for pre-processing and analyzing multitemporal remote sensing (RS) images. Multitemporal RS images usually include different land use/land cover (LULC) types; therefore, assuming an identical linear relationship during RRN modeling may introduce errors into the RRN results. To resolve this issue, we propose a new automatic RRN technique that efficiently selects clustered pseudo-invariant features (PIFs) through a coarse-to-fine strategy and uses them in a fusion-based RRN modeling approach. In the coarse stage, an efficient difference index is first generated from the down-sampled reference and target images by combining the spectral correlation, spectral angle mapper (SAM), and Chebyshev distance. This index is then categorized into three groups of changed, unchanged, and uncertain classes using a fast multiple-thresholding technique. In the fine stage, the subject image is first segmented into different clusters by the histogram-based fuzzy c-means (HFCM) algorithm, and the optimal PIFs are then selected from the unchanged and uncertain regions using a bivariate joint distribution analysis of each cluster. In the RRN modeling step, two normalized subject images are first produced using the robust linear regression (RLR) and cluster-wise RLR (CRLR) methods based on the clustered PIFs. Finally, the normalized images are fused using the Choquet fuzzy integral fusion strategy to overcome the discontinuity between clusters in the final results and keep the radiometric rectification optimal. Several experiments were conducted on four different bi-temporal satellite images and a simulated dataset to demonstrate the efficiency of the proposed method. The results show that the proposed method yields superior RRN results and outperforms other well-known RRN algorithms in terms of both accuracy and execution time.

Introduction

Relative radiometric normalization (RRN) is the process of minimizing radiometric aberrations (i.e., gray-level changes caused by variations in sun-target-sensor geometry, atmospheric conditions, illumination, and viewing angles) in one or more high/multispectral target images based on a high/multispectral reference image taken at a different time over the same place [1][2][3][4][5]. It is a critical task because it is a prerequisite for the processing of multitemporal remote sensing (RS) images in several applications, such as automatic change detection [6,7] and image mosaicking [8]. A variety of RRN methods have been developed to radiometrically adjust RS images; they fall into two main groups: dense RRN (DRRN) and sparse RRN (SRRN) [3]. DRRN methods adopt global image statistics to predict the relationship between image pairs, an approach that is not feasible for image pairs with considerable noise and land use/land cover (LULC) changes [3,9,10]. In contrast, SRRN methods typically extract pseudo-invariant features (PIFs) from the target and reference images and use them to obtain the transformation model between the image pair [9,10]. Since PIFs are partially invariant to illumination variation and changed regions, SRRN can achieve more precise results than DRRN methods when dealing with datasets containing changed LULC regions [11].
Many SRRN methods have been developed in response to questions such as how to select PIFs and how to establish a reasonable relationship between them. For example, Elvidge et al. [2] proposed an SRRN method based on automatic scattergram-controlled regression (ASCR) that selects the pixels close to the regression line. In this method, the regression line is determined by connecting the centers of the water and land clusters in the scattergram between the target and reference images; consequently, the strategy is operationally limited when the image pairs do not include both clusters. Furthermore, due to the lack of PIF refinement in the ASCR approach, the radiometric resolution of the resulting normalized image may not be preserved. To address these limitations, a robust SRRN approach was introduced by Du et al. [12] based on principal component analysis (PCA) and quality control for PIF refinement. Additionally, Canty et al. [13] proposed a robust SRRN method in which PIFs are selected based on the multivariate alteration detection (MAD) transformation [14], which is invariant to linear transformations (e.g., affine and conformal) of the image-pair gray levels. Canty and Nielsen [15] further improved the robustness of the MAD method through an iterative reweighting scheme, named the iteratively reweighted (IR)-MAD method, which is suitable for the radiometric adjustment of image pairs with significant seasonal changes. The MAD and IR-MAD methods have been widely used in the change detection process [16][17][18] and are frequently extended by researchers for RRN tasks [19,20]. For example, Byun et al. [21] developed a new MAD algorithm for the RRN of very high-resolution (VHR) bi-temporal images; their algorithm utilizes a weighting function derived from the normalized difference water index (NDWI) to calculate the covariance matrices of the MAD transform. Furthermore, Liu et al. [20] presented a robust SRRN for image mosaicking that extracts the optimal PIFs through a modified IR-MAD and uses them in an iteratively reweighted block adjustment. Despite the advantages of IR-MAD-based methods, they use only statistical analysis to select PIFs and do not take the physical properties of the PIFs into account, which may lead to potential errors in the RRN modeling process [4,22,23]. To address this, some rule-based SRRN methods have been suggested that consider the physical nature of land surfaces by adopting spectral indices in the PIF selection process. For example, Zho et al. [22] proposed an automatic SRRN for multiple images with PIFs (MIPIFs) retrieved through a step-by-step dark and bright set selection based on the NDWI and some statistical sampling rules. Such a PIF selection is appropriate for the RRN of datasets acquired within the same period (e.g., season), but it cannot accurately handle the radiometric dispersion induced by seasonal fluctuations [22]. Furthermore, its results rely heavily on the regulation of the statistical sampling rules employed to restrict the number of PIFs. To overcome these constraints, Ghanbari et al. [24] proposed a robust SRRN that takes advantage of Gaussian mixture modeling (GMM)-based change detection for PIF selection and an error ellipsoid (EE) process for RRN modeling. Likewise, Moghimi et al. [3] employed a fast level set method (FLSM) and patch-based outlier detection to pick an ideal set of PIFs using a step-by-step unchanged-sample selection strategy. With a similar idea,
Yan et al. [25] employed a chi-square test to automatically extract the PIFs from the unchanged regions detected by an unsupervised autoencoder (AE) method. Although the mentioned methods yield promising results, they are often computationally demanding in terms of both processing and memory storage. The prior SRRN methods were mainly developed under the assumption of a linear relationship between the PIF values in the reference and target images, which is not feasible for datasets with nonlinear radiometric differences. To cope with this problem, several RRN methods have been suggested that employ a nonlinear mapping function instead of a linear one in RRN modeling [10,19,26,27]. For example, Sadeghi et al. [10] proposed an intelligent RRN technique using an artificial neural network (ANN) to approximate a nonlinear relation between the PIFs (unchanged samples) in the reference and target images. This method has high flexibility for modeling the relationship between PIFs in the reference and subject images; nevertheless, its performance depends on the ANN architecture/network topology and the quality of the training data. Seo et al. [26] extended the ASCR method [2] by employing random forest (RF) regression instead of linear regression to handle nonlinear radiometric and phenological differences. Although this method performs well in radiometric correction, it is highly prone to overfitting and requires setting appropriate RF regression parameters. Bai et al. [19] also extended the IR-MAD method by exploiting the kernel version of canonical correlation analysis (kCCA) and a cubic polynomial (degree 3) instead of linear methods to eliminate regular nonlinear spectral and radiometric differences; selecting the kernel type and the optimal values of the kernel parameters is the main challenge of this approach. In general, although nonlinear SRRN methods [10,19,26,27] can reduce the nonlinear radiometric distortions between image pairs, they are prone to overfitting and are often computationally intensive [28]. Most of the mentioned SRRN studies provide good solutions to the limitations of RRN. However, they do not account for the ground surface/LULC type of the PIFs in RRN modeling, which can lead to potential errors and bias in the final results [29]. Therefore, several SRRN methods have been proposed that employ PIFs from different LULCs; these recognize that a single linear relationship cannot hold for all LULCs in the image pairs, since the relationship differs from one LULC to another [30,31]. For instance, Sadeghi et al. [29] proposed an automatic RRN method that categorizes the unchanged pixels according to the histogram of the subject image for each band using the Otsu thresholding technique and calculates the corresponding coefficients of a piecewise linear regression. In another study, He et al. [31] presented a semi-supervised RRN method that selects highly correlated histogram of oriented gradients (HOG) features from the image pairs as PIFs in each ground-object class. In this method, object-based classification is applied to the input images to generate LULC maps for the reference and target images, and the normalized image is generated by linear class-wise RRN modeling using the extracted PIFs. Although this method has superior results, its degree of automation is low because it uses supervised classification in its process.
In general, the studies of [29,31] produced valuable RRN findings, but they did not refine the PIFs against uncertain/imprecise samples, which may lead to an imperfect linear model for specific classes. Moreover, their performance depends on the accuracy of the supervised/unsupervised classification algorithms utilized in their processes. Furthermore, some discontinuities between adjacent classes were found in the normalized images generated by these methods, resulting in an imperfect normalized image in terms of visual perception. To address the constraints noted above, we present a novel SRRN technique that efficiently extracts reliable PIFs from various clusters and reduces discontinuities and bias in the final results by formulating the RRN modeling process with a fusion strategy. In the first step, the optimal PIFs are selected in a coarse-to-fine process. In the coarse stage of this process, the Pearson correlation, Chebyshev distance, and spectral angle mapper (SAM) are combined to construct a change index from the down-sampled input images in which the changed regions are highlighted. This index is further pre-classified into three regions of changed, unchanged, and uncertain pixels using efficient multiple thresholding. In the fine stage, the target image is first clustered into different groups using histogram-based fuzzy c-means (HFCM) [32]. Subsequently, stable PIFs are collected from the unchanged and uncertain pixels (generated by the multiple-thresholding pre-segmentation) for each cluster using hypothesis-test-based outlier detection. In the next step, the retrieved clustered PIFs are employed in the proposed fusion-based RRN modeling to provide a reliable normalized image. In this model, two normalized images are initially generated using a standard robust linear regression (RLR) and its new cluster-wise version, named CRLR. The Choquet integral [33] is then utilized to fuse the produced normalized images, because it is a flexible nonlinear aggregation operator that can efficiently model the relationships between fusion sources [34]. The performance of the proposed method was comprehensively evaluated on a simulated dataset and four different bi-temporal satellite images and compared to several existing state-of-the-art RRN methods. The key contributions of our work can be summarized as follows.

I. A coarse-to-fine approach is designed to efficiently extract reliable and well-spatially distributed PIFs from distinct ground surface clusters. Moreover, a hypothesis-based outlier detection was developed and embedded in this approach to efficiently refine the PIF candidates by taking advantage of the probability contour of the bivariate normal (BVN) joint distribution.

II. The cluster-wise RLR (CRLR) is proposed for better modeling of the complex relationship between target and reference images with different LULC types. This model also contains a weight matrix, defined based on the distance to the PIFs, that can reduce potential bias in the RRN results.

III. A novel fusion-based framework is presented for RRN modeling that can integrate multiple normalized images using the Choquet integral and handle potential uncertainties, such as discontinuities and bias, in the final results.

The rest of this manuscript is organized as follows. Section 2 describes the proposed RRN approach in detail, together with the datasets and quantitative measures utilized for performance evaluation. Section 3 presents the experimental results on the datasets to verify the feasibility of the proposed approach. Finally, the concluding remarks are summarized in Section 4.
Proposed SRRN Method

Consider two co/geo-registered satellite images R (Equation (1)) and T (Equation (2)), respectively the reference and target images, of the same size and acquired from the same scene at different times:

R = {R_b(i, j) | 1 ≤ i ≤ H, 1 ≤ j ≤ W, 1 ≤ b ≤ N},  (1)

T = {T_b(i, j) | 1 ≤ i ≤ H, 1 ≤ j ≤ W, 1 ≤ b ≤ N},  (2)

where H × W refers to the height and width in pixels, and N is the number of spectral bands of the images R and T. The primary goal of this research was to generate a dependable normalized target image T_N^C (Equation (3)), computed using clustered PIFs extracted from the input images R and T, such that it is spectrally balanced with the reference image R. We designed a novel RRN framework composed of two main steps to reach this goal, as shown in Figure 1. First, the clustered PIFs are selected and optimized through a coarse-to-fine no-change selection strategy; the selected PIFs are then used in fusion-based RRN modeling to generate an optimal normalized target image. These steps are detailed in the following sections.

2.1.1. Step 1: Coarse-to-Fine Clustered PIFs Selection

As mentioned before, the first step of the proposed method is to generate reliable and well-spatially distributed PIFs from the input images R and T through a coarse-to-fine no-change selection. To this end, and for faster subsequent processing, the grid size of the input images is first reduced to a coarser size P × Q, where the operator ⌊·⌋ rounds its argument toward the nearest integer and s is the down-sampling scale factor, computed from a user-defined positive integer φ (e.g., 128, 512, 720) together with the minimum and maximum operators min(·) and max(·). Accordingly, the input images R and T are down-sampled, respectively, to R^D and T^D, defined as

R^D = {R^D_b(p, q) | 1 ≤ p ≤ P, 1 ≤ q ≤ Q, 1 ≤ b ≤ N},  (6)

T^D = {T^D_b(p, q) | 1 ≤ p ≤ P, 1 ≤ q ≤ Q, 1 ≤ b ≤ N}.  (7)

To generate a dependable input for the no-change selection process, the down-sampled images T^D and R^D are then compared through a similarity index CSI combining three metrics: the Pearson correlation ρ, the Chebyshev distance D_Ch, and the SAM angle θ [35], given per pixel (p, q) by

ρ(p, q) = the Pearson correlation between the band vectors of T^D and R^D at (p, q),

D_Ch(p, q) = max_b |T^D_b(p, q) − R^D_b(p, q)|,

θ(p, q) = arccos[ Σ_b T^D_b(p, q) R^D_b(p, q) / (‖T^D(p, q, ·)‖ ‖R^D(p, q, ·)‖) ],

where T^D_b(p, q) and R^D_b(p, q) represent the gray level of pixel (p, q) in the bth spectral band of the down-sampled target and reference images, respectively. Such a combination can better reflect the characteristics of the changed and unchanged areas because it compares the reference and target images from different perspectives. It is worth noting that each metric is rescaled to [0, 1] by the min-max method before being used in constructing CSI.
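A sketch of this coarse-stage comparison is given below. It computes the per-pixel Pearson correlation, Chebyshev distance, and spectral angle between band vectors and min-max rescales each one; since the paper's exact combination rule is not recoverable here, the sketch simply averages the three rescaled terms, which is an assumption on our part.

```python
# Sketch of the CSI change index over a down-sampled image pair.
import numpy as np

def rescale(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def csi_index(R, T):
    """R, T: float arrays of shape (P, Q, N) -- down-sampled image pair."""
    Rm = R - R.mean(axis=2, keepdims=True)
    Tm = T - T.mean(axis=2, keepdims=True)
    rho = (Rm * Tm).sum(2) / (np.linalg.norm(Rm, axis=2)
                              * np.linalg.norm(Tm, axis=2) + 1e-12)
    d_ch = np.abs(R - T).max(axis=2)                    # Chebyshev distance
    cos = (R * T).sum(2) / (np.linalg.norm(R, axis=2)
                            * np.linalg.norm(T, axis=2) + 1e-12)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))          # spectral angle (SAM)
    # High value = more likely changed: low correlation, large distance/angle.
    return (rescale(1 - rho) + rescale(d_ch) + rescale(theta)) / 3.0
```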
To reinforce the boundaries of the changed/unchanged regions in the index CSI, the gradient magnitude of the index is first calculated as

Gmag_CSI = |∇(G_σ * CSI)|,

where G_σ * CSI stands for the convolution of the CSI index with a Gaussian smoothing kernel G_σ, and ∇ denotes the gradient operator. The maximum operator is then used to combine the complementary information of the index CSI and its gradient magnitude Gmag_CSI into the enhanced index

ECSI = max(CSI, Gmag_CSI).

The ECSI index makes a trade-off between preserving geometrical detail (edges and corners) and enhancing changed regions. It should be noted that Gmag_CSI is rescaled to [0, 1] before applying the max operator. Generally, due to the coarse resolution of ECSI and the complexity of land surface features, a binary segmentation often fails to reflect the real changed/unchanged regions. To address this issue, automatic multilevel thresholding is applied to the enhanced index to generate the change map CM = {cm(p, q) ∈ {0, 1, 2} | 1 ≤ p ≤ P, 1 ≤ q ≤ Q}, in which "0" and "1" indicate that the pixel belongs to the certainly changed and certainly unchanged classes, respectively, whereas "2" marks pixels belonging to the uncertain class. The low and high thresholds Th₁ and Th₂ used for this three-way labeling are determined by fractional-order Darwinian particle swarm optimization (FODPSO) thresholding [36], chosen for its efficiency and fast processing. The change map CM is further up-sampled to the original size of the input images using nearest-neighbor interpolation and is denoted CM^U.

For the fine no-change selection, a local outlier detection is introduced, inspired by [37], to select reliable and well-spatially distributed PIFs from different LULCs by making decisions based on a hypothesis test. To this end, the target image T is first converted to the grayscale image T_G using CorrC2G [38] and then partitioned into c clusters. Since satellite images typically include multiple features with overlapping distributions, fuzzy clustering algorithms have been found to be very beneficial [39,40]. In this study, the HFCM clustering [32] was selected for this task because it operates on the histogram of the input image instead of the entire image, leading to a much faster process and reduced memory usage. The HFCM algorithm needs the number of clusters as an input, which should be optimally determined. Here, the optimal number of clusters c_opt is selected self-adaptively by analyzing the Xie-Beni (XB) index [41] (Equation (15)), following the workflow of Figure 2; the XB index measures the ratio of the histogram-weighted within-cluster compactness to the minimum separation between the cluster centers, where h(l), l = 1, 2, . . . , L, is the histogram of the image T_G with L gray levels (i.e., L = max T_G), d(·,·) is the distance metric between two variables, c_n is the cluster number, and the membership function u_cl (Equation (16)) and the cluster centers υ_c (Equation (17)) are obtained by applying the HFCM algorithm to the image T_G. It is worth noting that each pixel value of the image T_G is normalized according to Equation (18) to ensure that it lies in the range [0, 255], i.e., T_G(i, j) ∈ [0, 255]. After partitioning the image T_G into c_opt clusters, the pixel pairs of the input images belonging to the unchanged class are first picked from each cluster and considered as the definite PIFs for that cluster.
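The clustering step can be sketched as fuzzy c-means run directly on the 256-bin gray-level histogram, which is the idea behind HFCM, with the cluster number chosen by the Xie-Beni index. The code below is a simplified stand-in for the HFCM of [32], not the authors' implementation, and for brevity it takes the global minimum of XB rather than the first minimum.

```python
# Simplified histogram-based fuzzy c-means with XB-based model selection.
import numpy as np

def hfcm(hist, c, fuzzifier=2.0, iters=100):
    """Fuzzy c-means on a gray-level histogram (simplified HFCM)."""
    levels = np.arange(hist.size, dtype=float)
    v = np.linspace(levels.min(), levels.max(), c)       # initial centers
    u = np.full((c, hist.size), 1.0 / c)
    for _ in range(iters):
        d = np.abs(levels[None, :] - v[:, None]) + 1e-9  # (c, L) distances
        w = d ** (-2.0 / (fuzzifier - 1.0))
        u = w / w.sum(axis=0, keepdims=True)             # memberships u_cl
        um = (u ** fuzzifier) * hist[None, :]            # histogram weighting
        v = (um * levels[None, :]).sum(axis=1) / um.sum(axis=1)
    return u, v

def xie_beni(hist, u, v, fuzzifier=2.0):
    levels = np.arange(hist.size, dtype=float)
    d2 = (levels[None, :] - v[:, None]) ** 2
    compact = ((u ** fuzzifier) * hist[None, :] * d2).sum()
    sep = min((v[i] - v[j]) ** 2
              for i in range(v.size) for j in range(v.size) if i != j)
    return compact / (hist.sum() * sep)

def optimal_clusters(hist, c_range=range(2, 11)):
    scores = {c: xie_beni(hist, *hfcm(hist, c)) for c in c_range}
    return min(scores, key=scores.get)                   # c_opt
```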
The hypothesis-test-based method is then used to eliminate outliers from the pixels of the uncertain class in each cluster, using statistical parameters estimated from the definite PIFs. As a result, more reliable PIFs can be extracted in each cluster, enabling accurate modeling between the reference and target images. Consider, in each cluster, R_u/c and T_u/c as the vectors of pixel values belonging to the uncertain class in each spectral band of the reference and target images, respectively. Suppose they follow a BVN distribution, so that their joint probability density function (PDF) can be written as

f(Z_u/c) = (2π)^{−1} |Σ|^{−1/2} exp[ −(1/2)(Z_u/c − µ)^T Σ^{−1} (Z_u/c − µ) ],  (19)

where Z_u/c^T = (R_u/c, T_u/c), and µ and Σ are, respectively, the mean vector and covariance matrix generated from R_u/c and T_u/c in each cluster. As is clear from Equation (19), (Z_u/c − µ)^T Σ^{−1} (Z_u/c − µ) is the squared Mahalanobis distance of an input sample, which follows a chi-square distribution with 2 degrees of freedom. Accordingly, the probability contour of the BVN distribution can be defined for each cluster as

(Z_u/c − µ)^T Σ^{−1} (Z_u/c − µ) = ξ,  (20)

where ξ is the scale of the probability contour, determined as ξ = χ²_{1−∝,2}, in which ∝ is a given level of significance (e.g., the 95% probability contour corresponds to the ∝ = 0.05 level of significance). As can be seen from Equation (20), the performance of the probability contour is highly dependent on the estimation of its statistical parameters µ and Σ. The uncertain class may include many noisy and anomalous samples in each cluster, leading to incorrect statistical parameter estimates and distortion of the RRN results. To address this problem, we use the unchanged samples (i.e., the definite PIFs) to estimate µ and Σ correctly for each cluster. The probability contour at the given significance level is therefore updated with these parameters and adopted directly to form the critical region of the hypothesis test in each cluster. Under the null hypothesis of no change, all uncertain pixel values falling inside the critical region are added to the set of PIFs in the corresponding band. Since this process is implemented band by band, a majority voting rule over the spectral bands produces the final decision for selecting PIFs from the uncertain pixels of each cluster.
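The PIF refinement just described amounts to a Mahalanobis-distance test against a chi-square(2) quantile. The sketch below estimates µ and Σ from the definite PIFs of one cluster and band and accepts the uncertain pairs falling inside the probability contour; the variable names are ours.

```python
# Sketch of the hypothesis-test-based PIF refinement for one cluster/band.
import numpy as np
from scipy.stats import chi2

def refine_pifs(defs_R, defs_T, unc_R, unc_T, alpha=0.05):
    """Keep uncertain (reference, target) pairs inside the BVN contour."""
    Z0 = np.column_stack([defs_R, defs_T])       # definite PIF pairs
    mu = Z0.mean(axis=0)
    Sigma = np.cov(Z0, rowvar=False)
    Sinv = np.linalg.inv(Sigma)
    Z = np.column_stack([unc_R, unc_T]) - mu     # uncertain pairs, centered
    md2 = np.einsum('ij,jk,ik->i', Z, Sinv, Z)   # squared Mahalanobis dist.
    return md2 <= chi2.ppf(1 - alpha, df=2)      # True = accept as PIF
```

Applying this band by band and then majority-voting the boolean masks across bands reproduces the selection rule described in the text.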
2.1.2. Step 2: Fusion-Based RRN Modeling

This step aims to adjust the target image to the reference image through a novel fusion-based RRN modeling. To this end, two normalized images, T_N^G and T_N^L, are first generated through global and local RRN modeling, respectively, based on the clustered PIFs. The relation between the target and reference images is first modeled globally through a band-by-band RLR, where α^G_b and β^G_b are respectively the global slope and intercept for the bth spectral band, estimated using the iteratively reweighted least-squares (IRLS) method [42] from the gray levels of the clustered PIFs in the images T and R. The IRLS process starts from initial values of α_b and β_b. At each iteration τ, the non-negative weights ψ of the clustered PIFs are estimated from the previous iteration using the bisquare estimator [43], and the new coefficients are then obtained as weighted least-squares solutions, where N_PIF is the total number of clustered PIFs; t_{b,y} and r_{b,y} are respectively the gray levels of the yth clustered PIF in the bth spectral band of the images T and R; and µ_t^b(τ−1) and µ_r^b are the weighted means of the gray levels of the clustered PIFs in the bth spectral band of T and R, calculated from the previous iteration. The last two stages are repeated until the estimated normalization coefficients converge to optimal values. The RLR is much more robust to outliers than other conventional linear regression models, owing to the efficient weighting function in its procedure [42].

Global RRN modeling generates a uniform normalized image in which no discontinuity problem is observed. However, such an approach may be insufficient to model the complex relationship between the target and reference images, especially for datasets with different LULC types. Additionally, when the number of PIFs in one of the clusters is high, the global RRN modeling results may be biased toward that cluster. To address these problems, the CRLR is introduced as a local RRN model that estimates the relation between the image pair band by band and cluster by cluster, where α^L_{b,o} and β^L_{b,o} are respectively the slope and intercept for the oth cluster of the bth spectral band, calculated by the IRLS technique, and W_{b,o} is a weight matrix computed separately for each cluster from the inverse distance of the target-image pixel values to the corresponding cluster center (Equation (27)). Such an RRN model can generate a normalized image that is well adjusted to the reference image across different LULCs. Furthermore, the weight matrix embedded in this model helps decrease potential bias in the RRN results. However, the normalized image generated by the local model is not as uniform as that of the global model: discontinuities along the LULC edge boundaries can be observed in its results. Thus, the global and local RRN approaches each have advantages and weaknesses in producing the normalized image.
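The following sketch shows a band-wise RLR fit via IRLS with Tukey's bisquare weights, as used for the global model. The tuning constant 4.685 and the MAD-based scale are standard choices assumed here; the paper's exact schedule may differ. The cluster-wise CRLR can be obtained by calling the same routine per cluster, optionally folding the inverse-distance weights of Equation (27) into the weight vector.

```python
# Sketch of the band-wise robust fit: IRLS with Tukey's bisquare weights.
import numpy as np

def rlr_irls(t, r, iters=20, tune=4.685):
    """Fit r ~ alpha*t + beta robustly from PIF gray-level vectors t, r."""
    t = np.asarray(t, float); r = np.asarray(r, float)
    X = np.column_stack([t, np.ones_like(t)])
    w = np.ones(t.size)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(X * sw[:, None], r * sw, rcond=None)
        res = r - X @ coef
        s = np.median(np.abs(res - np.median(res))) / 0.6745 + 1e-12
        u = res / (tune * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # bisquare
    return coef                                # (alpha_b, beta_b)

# Global model: T_N^G = alpha_b * T_b + beta_b per band.  For the CRLR,
# call rlr_irls on the PIFs of each cluster separately.
```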
Accordingly, we looked for a strategy that decreases the discontinuity along the LULC edge boundaries and reduces the model bias problem. This can be resolved by fusing information from the normalized images T_N^G and T_N^L. A variety of fusion approaches are available in the literature for merging information from multiple sources. In this study, we used the Choquet integral operator [33] to construct the fused normalized image T_N^C, as it utilizes fuzzy measures in its calculation, allowing it to consider all possible combinations of criteria in the decision-making process [44].

Suppose that we have m normalized images, T_N = {T_N^1, T_N^2, . . . , T_N^m}, for fusion, and denote by x_{b,n} the gray level of the kth normalized image T_N^k at the nth pixel in the bth spectral band. The discrete Choquet integral on the instance x_{b,n} is calculated, following [45], by sorting the source values and weighting the sorted differences by the fuzzy measures g(A_k) of the corresponding source subsets. When g is a λ-fuzzy measure, the g(A_k) values are determined recursively [34], with g(T_N^0)_b = 0; the parameter λ, which quantifies the degree of interaction between two components, is obtained from the boundary condition g(T_N)_b = 1 [45]. For the fusion of the two normalized images T_N^G and T_N^L, the domain is defined as T_N = {T_N^G, T_N^L}. In this domain, and according to Equation (28), the pixel value of the fused image T_N^C is obtained from the fuzzy densities g({T_N^G}) and g({T_N^L}), which are determined from the inverse of the absolute gray-level difference between each normalized image and the reference image at the nth pixel of the bth spectral band. Substituting g({T_N^G}) and g({T_N^L}) into the Choquet integral then gives the fused gray level at each pixel.
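For two sources the Choquet fusion simplifies considerably, since the measure of the full set is fixed at 1. The sketch below uses the inverse absolute differences from the reference image as fuzzy densities, as in the text, but normalizes them to sum to one, which makes the measure additive (λ = 0); this is a simplification of the paper's λ-fuzzy measure, not a faithful reproduction of its numbered equations.

```python
# Sketch of the per-pixel, per-band Choquet fusion of the global (t_ng)
# and local (t_nl) normalized images against the reference image (ref).
import numpy as np

def choquet_fuse(t_ng, t_nl, ref, eps=1e-6):
    g1 = 1.0 / (np.abs(t_ng - ref) + eps)        # density of global source
    g2 = 1.0 / (np.abs(t_nl - ref) + eps)        # density of local source
    s = g1 + g2
    g1, g2 = g1 / s, g2 / s                      # normalize densities
    hi = np.maximum(t_ng, t_nl)                  # larger source value
    lo = np.minimum(t_ng, t_nl)                  # smaller source value
    g_hi = np.where(t_ng >= t_nl, g1, g2)        # density of the larger value
    # Discrete Choquet integral for two sources with g(full set) = 1.
    return hi * g_hi + lo * (1.0 - g_hi)
```

By construction, the fused value leans toward whichever normalized image is closer to the reference at that pixel, which is what suppresses both the local model's seams and the global model's bias.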
Datasets

As part of this study, a simulated dataset and four real bi-temporal optical images taken by the Landsat-7, Terra, Sentinel-2, and IRS satellites were used to evaluate the proposed SRRN method (see Figure 3). The key reasons for choosing these datasets were their diversity in terms of satellite sensor, geographical coverage, and atmospheric conditions. The main characteristics of the real datasets are summarized in Table 1.

Simulated dataset (Figure 3a,c, first row): the uncalibrated image was considered as the target image, whereas its adjusted version (in terms of contrast and brightness) with some simulated changed areas was selected as the reference image. The target image includes different cropland types with low contrast and brightness. Following RRN, the contrast and illumination of the target image must be similar to those of the reference image, and the croplands should be discernible as in the simulated reference image.

Dataset 2 (Figure 3a,c, third row): there are many cropland, vegetation, and soil changes, as well as illumination differences, in this bi-temporal data. The ASTER image taken in July 2002 was used as the reference image because of its realistic illumination and spectral content; the one acquired in July 2003 was considered the target image. After applying RRN, the target image is expected to be harmonized with the reference image in terms of contrast, brightness, and spectral content. It is worth noting that the 30 m/pixel bands of the ASTER images were also sharpened to 15 m/pixel by the PCA-based PAN-sharpening algorithm [46].

Dataset 3 (Figure 3a,c, fourth row): these images were acquired in the same month but under different atmospheric conditions. There are significant water body changes as well as illumination variations caused by slightly different viewing angles in this bi-temporal data. The Sentinel-2 image taken in April 2016 was employed as the reference image; atmosphere, terrain, and cirrus corrections were performed on it with the Sen2Cor model available in the SNAP software. The uncalibrated Sentinel-2 image (at processing Level-1C), also acquired in April 2016, was used as the target image. After applying RRN, the spectral content of the target image is intended to be rectified based on that of the reference image. Moreover, the 20 m/pixel and 60 m/pixel bands of the Sentinel-2 dataset were enhanced to 10 m/pixel by the Sen2Res model [47], available in the Sentinel Application Platform.

Dataset 4 (Figure 3a,c, last row): there are many LULC changes as well as mountain offsets caused by different viewing angles in this bi-temporal data. The IRS image taken in July 1998 was employed as the reference image because of its better brightness and contrast than the one acquired in May 2007. After applying RRN, the target image is expected to be well adjusted to the reference image in terms of spectral content and from a visual point of view.

All spectral bands of the image pairs, except the thermal, cirrus, and panchromatic bands, were used in the RRN process. The ground truth of the change maps, shown in Figure 3d, was generated by post-classification comparison and manual analysis of the image pairs. It is worth noting that only the unchanged pixels in these maps were considered for RRN validation (experiments of Sections 3.1, 3.3, and 3.4), so as to obtain fair results.

Evaluation Metrics

To quantify the global performance of the proposed SRRN method and enable comparative experiments, the root mean square error (RMSE) was used in this study (Equation (35)):

RMSE_b = sqrt( (1/N_c) Σ_{u=1}^{N_c} [R_b(u) − T_N^b(u)]² ),  (35)

where N_c is the total number of unchanged pixels in the binary change map. A low RMSE indicates acceptable RRN results. To validate the performance of the proposed SRRN method locally, the cross-correlation (CC) to average mean absolute percentage error (AMAPE) ratio index (CAMRI) is suggested, calculated for each existing LULC, where N_cl denotes the number of test samples in that LULC, and R̄_b and T̄_N^b are respectively the average values of these samples in the bth spectral band of the reference and normalized target images. A higher CAMRI value indicates a better RRN.
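The evaluation metrics can be sketched as follows. The RMSE over unchanged pixels follows Equation (35) directly; the CAMRI formula is only partially recoverable from the text, so the version below (cross-correlation divided by the MAPE of the band means) is our reading of it rather than the paper's exact definition.

```python
# Sketch of the global (RMSE) and local (CAMRI) evaluation metrics.
import numpy as np

def rmse_unchanged(ref_b, norm_b, unchanged_mask):
    """Per-band RMSE over the unchanged pixels of the change map."""
    d = ref_b[unchanged_mask] - norm_b[unchanged_mask]
    return np.sqrt(np.mean(d ** 2))

def camri(ref, norm, lulc_mask):
    """ref, norm: (H, W, N) images; lulc_mask: test samples of one LULC."""
    R = ref[lulc_mask]                            # (N_cl, N) sample matrix
    T = norm[lulc_mask]
    cc = np.corrcoef(R.ravel(), T.ravel())[0, 1]  # cross-correlation
    amape = np.mean(np.abs((R.mean(0) - T.mean(0)) / (R.mean(0) + 1e-12)))
    return cc / (amape + 1e-12)
```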
Analysis of the Coarse-to-Fine PIFs Selection

As clustering is an essential component of the proposed coarse-to-fine PIFs selection, we present the results of the HFCM algorithm for all datasets in Figure 4a-c. The XB values for cluster numbers from 2 to 10 during the clustering process are shown in Figure 4a for each dataset. For the best cluster number in each dataset, the clustering results and membership maps are shown in Figure 4b,c, and the selected PIFs in each cluster are illustrated in Figure 4d for all datasets.

As can be seen from the subplots in Figure 4a, and according to the flowchart presented in Figure 2, the optimal cluster numbers were self-adaptively selected as 4, 4, 5, 3, and 4, respectively, for the simulated dataset and datasets 1-4. The optimal cluster number achieved for each dataset corresponds to the first minimum of the XB index, which appeared consistent with the real number of clusters in the target image (see Figure 4b,c). As is evident from the clustering results and their membership maps, the decolorized target images were well categorized from dark to bright clusters. As a result, robust and well-spatially distributed PIFs were collected from these clusters through the proposed hypothesis-test-based method, in good agreement with the physical properties of the ground surface types. For example, the clustered PIFs were extracted from the dark and bright croplands of the simulated dataset using the proposed coarse-to-fine strategy (see Figure 4d, first row). The PIFs were also collected from different LULC types of dataset 1, such as water bodies and wetlands (dark regions), dense vegetation (gray regions), sparse vegetation (light gray regions), and mountainous and bare soil land cover (bright regions) (see Figure 4d, second row). For dataset 2, the clustered PIFs were mainly selected from the river and dark-green farmlands (dark regions), wetlands (dark-gray regions), irrigated croplands (gray regions), harvested croplands (light-gray regions), and bare soil land covers (bright regions) (see Figure 4d, third row). The clustered PIFs of dataset 3 were composed of the water bodies, sandy and rocky areas, and the dark, gray, and bright ground surfaces, respectively (see Figure 4d, fourth row). For dataset 4, they were composed of dark to bright samples collected from the valleys, dense and sparse vegetation, and rocky areas (see Figure 4d, last row).

To analyze the efficiency of the proposed coarse-to-fine strategy, its RRN results were compared, in terms of RMSE and computing time, to those obtained using the same approach without the down-sampling process, without the hypothesis-test-based method, and with CVA used instead of CSI (Figure 5). The linear RLR was selected as the core of RRN modeling in this experiment to provide a fair comparison and to investigate the quality of the clustered PIFs generated by the coarse-to-fine approach under the mentioned conditions. The experiments and the computation-time analysis were conducted on all considered datasets in MATLAB (version 2020a) on an Intel Core (TM) i7-3770 CPU with 16 GB of RAM.
It is evident from the bar charts in Figure 5a-e that using CVA instead of the CSI index in the proposed coarse-to-fine PIFs selection resulted in a significant reduction in RRN performance for most of the analyzed datasets. For example, the average RMSE degraded by 7.37 in the most affected case (the simulated dataset) and by 0.05 in the least affected case (dataset 2) after using CVA instead of the CSI index, whereas it improved by 0.42 only for dataset 3. These results may be due to the high sensitivity of CVA to radiometric differences between image pairs, because it directly employs only the information of the spectral bands to generate a difference/change index. In contrast, the CSI index integrates the information obtained from the distance, angle, and correlation between the spectral bands of the image pairs in the comparison process, which is less affected by radiometric distortions. However, the running time was reduced by almost 21% when using CVA in the PIF selection process, which can be attributed to the simplicity of the CVA calculations (see Figure 5f). After using the hypothesis-based test in the PIF selection process, the average RMSEs improved by 4.22% for the simulated dataset and by 3.83%, 1.68%, 7.34%, and 1.05%, respectively, for datasets 1-4, indicating its efficacy in PIF refinement (see Figure 5a-e). As expected, using a refinement algorithm raised the computational cost of the process: the hypothesis-based test increased the execution time of the proposed PIF selection by almost 40% in most cases, which is worth accepting for better results. The PIF selection without down-sampling yielded the best performance on most of the datasets, but there was no significant difference between its RRN accuracy and that of the proposed method. For example, when the down-sampling process was discarded from the proposed method, the average RMSE was reduced by only 0.11%. Furthermore, compared to the other approaches, the proposed PIF selection without down-sampling required the longest processing time in all cases (see Figure 5f). These findings reveal that adopting down-sampling in the PIF selection process not only reduces execution time but also provides satisfactory RRN accuracy for most datasets.
Comparative Results of the RRN Modeling

To evaluate the competence of the RLR and CRLR models, they were compared with the two models most widely used in the RRN process, ordinary least squares (OLS) [48] and orthogonal distance regression (ODR) [49], in terms of RMSE and processing time (see Figure 6). For this experiment, 67% of the PIFs from each cluster were randomly selected for training, and the rest were used to test the models (see Figure 6a). In addition, we calculated the running time only for the modeling step (see Figure 6c). As depicted in Figure 6b, the proposed CRLR model obtained the best RRN modeling performance of all considered models on all datasets, indicating its robustness and effectiveness in model fitting. For example, the CRLR outperformed the RLR, ODR, and OLS by 5.20%, 22.19%, and 7.48%, respectively, in the best case (dataset 2) and by ~1.5% in the worst case (dataset 3) in terms of average RMSE. Moreover, the RLR performed somewhat better than the OLS, improving the average RMSE of the OLS by ~1% for all datasets. This is mainly because the RLR, like the CRLR, benefits from the bisquare weighting function, resulting in more robust results. Among all considered models, the ODR had poor RRN results on the considered datasets, producing somewhat large RMSEs compared with the other models. The main reason may be the large number of training samples and the variety of their errors, leading to errors in fitting and estimating the coefficients. Although the CRLR and RLR had better quantitative results than the ODR and OLS, both were computationally intensive. This was more obvious for datasets 2 and 3, which have more spectral bands than the other datasets (see Figure 6c). In summary, the CRLR and RLR typically require a much larger computational volume than the conventional models used in the RRN modeling phase, which can be seen as a weakness of these models.

Effects of the Fusion-Based RRN Modeling

In this section, we evaluate the performance of the proposed fusion-based RRN model against local and global modeling on the analyzed datasets. The experiments were performed with the clustered PIFs selected by the proposed coarse-to-fine process. Figure 7 shows comparative results between the proposed fusion-based RRN model and the local and global RRN modeling in terms of accuracy and visual quality. As demonstrated in Figure 7a-e, the normalized images produced by the proposed fusion strategy were visually more similar to the corresponding reference images than those generated by the local and global models, indicating its effectiveness in the RRN process. Moreover, the proposed fusion-based strategy significantly reduced the amount of bias and the discontinuities present in the normalized images of datasets 3 and 4 produced by the local and global RRN models. For example, the local RRN modeling generated undesirable normalized images that were not well harmonized with the reference images of datasets 3 and 4, whereas the global RRN modeling produced blurred normalized images.
Effects of the Fusion-Based RRN Modelling In this section, we evaluated the performance of the proposed fusion-based RRN model against the local and global models over the analyzed datasets. The experiments were performed with the clustered PIFs selected by the proposed coarse-to-fine process. Figure 7 shows comparative results between the proposed fusion-based RRN model and the local and global RRN modeling in terms of accuracy and visual quality. As demonstrated in Figure 7a-e, the normalized images produced by the proposed fusion strategy were more visually similar to the corresponding reference image than those generated by the local and global models, indicating its effectiveness in the RRN process. Moreover, the proposed fusion-based strategy significantly reduced the bias and discontinuities present in the normalized images of datasets 3 and 4 produced by the local and global RRN models. For example, local RRN modeling generated undesirable normalized images that were not well harmonized with the reference images in datasets 3 and 4, whereas global RRN modeling produced blurred normalized images. Notably, the fused normalized images for these two datasets were in good agreement with the relevant reference images (see Figure 7a-e, second and third rows). On the other datasets, there were few visual differences between the normalized images generated by the compared methods, all of which visually matched their reference images. Based on the results illustrated in Figure 7e, the RMSEs were reduced over most of the spectral bands of the datasets after fusing the normalized images generated by the local and global models with the fuzzy Choquet integral. For example, when the proposed fusion technique was used in RRN modeling instead of a single local model such as the cluster-wise RLR, the average RMSEs were reduced by 8.63% for the simulated dataset and by 2.83%, 6.40%, 0.62%, and 1.69% for datasets 1-4, respectively. Moreover, the average RMSEs improved by 1.90% for the simulated dataset and by 8.84%, 3.57%, 1.06%, and 4.02% for datasets 1-4, respectively, when the proposed fusion strategy was used instead of a single global model such as the RLR. This is mainly because the differences between the reference and normalized target images serve as the fuzzy Choquet integral memberships during the fusion process. In addition, local RRN modeling provided better results for datasets 1, 3, and 4, while global RRN modeling delivered better results for the simulated dataset and dataset 2. Based on our results, a single RRN model was not sufficient to achieve the best qualitative and quantitative results, and a fused approach, such as the one proposed in this study, can lead to better results by optimally combining the various normalized images.
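As a concrete illustration of the fusion step, the sketch below implements a per-pixel discrete Choquet integral over the two candidate sources, the locally and globally normalized images. The paper derives the fuzzy memberships from the differences between the reference and the normalized target images; the specific membership construction used here (closeness to the reference, with the measure of the full source set fixed at 1) is an assumption for illustration, as are all names and the toy data.

import numpy as np

def choquet_fuse(local_img, global_img, mu_local, mu_global):
    # Per-pixel discrete Choquet integral of two non-negative sources:
    # C = min + (max - min) * mu(top source), assuming mu(both) = 1.
    lo = np.minimum(local_img, global_img)
    hi = np.maximum(local_img, global_img)
    mu_top = np.where(local_img >= global_img, mu_local, mu_global)
    return lo + (hi - lo) * mu_top

def membership_from_error(ref, norm, eps=1e-12):
    # Hypothetical membership: a source closer to the reference gets a
    # larger weight (the paper builds memberships from such differences).
    err = np.abs(ref.astype(float) - norm.astype(float))
    return 1.0 - err / (err.max() + eps)

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, (32, 32))
local_norm = ref + rng.normal(0, 4, ref.shape)    # e.g. cluster-wise RLR output
global_norm = ref + rng.normal(0, 8, ref.shape)   # e.g. global RLR output
fused = choquet_fuse(local_norm, global_norm,
                     membership_from_error(ref, local_norm),
                     membership_from_error(ref, global_norm))
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print([round(rmse(ref, img), 2) for img in (local_norm, global_norm, fused)])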
The spectral characteristics of different LULC types in multi-temporal images may be unexpectedly affected by radiometric distortions. Thus, a local assessment is a practical way of evaluating how well the proposed RRN methods preserve the spectral characteristics of LULC types in image pairs. To this end, the spectral signatures of several LULC types (e.g., vegetation, water, rocky mountains) were compared before and after normalization using the proposed SRRN method under local, global, and fusion-based RRN modeling (see Figure 8). The CAMRI values before and after normalization were also compared over the LULC classes (see Figure 9). Multiple polygons from the LULC classes were manually selected for these experiments on the unchanged areas of the reference, target, and normalized images. As shown in Figure 8a-d, the spectral signatures of LULC types in the target images were well adjusted after normalization with the proposed SRRN method under the different modeling schemes. Moreover, these results were in good agreement with the CAMRI values reported for the datasets before and after RRN (see Figure 9a-d). In more detail, the spectral signatures of vegetation and water bodies in the normalized image generated by the local approach closely matched those of the reference image for dataset 1. The fusion-based method also had the best performance, with a CAMRI value of ~0.18, in the rocky mountain areas of dataset 1. Similarly, the proposed fused approach provided spectrally better RRN results for the vegetation and soil LULC types, with the highest CAMRI values, although the local model yielded better results for water bodies in dataset 2. The local, global, and fusion-based RRN modeling had the same performance for the different LULCs included in dataset 3, as shown in Figures 8c and 9c. The local and global models also had nearly the same RRN performance for the LULCs of dataset 4, whereas the proposed fusion-based model produced better results in terms of spectral similarity and CAMRI values (see Figures 8d and 9d). Although the SRRN method under the mentioned models had locally acceptable results, small shifts were observed between the spectral signatures of some of the LULCs in the reference and normalized images. Such shifts were mainly observed in water bodies due to particles floating in these areas, such as phytoplankton, pollution, and sediments, which affect the apparent color of the water between the acquisition times of the reference-target image pair. Moreover, differences between the spectral signatures of vegetation in the reference and normalized images were mostly observed for bi-temporal images acquired in different seasons (e.g., datasets 1, 2, and 4). This can be mainly attributed to differences in vegetation phenological properties.
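A minimal sketch of the polygon-based local assessment described above follows: mean spectral signatures are extracted over a manually delineated LULC polygon and compared before and after normalization. The CAMRI formula is not reproduced in this section, so a band-wise signature RMSE is used as a stand-in agreement measure; the mask, images, and names are hypothetical.

import numpy as np

def class_signature(img, mask):
    # Mean spectrum of pixels inside a LULC polygon mask (H x W bool).
    return img[mask].mean(axis=0)   # -> (bands,)

def signature_shift(ref, other, mask):
    # Band-wise RMSE between class signatures, a simple stand-in for
    # the paper's CAMRI agreement measure (formula not reproduced).
    s_ref = class_signature(ref, mask)
    s_other = class_signature(other, mask)
    return float(np.sqrt(np.mean((s_ref - s_other) ** 2)))

rng = np.random.default_rng(3)
H, W, B = 64, 64, 6
ref = rng.uniform(0, 255, (H, W, B))
target = 0.8 * ref + 12                                 # radiometric distortion
normalized = 1.02 * ref + rng.normal(0, 2, ref.shape)   # after SRRN
water = np.zeros((H, W), bool)
water[10:30, 10:30] = True                              # toy water-body polygon
print("before:", round(signature_shift(ref, target, water), 2),
      "after:", round(signature_shift(ref, normalized, water), 2))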
Comparative Results of the SRRN Methods To evaluate the efficiency of the proposed method, it was compared with our implementations of the IRMAD [15], multi-Otsu-based [29], MPIF [22], ASCRRF-based [27], GMM-EE [24], HOG-based [31], and FLSM-based [3] methods in terms of quantitative and qualitative results, as well as operation time (see Tables 2-6 and Figure 10). These SRRN algorithms were set up as described in their respective publications. This study did not report the computational time of the HOG-based [31] SRRN technique, since it is a semi-supervised SRRN approach that requires training data for its classification process. As can be seen from Tables 2-6, the RMSE values were significantly reduced after applying all considered SRRN methods over all datasets. Among these models, the proposed method obtained the highest accuracy in minimizing radiometric errors in the target images of the analyzed datasets. Specifically, the average raw RMSEs decreased significantly from 69.13 to 9.31 (~87%), from 88.82 to 23.32 (~74%), from 36.12 to 13.74 (~62%), from 2762.72 to 569.35 (~79%), and from 63.77 to 34.88 (~45%) for the simulated dataset and datasets 1-4, respectively. Compared with the implemented methods, the proposed method also improved the average RMSEs by ~27% and ~1% in the best and worst cases, i.e., the simulated dataset and dataset 3, respectively. This was mainly because the proposed method took advantage of the efficient coarse-to-fine PIF selection and fusion-based RRN modeling in its process. The multi-Otsu-based method [29] had the worst RRN performance in most cases, except for dataset 3, on which the HOG-based method [31] achieved a lower average RMSE than the other well-known methods. This is mainly because the multi-Otsu-based method uses Otsu thresholding for segmentation, which is highly sensitive to non-uniform illumination and imperfectly segments images with such distortions. The ASCRRF-based method [27] also performed relatively poorly in most cases, because it uses RF regression in RRN modeling, which is highly prone to overfitting and needs parameter tuning for each dataset. Although the MPIF method [22] had a moderate RRN performance, it could not fully handle the radiometric distortions between image pairs in most cases, as shown in Tables 2-5. This may be related to its use of several threshold-based sampling rules in PIF selection that do not take LULC types into account, resulting in insufficient PIFs with an unsuitable spatial distribution. The IRMAD [15], FLSM-based [3], and GMM-EE [24] methods were the strongest of the considered baselines, providing comparatively reasonable results for most of the analyzed datasets. Notably, the IRMAD method [15] performed remarkably well on the simulated dataset compared with the FLSM-based [3] and GMM-EE [24] methods. The reason can be found in the nature of the IRMAD method, which uses only statistical properties to detect PIFs. In general, the IRMAD [15], FLSM-based [3], and GMM-EE [24] methods do not consider LULC types in the process of PIF selection and optimization, resulting in poorer RRN results than the proposed method. On the other hand, the HOG-based method [31] had an unstable RRN performance among the evaluated methods: it yielded reasonable results for the simulated dataset and datasets 2 and 4, but performed poorly on datasets 1 and 3. This instability is attributed mainly to the method's dependence on the value of the correlation-coefficient threshold and on the accuracy of the classification, both of which may vary across datasets. As shown in Figure 10a-j, after normalization with the considered SRRN methods, the target images were visually much closer to the relevant reference images. Moreover, the visual RRN results were in line with the quantitative results reported in Tables 2-5. In more detail, the normalized images generated by the proposed method were better harmonized with the relevant reference images than those of the other methods, indicating the high efficacy of the proposed method in reducing radiometric distortions in the target images (see Figure 10i). The HOG-based [31], FLSM-based [3], and GMM-EE [24] methods also generated reasonable normalized images, which were visually well matched with the corresponding reference images in most cases (see Figure 10f-h). However, they produced contrast-distorted normalized images for the simulated dataset, as shown in the first row of Figure 10f-h. The IRMAD method [15] also returned dependable normalized images, albeit with more contrast and saturation than the reference images in most cases, while generating a low-brightness normalized image for dataset 2 (see Figure 10b). The multi-Otsu-based method [29] generated blurred normalized images for the simulated dataset and datasets 1 and 4, while it produced more appropriate normalized images for datasets 2 and 3 (see Figure 10c).
This can be mainly attributed to the fluctuating performance of the Otsu algorithm in dealing with non-uniform illumination (e.g., the simulated dataset and datasets 1 and 4). The MPIF method [22] visually yielded unsatisfactory results, especially for the simulated dataset and dataset 4, where it generated normalized images with color artifacts (see Figure 10d). This is mainly due to the sensitivity of this method to the numerical thresholds of its sampling rules and its failure to consider different LULC types in its process. The normalized images of the ASCRRF-based method [27] also contained color artifacts for the simulated dataset and datasets 1 and 4, mainly because this method over-fitted when estimating the relationship between the image pairs (see Figure 10e). In terms of computation time, the proposed method was the most efficient for the simulated dataset and dataset 4, while the multi-Otsu-based method [29] was the cheapest on the other analyzed datasets. A major reason for the efficiency of the proposed method is the coarse-to-fine PIF selection, which substantially reduced the execution time. In fact, a slight increase in time cost was observed when using the fusion-based model in the proposed method, but it did not lead to a heavy computational load (see Tables 2-5 and Figure 6f). The IRMAD [15] and MPIF [22] were also among the computationally efficient methods in the RRN of the analyzed datasets, while the ASCRRF-based [27] and GMM-EE [24] were mid-range methods in terms of processing time. In all cases, the FLSM-based method [3] was the most expensive, such that in the best and worst cases its execution times were approximately twice and three times longer than those of the proposed method. This is mostly because the FLSM-based method [3] includes several image processing algorithms that demand massive storage and computational time. Figure 10. Visualizations of the RRN results over the simulated dataset and real datasets 1 to 4, obtained by the proposed method and other well-known SRRN models: (a) target images; and normalized images generated by (b) IRMAD [15], (c) multi-Otsu-based [29], (d) MPIF [22], (e) ASCRRF-based [27], (f) GMM-EE [24], (g) HOG-based [31], (h) FLSM-based [3], (i) the proposed method, and (j) the reference images. Conclusions This study introduced a new SRRN method to provide robust and well-distributed PIFs and a reasonable normalized image. Specifically, a new coarse-to-fine strategy was embedded in the proposed method to efficiently select robust PIFs from different LULC types. A fusion-based model was also developed based on the fuzzy Choquet integral that effectively integrated the two normalized images generated by the global (i.e., RLR) and local (i.e., proposed CRLR) models. The experimental results were evaluated on a simulated dataset and four real datasets, each composed of a bi-temporal image pair acquired by different RS systems. The experimental results demonstrated that the proposed coarse-to-fine approach successfully reduced uncertainties in the PIFs and efficiently selected them from dark to bright regions, in line with the nature of the LULC types. Moreover, the proposed fusion-based RRN modeling led to more accurate results than the local (CRLR) and global (RLR) models alone.
In addition, the spectral signatures of different LULC types in the target images were closer to those of the reference image after normalization with the proposed SRRN method, indicating its high potential in preserving the spectral characteristics of various LULC classes. The proposed method also outperformed the other well-known SRRN methods in terms of accuracy, computation time, and visual quality, indicating its high potential in reducing the radiometric differences between image pairs. Although the current work presented an efficient coarse-to-fine strategy for PIF selection, it employed the RLR and its cluster-wise variant at the core of the RRN modeling, which is computationally expensive. This issue reduces the operational applicability of the proposed SRRN method when dealing with large datasets. Therefore, it is recommended to use more efficient and robust models in the RRN modeling process. Moreover, the proposed method was developed under the assumption of a linear relationship between image pairs and their LULC types. However, this relationship can be nonlinear, especially in image pairs with significant illumination and LULC changes. Therefore, the current fusion scheme could also be further improved by using multiple normalized images generated by more advanced nonlinear mapping functions. Data Availability Statement: Publicly available datasets were analyzed in this study. These datasets can be found here: https://scihub.copernicus.eu/ (accessed on 20 April 2019), https://earthexplorer.usgs.gov/ (accessed on 26 September 2017), and https://www.intelligence-airbusds.com/en/9317-sample-imagery-detail?product=35822 (accessed on 26 January 2022). The dataset presented in this study is available on request from the author.
13,269.2
2022-04-07T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
Aberrant promoter methylation may be responsible for the control of CD146 (MCAM) gene expression during breast cancer progression* CD146 (also known as MCAM, MUC-18, Mel-CAM) was first reported in 1987 as a protein crucial for melanoma invasion. Recently, it has been confirmed that CD146 is involved in the progression and poor overall survival of many other cancers, including breast cancer. Importantly, in independent studies, CD146 was reported to be a trigger of epithelial to mesenchymal transition in breast cancer cells. The goal of our current study was to verify the possible involvement of an epigenetic mechanism behind regulation of CD146 expression in breast cancer cells, as has been previously reported for prostate cancer. First, we analysed the response of breast cancer cells, varying in their initial CD146 mRNA and protein content, to an epigenetic modifier, 5-aza-2-deoxycytidine, and subsequently the methylation status of the CD146 gene promoter was investigated using direct bisulfite sequencing. We observed that treatment with the demethylating agent led to induction of CD146 expression in all analysed breast cancer cell lines, at both the mRNA and protein levels, which was accompanied by elevated expression of selected mesenchymal markers. Importantly, CD146 gene promoter analysis showed aberrant CpG island methylation in 2 out of 3 studied breast cancer cell lines, indicating epigenetic regulation of CD146 gene expression. In conclusion, our study revealed for the first time that aberrant methylation may be involved in the expression control of CD146, a very potent EMT inducer in breast cancer cells. Altogether, the data obtained may provide a basis for novel therapies, as well as diagnostic approaches enabling sensitive and very accurate detection of breast cancer cells. INTRODUCTION Death of breast cancer patients is mainly caused by metastasis, transforming the locally confined disease into a disseminated and usually incurable one (Felipe Lima et al., 2016). Metastasis can be defined as the spreading of cancerous cells into distant organs, followed by formation of a secondary tumor site. At the molecular level, this process is accompanied by actin cytoskeleton rearrangement and attenuation of cell-cell and cell-extracellular matrix adhesive interactions, which altogether resembles the Epithelial to Mesenchymal Transition (EMT), a morphogenetic process observed during development (Brabletz et al., 2018; Nieto et al., 2016). EMT is driven by mesenchymal transcription factors, including Snail, Slug, Zeb, and Twist, which are responsible for altering the epithelial transcriptional profile into a mesenchymal one (Nieto et al., 2016; Felipe Lima et al., 2016). Breast cancer cells undergoing EMT become highly motile and invasive, which is especially apparent in the most aggressive estrogen-negative, progesterone-negative, human epidermal growth factor receptor-negative subtype (ER−/PR−/HER2−), defined as triple-negative breast cancer (TNBC) (Khaled & Bidet, 2019; Felipe Lima et al., 2016). TNBC-diagnosed patients have a relatively poor prognosis and cannot be subjected to endocrine therapy or therapies directed against human epidermal growth factor receptor type 2 (HER2) (Schneider et al., 2008). Interestingly, a recent publication and meta-analysis, covering a high number of solid tumors, revealed a significant association between CD146 protein expression, EMT and poor survival of cancer patients (Zeng et al., 2017).
Importantly, independent studies reported that CD146 is highly expressed in TNBC and in metastatic breast cancer, in contrast to normal tissue and benign tumors (de Kruijff et al., 2018; Garcia et al., 2007; Jang et al., 2015; Zabouo et al., 2009). Moreover, preclinical in vitro and in vivo studies revealed that aberrantly overexpressed CD146 is sufficient to induce acquisition of a mesenchymal phenotype in breast cancer cells (Zeng et al., 2012; Imbert et al., 2012). Nevertheless, although overexpressed CD146 is considered an important oncogene in breast carcinogenesis, there is still a lack of information about the potential regulation of CD146 gene expression in breast cancer cells. Despite the fact that CD146 is overexpressed in cancer cells, its amplification or mutation has been excluded so far (Wang & Yan, 2013). In the study presented here, using breast cancer cell lines as a model, we revealed that the CD146 gene promoter is aberrantly methylated in breast cancer cells and that the demethylating agent 5-aza-2-deoxycytidine can trigger its expression. To our knowledge, this is the first study suggesting a role of epigenetics behind CD146 expression in breast cancer cells. Noteworthy, this finding not only sheds new light on the regulation of CD146 expression during breast carcinogenesis, but also provides an important rationale for novel therapeutic strategies in the future. If confirmed with primary samples, it may contribute to a significant improvement of the diagnostic process, allowing early and very sensitive detection of breast cancer cells. R2 Database and statistical analysis of publicly available data. The R2 database (http://r2.amc.nl) is a simple-to-use web-based tool for analysis and data visualization created at the Department of Oncogenomics in the Academic Medical Center (AMC) in Amsterdam, Netherlands. It gives the opportunity to perform different analyses based on well-annotated datasets. In the study presented here, we selected the tool "Correlate Genes" in the panel of 10 breast cancer data sets in order to check the correlation between CD146 and the panel of mesenchymal markers. The Bonferroni-corrected p-value of the Spearman correlation coefficient was used to show that the panel of mesenchymal marker-encoding genes significantly correlated with CD146 in breast cancer patients. Since the 9 mesenchymal markers were compared with CD146, a p-value below 0.0055 was considered significant (0.05/9). The ten breast cancer data sets chosen for analysis included two sets (GSE7396 and GSE46563) composed of only lymph node negative patients and eight data sets (GSE1456, GSE12276, GSE2109, GSE3494, GSE102484, GSE29271, GSE69031, GSE36771) in which lymph node negativity was not a criterion for patient selection. Since the number of genes correlated with CD146 was lower in the data sets with confirmed lymph node negativity than in the other data sets, we applied a cluster analysis (nearest neighbor, Euclidean distance) algorithm to verify whether these two data sets are indeed different from all the others (Statistica, TIBCO Software Inc.). For the purpose of the analysis, non-significant Spearman correlation coefficients were set to zero. RNA isolation and cDNA synthesis. Total RNA was isolated from harvested cells using the Gene Matrix Universal RNA kit (EURx, Gdansk, Poland). A NanoDrop ND-1000 Spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA) was used to determine the concentration and quality of the isolated RNA.
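The correlation screen described in the statistical-analysis paragraph above (Spearman correlation with a Bonferroni-corrected significance threshold of 0.05/9 = 0.0055) can be sketched as follows; the marker list, patient count, and expression values are hypothetical stand-ins for the R2 data sets.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical nine-gene mesenchymal panel (the paper's exact panel is
# listed in its Table 2; FN1 here is a placeholder entry).
MARKERS = ["SNAI1", "SNAI2", "TWIST1", "ZEB1", "VIM", "CDH2", "FN1", "MMP2", "MMP9"]
ALPHA = 0.05 / len(MARKERS)                      # Bonferroni: 0.0055

rng = np.random.default_rng(4)
n_patients = 120                                 # toy expression matrix
cd146 = rng.normal(size=n_patients)
expr = {g: 0.4 * cd146 + rng.normal(size=n_patients) for g in MARKERS}

for gene in MARKERS:
    rho, p = spearmanr(cd146, expr[gene])
    flag = "significant" if p < ALPHA else "ns"
    print(f"{gene:7s} rho={rho:+.2f} p={p:.2e} {flag}")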
Total RNA (1 µg) was used for cDNA synthesis according to the manufacturer's protocol (EURx, Gdansk, Poland). Reverse transcription polymerase chain reaction (RT-PCR). For the PCR reaction, the Color OptiTaq PCR Master Mix (2x) (EURx, Gdansk, Poland) was used according to the manufacturer's protocol. Sequences of the reverse and forward primers used in this study are presented in Table 1. For CD146, HPRT1, SNAI1, TWIST, ZEB1 and SNAI2, the PCR reactions were carried out as previously described (Dudzik et al., 2019). For CADHERIN1 and CADHERIN2, the conditions were as follows: initial denaturation at 95°C for 5 min; followed by 30 cycles of 30 s at 95°C, 30 s at 58°C and 30 s at 72°C; and final extension at 72°C for 10 min. For MMP2 and MMP9, the following conditions were applied: initial denaturation at 94°C for 3 min; followed by 30 cycles of 45 s at 94°C, 60 s at 59°C and 60 s at 72°C; and final extension at 72°C for 5 min. PCR products were visualized on 1.5% agarose gels stained with ethidium bromide and photographed with a Bio-Rad ChemiDoc™ XRS+ System (Bio-Rad, Hercules, CA, USA). HPRT1 was used as an internal control to ensure equal sample loading. In the case of MDA-MB-231 cells, the RT-PCR analysis of CD146 was performed according to a modified, adjusted PCR protocol in which 25 cycles were used instead of 30 cycles, whereas all other conditions remained unchanged. The CD146 and HPRT1 primers were manufactured by IBB PAN (Warsaw, Poland). All other primers were purchased from Sigma-Aldrich (St. Louis, MO, USA). The images of the gels were captured using the Bio-Rad ChemiDoc™ XRS+ System (Bio-Rad, Hercules, CA, USA) and subsequently analyzed by means of the publicly available ImageJ software. All values were normalized to the HPRT1 signal. The relation between CD146 and the mesenchymal profile in breast cancer cell lines In spite of a number of studies confirming the relationship between CD146 and tumor invasiveness, poor prognosis or mesenchymal features of breast cancer cells, a recent work by de Kruijff et al. questioned the existence of an EMT-CD146 association in a group of analysed breast cancer patients (de Kruijff et al., 2018). Although these authors indeed reported CD146 as a prognostic factor for metastasis-free and overall survival in a univariable analysis, they did not find a correlation between CD146 and mesenchymal markers at the level of mRNA expression (de Kruijff et al., 2018). In order to perform an independent verification, we analysed the relation between CD146 expression and the panel of mesenchymal markers in 10 independent transcriptomic data sets of breast cancer patients. The patients' data sets selected for this analysis included 2 data sets containing breast cancer patients with confirmed lymph node negativity (lymph node negative data sets) and 8 data sets in which patients with lymph node negative and positive status were combined (non-lymph node negative data sets). As shown in Table 2, at least 6 out of 9 mesenchymal markers were significantly correlated with CD146 in the non-lymph node negative data sets, whereas in the lymph node negative data sets we found only 2 mesenchymal markers correlated with CD146 in the GSE7396 data set, and only one (vimentin) correlated with CD146 in the GSE46563 data set. In fact, vimentin was the only gene correlated with CD146 in all analysed data sets, regardless of the lymph node status. Table 2. Correlation analysis between CD146 and nine selected mesenchymal markers in 10 independent transcriptomic data sets of breast cancer patients.
White rectangle indicates lack of significance, grey rectangle indicates a significant Spearman correlation coefficient below 0.5, black rectangle indicates a significant Spearman correlation coefficient above 0.5. A Bonferroni-corrected p-value below 0.0055 was considered statistically significant. In order to verify whether the two lymph node negative data sets are different from the others, cluster analysis was performed. This approach further confirmed that the two lymph node negative data sets clustered together as being different from all the others. Thus, the results of our analysis clearly demonstrated, based on transcriptomic data, the existence of an association between CD146 expression, the mesenchymal profile and the progression of breast cancer. The effect of 5-aza-2-deoxycytidine on expression of CD146 in breast cancer cell lines In order to check whether CD146 expression is regulated at the transcriptional level by the epigenetic modifier 5-aza-2-deoxycytidine, as has been shown previously in prostate cancer cell lines (Dudzik et al., 2019), two epithelial breast cancer cell lines (MCF7 and T47D) (Dai et al., 2017) and a mesenchymal one (MDA-MB-231) (Dai et al., 2017) were cultured with and without 5-aza-2-deoxycytidine (10 μM) for 6 days, as previously described, followed by CD146 expression analysis at the mRNA and protein levels. As expected, basal expression of CD146 was low in the MCF7 and T47D cancer cell lines with epithelial characteristics, and high in the MDA-MB-231 cell line with a mesenchymal expression profile. As shown in Fig 1, CD146 expression was apparently induced in the MCF7 and T47D cells, whereas in MDA-MB-231 its high basal expression precluded a correct assessment of the expression difference between the control and 5-aza-2-deoxycytidine-treated cells. Thus, in order to verify whether the high CD146 expression in MDA-MB-231 is still further inducible, we performed an expression analysis according to the adjusted, modified PCR protocol described in the Materials and Methods section. Interestingly, in MDA-MB-231, CD146 expression was also significantly induced, regardless of the high basal level and the mesenchymal characteristics of these cells (Fig 1B). As for the protein analysis (Fig 1C), the demethylating compound significantly induced CD146 expression in all analysed cell lines. This effect was clearly visible in the epithelial lines with very low basal expression of CD146, as well as in the MDA-MB-231 cells, where the relatively high basal expression was still apparently inducible, in accordance with the data obtained at the mRNA level. Altogether, these data suggest that epigenetic silencing plays an important role in the control of CD146 expression in breast cancer cells with epithelial characteristics (MCF7 and T47D), whereas in the mesenchymal breast cancer cell line (MDA-MB-231) expression of CD146 is apparently not fully unleashed and still partially restrained by the epigenetic mechanism.
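The densitometric quantification underlying Fig 1 (band intensities measured in ImageJ and normalized to the HPRT1 loading control) reduces to the simple arithmetic sketched below; all intensity values are hypothetical.

# ImageJ-style densitometry: normalize the CD146 band intensity to the
# HPRT1 loading control, then express 5-aza-treated samples as fold
# change over untreated controls (all intensity values hypothetical).
bands = {
    "MCF7": {"cd146_ctrl": 120, "cd146_aza": 980, "hprt1_ctrl": 5000, "hprt1_aza": 5100},
    "T47D": {"cd146_ctrl": 150, "cd146_aza": 1100, "hprt1_ctrl": 4900, "hprt1_aza": 4800},
}

for line, b in bands.items():
    ctrl = b["cd146_ctrl"] / b["hprt1_ctrl"]   # normalized basal signal
    aza = b["cd146_aza"] / b["hprt1_aza"]      # normalized treated signal
    print(f"{line}: CD146 fold change after 5-aza = {aza / ctrl:.1f}x")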
The effect of 5-aza-2-deoxycytidine on induction of the mesenchymal profile To determine whether the increase in CD146 gene expression is associated with induction of a mesenchymal profile, we analysed the influence of the epigenetic modifier 5-aza-2-deoxycytidine (10 µM) on the expression of a panel of mesenchymal markers: Slug (SNAI2), Twist1 (TWIST1), Zeb1 (ZEB1), Snail (SNAI1), N-cadherin (CADHERIN 2), Vimentin (VIM), matrix metalloproteinase-2 (MMP2) and matrix metalloproteinase-9 (MMP9), and an epithelial marker, E-cadherin (CADHERIN 1). The mesenchymal markers subjected to this analysis were selected based on the expression correlation studies performed in breast cancer patients (Table 2). As shown in Fig 2A, treatment of the breast cancer cell lines with the demethylating agent resulted in gene expression changes at the mRNA level for several mesenchymal markers, such as SNAI1 and TWIST1. As for the epithelial marker, CADHERIN 1, the basal mRNA expression was high in the cell lines with epithelial characteristics and low in MDA-MB-231. Of note, in all three independent experiments, we observed an increase of CADHERIN 1 expression at the mRNA level in the MDA-MB-231 cells, regardless of the induction of mesenchymal markers (SNAI1 and TWIST1) in these cells, which are well-known transcription factors inhibiting the CADHERIN 1 gene promoter. Concerning the protein analysis, E-cadherin was not visibly induced in MDA-MB-231, in contrast to N-cadherin, whose expression was reproducibly enhanced after treatment with 5-aza-2-deoxycytidine in two independent experiments. As for vimentin, it was apparently expressed only in the MDA-MB-231 cell line and was not further induced by treatment with 5-aza-2-deoxycytidine. Undoubtedly, however, the fact that an epigenetic mechanism appears to control expression of such an important oncogene in breast cancer seems to be of high importance. To further determine whether 5-aza-2-deoxycytidine, which results in CD146 induction, may also trigger changes in the morphology of epithelial breast cancer cells and/or induce protein expression changes at the level of a single cell, we performed an immunofluorescence analysis of E-cadherin and vimentin in the MCF7 and T47D cells cultured for 6 days with and without 5-aza-2-deoxycytidine. As shown in Fig 3, we observed neither apparent changes in cell morphology nor, in accordance with the Western blot results, alterations in the E-cadherin and vimentin cell content. The analysis of CD146 gene promoter methylation in breast cancer cell lines Since the presence of a CpG island in the CD146 gene promoter has already been described and its proper location reported by our group (Kocemba et al., 2016), we subjected the CD146 promoter region, including the CpG island encompassing the transcriptional start site in exon 1, to methylation analysis using direct bisulfite sequencing (BS) of PCR products. DNA isolated from human fibroblasts and in vitro methylated DNA were bisulfite-modified and subsequently used in BS sequencing as unmethylated and methylated controls, respectively, to validate our experimental set-up. BS analysis revealed that two (MCF7 and MDA-MB-231) of the three breast cancer cell lines tested (MCF7, T47D and MDA-MB-231) displayed hypermethylation of the CpG island in the CD146 promoter area (Fig 4), whereas T47D was methylation-free. Notably, BS analysis revealed heterogeneous methylation in MCF7 and MDA-MB-231, suggesting clonal variation in the methylation pattern of the CpG island area in these cell lines.
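The read-out logic of bisulfite sequencing can be sketched as follows: unmethylated cytosines are converted and read as T, while methylated CpG cytosines remain C, so comparing each bisulfite read with the genomic reference at CpG positions yields per-clone methylation calls. The sequences and function below are hypothetical illustrations, not the actual CD146 promoter.

# Bisulfite logic: unmethylated cytosines deaminate to uracil and read
# as T after PCR; methylated CpG cytosines are protected and stay C.
# Comparing a bisulfite read with the genomic reference at CpG sites
# therefore reveals the methylation state (all sequences hypothetical).

def cpg_methylation(reference: str, bs_read: str):
    """Return {position: True if methylated} for each CpG in `reference`."""
    calls = {}
    for i in range(len(reference) - 1):
        if reference[i : i + 2] == "CG":
            calls[i] = bs_read[i] == "C"   # retained C => methylated
    return calls

ref    = "ATCGGACGTTACGA"
read_m = "ATCGGACGTTACGA"   # fully methylated clone: every CpG cytosine retained
read_u = "ATTGGATGTTATGA"   # fully converted clone: Cs read as T

for name, read in [("methylated clone", read_m), ("unmethylated clone", read_u)]:
    calls = cpg_methylation(ref, read)
    frac = sum(calls.values()) / len(calls)
    print(f"{name}: {calls} -> {frac:.0%} CpG methylation")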
Importantly, the aberrant methylation detected in the CpG island of the CD146 gene (Fig 4) was present in the promoter area encompassing exon one, the region previously reported as crucial for epigenetic silencing of gene expression (Brenet et al., 2011). DISCUSSION To our knowledge, this is the first study showing methylation of the CD146 promoter region in breast cancer cells, and concomitantly suggesting that an epigenetic mechanism may be important in the expression control of this relevant metastasis-related oncogene. In our previous paper, using prostate cancer cell lines, we showed that the CD146 gene is induced by the demethylating compound 5-aza-2-deoxycytidine; however, analysis of the CpG island region in those cells did not confirm the presence of promoter methylation (Dudzik et al., 2019). Of note, a link between epigenetics and expression of the CD146 gene was reported previously in prostate cancer patients, but the authors of that study incorrectly localized the CpG island in the CD146 gene promoter (Liu et al., 2008), and in consequence misinterpreted their results, suggesting, contrary to the dogma, that the presence of CpG island methylation correlates with an increase in CD146 gene expression in prostate cancer cells (Liu et al., 2008). This mistake was pointed out in a Letter to the Editor published in The Prostate in 2016 (Kocemba et al., 2016), which also determined the proper localization of the CpG island in the CD146 gene promoter (Kocemba et al., 2016). So far, in the literature, the relation between DNA methylation in the promoter area (regardless of the presence of a CpG island) and gene expression inactivation has been well confirmed (Han et al., 2011; Smith & Meissner, 2013). It has been verified that canonical promoter methylation is associated with the expression of imprinted genes, the process of X chromosome inactivation and tissue-specific gene regulation (Smith & Meissner, 2013; Urbano et al., 2019). On the other hand, the CpG islands observed around transcription start sites in 50% of gene promoters are methylation-free in non-cancerous cells, regardless of the expression state of the gene of interest (Jones, 2012). However, de novo hypermethylation of canonical CpG islands has been observed in cancer cells in association with inactivation of gene expression (Jones & Baylin, 2007). Although we observed CpG island methylation of the CD146 promoter in 2 out of 3 analysed breast cancer cell lines, it does not have to be the only methylated region which controls CD146 expression, since in the T47D breast cancer cell line and in the three prostate cancer cell lines published previously (Dudzik et al., 2019), CD146 expression was induced by the demethylating treatment while the analysed CpG island was methylation-free. Importantly, recent studies revealed that methylation in the upstream and downstream CpG island shores may inhibit gene transcription in cancer cells, whereas the CpG island itself can remain methylation-free (Rao et al., 2013; Irizarry et al., 2009). Moreover, in a recent publication, Skvortsova et al. revealed that methylation present in the CpG island shores, particularly in breast cancer cells, is involved in transcriptional silencing of gene expression. These data clearly confirm that promoter methylation in genes silenced in cancer does not have to be confined only to the CpG island (Skvortsova et al., 2019). Verifying exactly which part of the promoter must be demethylated to unleash CD146 expression would definitely be of high importance.
Whereas overexpressed CD146 is considered to be oncogenic in breast cancer cells, as reported by independent studies (Zeng et al., 2012; de Kruijff et al., 2018; Garcia et al., 2007; Zabouo et al., 2009; Imbert et al., 2012), the methylation-based silencing of its expression in these cells seems quite surprising. According to our hypothesis, at the beginning of the disease CD146 may be targeted for aberrant promoter methylation, or it may already be methylated as a consequence of tissue-specific epigenetic silencing, whereas the loss of methylation in advanced tumours unleashes CD146 expression, leading to metastasis. The fact that CD146-dependent induction of a mesenchymal profile has already been described in the literature (Imbert et al., 2012; Zeng et al., 2012; Zabouo et al., 2009) allows us to speculate that the increase in expression of mesenchymal-related genes may result from CD146 induction triggered by the epigenetic modifier. On the other hand, we cannot exclude that the epigenetic modifier, in parallel to the increase in CD146 expression, directly induces a mesenchymal profile in the breast cancer cell lines, as EMT induction as a consequence of 5-aza-2-deoxycytidine application has already been reported for breast cancer cells (Su et al., 2018). Definitely, more research is needed to elucidate the independent role of CD146 expression, triggered by an epigenetic modifier, in EMT induction in breast cancer cells. Interestingly, in the mesenchymal cell line (MDA-MB-231), we observed an increase in the epithelial marker E-cadherin at the mRNA level, which can be a direct consequence of the demethylating agent, since epigenetic silencing of E-cadherin expression by aberrant promoter methylation has been reported previously in cancer cells with mesenchymal characteristics, and, what is more, exactly in MDA-MB-231. Thus, it can be concluded that our results for CADHERIN 1 in MDA-MB-231 simply indicate a proper experimental setup for the demethylating treatment. It is also important to mention that Imbert et al., after overexpression of CD146 in the MCF7 cell line, reported a lack of alteration in TWIST1, SNAI1, MMP2 and MMP9 expression and an increase in CADHERIN 2 and VIM (Imbert et al., 2012), whereas the changes in the mesenchymal marker profile were completely different in our current study, most probably as a consequence of the combined action of CD146 and the demethylating agent. Undoubtedly, however, showing that expression of CD146 is controlled by an epigenetic mechanism seems to be the most important message from our study. Moreover, our analysis also suggests the coexistence of cells with methylated and unmethylated CD146 gene promoters, which indicates that alterations in CD146 gene promoter methylation may reflect the process of clonal selection during breast cancer progression. In this context, the aspect of epigenetic control of CD146 expression takes on particular significance in research centred on the application of 5-aza-2-deoxycytidine in the treatment of breast cancer patients, and should be considered in planning therapy combined with epigenetic modifiers. This is especially important since this kind of therapy for breast cancer patients is currently being investigated (Connolly et al., 2017).
Our data undoubtedly require further verification to potentially reveal the scenario of methylation changes in the CD146 gene promoter during breast cancer progression, as we cannot exclude that CD146 acts as a tumour suppressor at the initial stage of carcinogenesis, as suggested by the study of Shih and others (Shih et al., 1997), and turns into an oncogene at the advanced stage, when the transcriptional profile of cancer cells is significantly altered. Overall, our study provides a strong basis for further research on the epigenetic regulation of CD146, which can significantly contribute to novel therapies and/or the development of a DNA methylation-based assay for sensitive detection of breast cancer cells.
5,170.2
2019-12-11T00:00:00.000
[ "Biology" ]
Dielectric, electric and thermal properties of carboxylic functionalized multiwalled carbon nanotubes impregnated polydimethylsiloxane nanocomposite The dielectric, electric and thermal properties of carboxylic functionalized multiwalled carbon nanotubes (F-MWCNT) incorporated into polydimethylsiloxane (PDMS) were evaluated to determine their potential in the field of electronic materials. Carboxylic functionalization of the pristine multiwalled carbon nanotubes (Ps-MWCNT) was confirmed through Fourier transform infrared spectroscopy, and X-ray diffraction patterns for both Ps-MWCNTs and F-MWCNTs showed that the crystalline behavior did not change with the carboxylic moieties. Thermogravimetric and differential thermal analyses were performed to elucidate the thermal stability with increasing weight % addition of F-MWCNTs in the polymer matrix. Crystallization, glass transition and melting temperatures were evaluated using a differential scanning calorimeter, and it was observed that the glass transition and crystallization temperatures decreased while the temperatures of the first and second melting transitions increased with increasing F-MWCNT concentration in the PDMS matrix. Scanning electron microscopy and energy dispersive X-ray spectroscopy were carried out to confirm the morphology, functionalization, and uniform dispersion of F-MWCNTs in the polymer matrix. Electrical resistivity in the temperature range 100–300°C, dielectric loss (tanδ) and dielectric parameters (ε′, ε″) were measured in the frequency range 1 MHz–3 GHz. The measured data indicate that the aforementioned properties were influenced by increasing filler content in the polymer matrix because of the high polarization of the conductive F-MWCNTs at the reinforcement/polymer interface. Introduction Polydimethylsiloxane (PDMS) has shown good flexibility, thermal stability, electrical resistance and dielectric strength due to its cross-linking density and degree of polymerization [1]. PDMS composites are used for a wide range of applications, from sensor technology to radar-absorbing materials [2,3]. In addition, PDMS is used for pattern transfer to various substrates due to its soft and flexible nature [4,5]. During fabrication, the large expansion coefficient of the polymer induces residual stresses, which lead to residual deformation. This phenomenon causes misalignment of different structures. The misalignment of structures is usually resolved by incorporating fillers, e.g. whiskers, nanoparticles and nanotubes [6,7]. Highly conductive nanoparticle/nanotube-filled polymer matrix composites are a rapidly developing field in switchable and flexible microelectronics technology [8,9]. The dielectric properties of PDMS are modified by incorporating F-MWCNTs as a nanofiller in the PDMS matrix. Since carbon nanotubes (CNTs) possess a large aspect ratio with extraordinary thermal, mechanical and electrical properties, CNTs help in enhancing the thermal, mechanical and electromagnetic properties of the PDMS composite [10]. The focus of this research is on the fabrication of carboxylic functionalized multiwalled carbon nanotube (F-MWCNT) reinforced PDMS-based composites. These PDMS composites are designed for high thermal stability and dielectric strength.
When used as interconnects in semiconducting devices, the conducting multiwalled carbon nanotubes (MWCNTs) can route electrical signals at speeds up to 10 GHz; this is usually attributed to electron transport over long lengths without intermediate interruption [11]. In this paper, Ps-MWCNTs are functionalized for uniform dispersion in the PDMS matrix to study the effect on the electric and thermal properties of the composite. Dielectric, electrical, morphological, spectroscopic and thermal analyses are performed, and it is observed that all these properties are strongly influenced by increasing F-MWCNT content in the polymer matrix. Materials Ps-MWCNTs with 90% purity, an average length of around 30 µm and an average diameter of around 25 nm were purchased from Nanoport Co. Ltd, China. Room-temperature-vulcanized polydimethylsiloxane was purchased from Wacker, Germany, and used as received. Nitric acid, ammonium hydroxide and toluene were received from Merck. Polypropylene (PP) membranes with 0.2 µm pore size were bought from Pall Co., China. Sample preparation 2.2.1. Surface modification of Ps-MWCNTs To functionalize the Ps-MWCNTs, purification was first performed at 450°C, with a temperature rise rate of 10°C per minute, for 6 hours, followed by treatment with hydrochloric acid to remove metal traces, amorphous carbon and other impurities. In the second step, 2 g of purified MWCNTs were immersed in 68% concentrated HNO3 at room temperature. The solution was then sonicated in an ultrasonicator bath at 40 kHz and 90°C for 4 hours. Subsequently, the solution was washed with ultra-pure water five times and its pH was neutralized with ammonium hydroxide. The pH value was maintained at 5.5 and the solution was filtered with a 0.2 µm PP membrane. Finally, the F-MWCNTs were collected by drying the nanotubes in an oven at 100°C for 12 hours. The functionalization mechanism is demonstrated in figure 1b. Fabrication of F-MWCNT based PDMS composite The dispersion of Ps-MWCNTs in polymeric systems is a difficult task; the carboxylic moieties form bonds between the multiwalled carbon nanotubes and the polymer matrix. Initially, F-MWCNTs were dispersed in toluene using an ultrasonication bath for 2 hours. This was followed by dispersion in PDMS using a mechanical stirrer at 5000 rpm for 30 minutes. The F-MWNT/PDMS solution was then poured into a 6″ × 6″ × 2.5″ mold. Pre-curing of the F-MWNT/PDMS composite was carried out for 1 hour at 120°C, after which the temperature was raised to 160°C and the composite was kept at 160°C for 30 minutes. Characterizations The spectroscopic characterizations of the Ps-MWCNTs and F-MWCNTs were performed using a Perkin Elmer Fourier transform infrared (FTIR) spectrometer with KBr discs to study the functionalization, and X-ray diffraction (XRD) was carried out to confirm the crystalline nature of the MWCNTs. Thermal properties of the pristine/functionalized MWCNTs and PDMS composites were elaborated using Perkin Elmer Diamond thermogravimetric/differential thermal analysis (TG/DTA). The isothermal crystallization (Tc), glass transition (Tg) and melting (Tm) temperature responses of the PDMS composites were investigated using a Perkin Elmer differential scanning calorimeter (DSC). The surface modification and dispersion of F-MWCNTs in the host polymer matrix were analyzed using a scanning electron microscope along with energy dispersive X-ray spectroscopy (SEM/EDS, JEOL JSM 6490A).
An HP 4339B high resistance meter was used to measure the electrical properties with increasing temperature. The dielectric parameters (ε′, ε″, tanδ) were measured using an LCR bridge meter (Model HP 4284A) and an Agilent network analyzer. The XRD patterns exhibited the diffraction peaks corresponding to the graphite structure derived from MWCNTs. Furthermore, the intensities of the (002) and (100) peaks of the F-MWCNTs are much closer to those of the Ps-MWCNTs, providing evidence that the treatment does not damage the graphene layer organization. In conclusion, the thermograms confirm that the F-MWCNT/PDMS composites exhibit greater thermal stability than pristine PDMS due to the heat quenching ability of the F-MWCNTs, as is obvious in the DTA thermograms. In the DTA curves, it is clear that F-MWCNTs absorb more heat than Ps-MWCNTs due to bond cleavage of the covalently attached moieties on the surface of the MWCNTs. The DTA behavior of both the polymer matrix and the CNT-dispersed composites is completely endothermic, but the polymer absorbed additional heat due to the presence of the F-MWCNTs. Figure 5b illustrates an upward shift in the thermograms, which shows that the addition of F-MWCNTs enhances the thermal stability of the host polymer. Figure 6 presents the DSC analysis of pure PDMS and PDMS reinforced with different wt% of F-MWCNTs after heating at a rate of 10°C/min from -165°C to 250°C. The crystallization temperatures (Tc, Tc*) gradually decrease as the F-MWCNT content increases from 0.1 wt% to 0.7 wt%, due to the reduction of spherulite sites in the polymeric composite [11]. The same behavior is revealed in the melting temperature (Tm, Tm1) response of the composite specimens. Electrical, dielectric and microwave permittivity measurements The dc electrical resistivity was measured in the temperature range 100-300°C using a two-probe resistivity apparatus. The standard Arrhenius equation [12,13] was used to calculate the dc electrical resistivity (ρdc) for all fabricated composites. The dc electrical resistivity as a function of temperature is presented in figure 7(a). It is observed that the dc resistivity decreases with increasing temperature, which confirms the semiconducting behavior of the fabricated composites. It is also observed that the dc electrical resistivity decreases with increasing F-MWCNT content, with the minimum value observed for the 0.7 wt% F-MWCNT-filled composite. The dc electrical resistivity at 100°C was found in the range 0.1-1.25 × 10^12 Ω·cm. The electrical resistivity data confirm that as the F-MWCNT wt% increases, the electrical conductivity of the fabricated composite increases. The accumulation of electrostatic charge usually observed on insulating matrix surfaces is a serious problem, and a high electrical conductivity above σ = 10^-6 S m^-1 is required; F-MWCNTs, a highly electrically conducting nanofiller, were used to obtain a maximum value of electrical conductivity in the polymeric matrix. The dielectric constant (ε′) and dielectric loss (tanδ) of the various filler concentrations as a function of frequency, presented in figure 7(b, c), were calculated using standard relations [14]. The values of these two parameters were found in the ranges 31-75 and 0.092-0.31 at 100 Hz, respectively. This indicates that both parameters are influenced by increasing nanotube concentration. The observed dielectric behavior of the fabricated composites is due to the shift of electric charges from their mean or equilibrium positions at the PDMS/F-MWCNT interfaces.
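The Arrhenius treatment of the dc resistivity cited above can be sketched as follows: ln ρdc is linear in 1/T, so a straight-line fit yields the activation energy from the slope. The temperatures, resistivities, and parameter values below are hypothetical.

import numpy as np

# Standard Arrhenius form for dc resistivity: rho(T) = rho0 * exp(Ea / (kB * T)).
# A linear fit of ln(rho) against 1/T yields the activation energy Ea
# from the slope (temperatures and resistivities below are hypothetical).
KB = 8.617e-5                                        # Boltzmann constant, eV/K

T = np.array([373.0, 423.0, 473.0, 523.0, 573.0])    # 100-300 C in kelvin
Ea_true, rho0 = 0.35, 1e8                            # toy parameters
rho = rho0 * np.exp(Ea_true / (KB * T))              # simulated measurements

slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
print(f"activation energy Ea = {slope * KB:.3f} eV (true {Ea_true} eV)")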
Furthermore, the wt% increase in F-MWCNTs results in heterogeneous phases with different polarizabilities, which in turn leads to the accumulation of different charges at the interfaces. As a result, these parameters decrease with increasing frequency [15,16]. The effect of microwave frequency on the dielectric parameters was measured in the frequency range 1 MHz-3 GHz. The behavior of the dielectric parameters from 0 to 3 GHz for P0-P4 is presented in figures 8 and 9. An effective decrease in the measured values is observed for both of these parameters at 1 GHz. The permittivity originates at the interfaces of the composites due to orientation polarization, space-charge-induced polarization, and electrostatic and electronic polarization. The polarization is influenced by the resonance generated at higher frequencies; in particular, the microwave frequency range leads to electronic polarization at the interfaces [17]. The negative permittivity values of the composites are attributed to quantum dot excitation at the interfaces, as explained theoretically [18]. Conclusion The wt% increment of nanotube content in the PDMS matrix influences and improves the dielectric, electrical and thermal behavior of the fabricated PDMS/F-MWCNT composites. The remarkable enhancement in the thermal stability and heat quenching capability of PDMS with the incorporation of F-MWCNTs is clearly observed in TG/DTA. The reduction of Tc and Tg, and the improvement in Tm, are confirmed by the DSC analyses. The uniform dispersion of F-MWCNTs in PDMS is obvious from the SEM images. The XRD results corroborate that the crystalline nature does not change with the carboxylic functionalization of the Ps-MWCNTs. An effective increase is observed in the dc electrical conductivity with increasing wt% of nanofiller content. The decrease in electrical resistivity with temperature and in the dielectric parameters with frequency confirms the semiconducting behavior of the fabricated composites. The dielectric parameters are influenced by increasing F-MWCNT incorporation in PDMS due to the interfacial polarization between the filler and the matrix. These polymeric composites resonate due to atomic, electrostatic and electronic polarization at 3 GHz.
2,520
2013-06-10T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
In silico design of a T-cell epitope vaccine candidate for parasitic helminth infection Trichuris trichiura is a parasite that infects 500 million people worldwide, leading to colitis, growth retardation and Trichuris dysentery syndrome. There are no licensed vaccines available to prevent Trichuris infection, and current treatments are of limited efficacy. Trichuris infections are linked to poverty, reducing children's educational performance and the economic productivity of adults. We employed a systematic, multi-stage process to identify a candidate vaccine against trichuriasis based on the incorporation of selected T-cell epitopes into virus-like particles. We conducted a systematic review to identify the most appropriate in silico prediction tools for predicting major histocompatibility complex class II (MHC-II) T-cell epitopes. These tools were used to identify candidate MHC-II epitopes from predicted ORFs in the Trichuris genome, selected using inclusion and exclusion criteria. Selected epitopes were incorporated into Hepatitis B core antigen virus-like particles (VLPs). Bone marrow-derived dendritic cells and bone marrow-derived macrophages responded in vitro to VLPs irrespective of whether the VLP also included T-cell epitopes. The VLPs were internalized and co-localized in the antigen presenting cell lysosomes. Upon challenge infection, mice vaccinated with the VLPs+T-cell epitopes showed a significantly reduced worm burden and mounted Trichuris-specific IgM and IgG2c antibody responses. The protection of mice by VLPs+T-cell epitopes was characterised by the production of mesenteric lymph node (MLN)-derived Th2 cytokines and goblet cell hyperplasia. Collectively, our data establish that a combination of in silico genome-based CD4+ T-cell epitope prediction, combined with VLP delivery, offers a promising pipeline for the development of an effective, safe and affordable helminth vaccine. Introduction Trichuriasis, caused by the whipworm Trichuris trichiura, is one of the most widespread soil-transmitted helminths (STH) in the world [1]. Global mass drug administration (MDA) programmes are being implemented, but cure rates are low, repeated treatments are costly and may prevent the development of acquired immunity. Further, the existence of drug-resistant parasites is a constant concern [2-4]. Using the mouse model of human trichuriasis, Trichuris muris excretory/secretory (ES) products [5], ES fractions [6], extracellular vesicles (EVs) [7] and, more recently, T. muris whey acidic protein [8], in the context of the adjuvant alum, have shown considerable potential in a number of pre-clinical protection trials. Despite these successes, developing a vaccine based on native antigens is associated with many manufacturing challenges, including cost, time consumption, difficulties in purifying large quantities of worm antigens and control over differences between batches [9,10]. The advent of the genome era has provided alternative strategies for vaccine development [11]. For example, the reverse vaccinology (RV) approach combines genome information with immunological and bioinformatics tools to overcome some of the limitations of conventional methods of screening vaccine candidates [12-14]. These observations, combined with the economic challenges of producing a low-cost vaccine, prompted us to examine the potential of using virus-like particles (VLPs) as a scaffold for the presentation of predicted Trichuris MHC-II epitopes.
The Hepatitis B core protein (HBc) has been widely used as a VLP: it forms stable self-assemblies which can accommodate T-cell epitopes, is cheap to produce and is safe for human use [15,16]. Structural analysis has shown that each HBc monomer forms helical structures which protrude as spikes from the capsid [17]. Antigens, in the form of short T-cell epitopes or whole globular domains, can be inserted at the tip of each spike. Designing vaccines based on 'multi-stage' antigens, which are expressed at different stages of infection, has shown promising results against several complex pathogens, such as M. tuberculosis and Plasmodium [18,19]. Further, CD4+ Th2 cells play essential roles in the development of protective immunity against Trichuris spp [20]. The principal objective of this study was therefore to develop a novel MHC-II T-cell epitope-based vaccine predicted from multi-stage Trichuris proteins, which would induce Th2 protective immunity. To achieve this aim, first, a systematic review was performed to select the optimal MHC class II in silico prediction tools. Second, potential Trichuris MHC-II T-cell epitope vaccine candidates obtained from the Trichuris genome were identified using the selected in silico prediction tool. Third, these epitopes were produced in a commercially viable manner by fusing the epitopes into the hepatitis B core antigen (HBc-Ag) virus-like particle (VLP) vaccine delivery system. These VLP+T-cell epitope vaccine candidates were then tested in vitro for their ability to activate antigen presenting cells (APCs). Finally, in vivo experiments were conducted to test the protective capacity of VLPs expressing different Trichuris T-cell epitopes using T. muris infection of mice. Collectively, the results of this research represent the first significant progress towards identifying a novel, epitope-based vaccine for trichuriasis. The IEDB and NetMHC-II 2.2 tools exhibited similarly high levels of sensitivity for predicting epitopes with strong affinities Based on a list of search terms (S1 Table) used on Google and other websites (S2 Table), 88 servers that predict T-cell epitopes based on MHC class I and II binding were identified (S3 Table). Of these 88, only 48 tools could predict MHC-II epitopes. Since our primary focus was MHC-II T-cell epitope prediction, only tools with that functionality were further evaluated using the inclusion criteria (Fig 1A). Five tools met the inclusion criteria: IEDB, SYFPEITHI, NetMHC-II 2.2, Rankpep and ProPred. These tools were then scanned for the ability to predict MHC-II T-cell epitopes for two mouse alleles, I-Ab and I-Ad. The ProPred tool was excluded from further analysis because it only predicted HLA-DR binding sites. The four epitope prediction tools that met all the selection criteria were subsequently evaluated using an epitope training set to calculate the sensitivity with which they could predict MHC-II T-cell epitopes. Using the epitope training set (S4 Table), it was observed that the IEDB and NetMHC-II 2.2 tools had high levels of sensitivity (~78.00%) for predicting MHC-II T-cell binding epitopes with high affinities, while Rankpep and SYFPEITHI exhibited low sensitivity (10.61% and 8.33%, respectively). Collectively, the data indicate that the best prediction tools across all MHC-II T-cell prediction servers considered in this study are IEDB and NetMHC-II 2.2.
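The sensitivity screen used to rank the prediction tools reduces to the standard TP/(TP + FN) calculation over the epitope training set, as sketched below; the training-set size and per-tool hit counts are hypothetical numbers chosen only to mirror the reported percentages.

# Sensitivity of an epitope prediction tool on a training set of known
# MHC-II binders: TP / (TP + FN), i.e. the fraction of true binders the
# tool also calls as binders (all predictions below are hypothetical).

def sensitivity(true_binders, predicted_binders):
    tp = len(true_binders & predicted_binders)
    fn = len(true_binders - predicted_binders)
    return tp / (tp + fn)

known = {f"pep{i}" for i in range(1, 133)}               # 132-peptide training set
tools = {
    "IEDB":      {f"pep{i}" for i in range(1, 104)},     # recovers 103/132
    "NetMHC-II": {f"pep{i}" for i in range(1, 104)},
    "Rankpep":   {f"pep{i}" for i in range(1, 15)},      # recovers 14/132
    "SYFPEITHI": {f"pep{i}" for i in range(1, 12)},      # recovers 11/132
}
for name, pred in tools.items():
    print(f"{name:10s} sensitivity = {sensitivity(known, pred):.2%}")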
Data acquisition and identification of potential vaccine component proteins

The stichosome, which forms the majority of the whipworm anterior region, is thought to release ES products which trigger the host immune response and induce host immunity after infection [7,21]. Research conducted by Dixon et al. [22] and Else et al. [23] showed that antibody recognition of high molecular weight proteins in both T. muris adult and larval ES correlated with resistance to T. muris infection. To increase the chances of the identification of antigens which induce a strong immune response, secreted and surface-exposed proteins associated with the anterior region of L2, L3, male and female adult worms were selected for this study [24]. Predicted ORFs from the 85-Mb genome (~11,004 protein-coding genes) of T. muris and the 73-Mb genome (~9,650 protein-coding genes) of T. trichiura [24] were scanned for potential vaccine candidates; 637 proteins were selected. From this subset, only proteins that possess signal peptides and do not exhibit transmembrane domains were included, reducing the total to a sample of 156 proteins. Full-length sequences of the Trichuris proteins were obtained from the Universal Protein Resource (UniProt) database http://www.uniprot.org/ in FASTA format.

Elimination of closely homologous mouse and human proteins

To eliminate potential autoimmune reactions when tested in mice and humans, the 156-protein subset was checked to determine homology with human and mouse proteins using the basic local alignment search tool (BLAST). All proteins with any degree of homology with humans or mice were excluded, leaving 60 candidate proteins. Of these, only proteins upregulated in the anterior region of L2, L3 and adult worms were selected based on their transcript expression level. This criterion was based on high-throughput transcriptome data generated from the RNA of T. muris and Gene Ontology (GO) term enrichment analyses; transcriptional upregulation of a particular protein refers to a ≥10 log2 (normalised read count) transcript expression level [24]. Implementing this criterion, 27 proteins were selected for MHC-II T-cell epitope prediction (Fig 1B).

Prediction of Trichuris MHC-II T-cell binding epitopes

All 27 selected proteins were screened to predict MHC class II T-cell epitopes using the IEDB (consensus method) prediction tool [25]. The analysis was carried out to predict the binding affinity to the MHC class II allele of the I-Ab mouse strain and the 11 most prevalent human class II HLA allele supertypes [25][26][27][28]. The consensus method was used to select only those peptides with a low median percentile rank according to three different prediction methods, to reduce the chance of failure during prediction. To cover common global alleles, peptides were selected based on their ability to bind to at least three different human alleles.

Conservation and allergens

To assess how well the predicted MHC-II T-cell peptides were conserved within the T. trichiura genome, the IEDB conservancy analysis tool was used. This tool calculates the degree of conservancy (i.e. similarity) of a peptide within a specified protein sequence [29]. Only peptides that were >70% conserved with at least one homologous T. trichiura protein were selected for further analysis. Of the 219 MHC-II T-cell peptides, only 33 met these criteria.
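The conservancy filter just applied can be sketched in a few lines: for each peptide, compute the best percent identity over all same-length windows of each homologous T. trichiura protein, and keep the peptide if that value exceeds 70%. The sequences below are toy placeholders, and the sliding-window identity is only an approximation of the IEDB conservancy tool's calculation.

```python
# Sketch of the >70% conservancy filter applied to the 219 candidate peptides.

def percent_identity(peptide, window):
    """Percent of positions at which two equal-length sequences agree."""
    matches = sum(1 for a, b in zip(peptide, window) if a == b)
    return 100.0 * matches / len(peptide)

def best_conservancy(peptide, homologue):
    """Best percent identity of `peptide` against all windows of `homologue`."""
    n = len(peptide)
    if len(homologue) < n:
        return 0.0
    return max(percent_identity(peptide, homologue[i:i + n])
               for i in range(len(homologue) - n + 1))

def conserved_peptides(peptides, homologues, cutoff=70.0):
    """Keep peptides conserved (>cutoff%) in at least one homologous protein."""
    return [p for p in peptides
            if any(best_conservancy(p, h) > cutoff for h in homologues)]

# Illustrative usage with toy sequences:
peps = ["FVKLMNPQRSTVWYA", "AAAAAAAAAAAAAAA"]
homs = ["GGFVKLMNPQRSTVWYAGG"]
print(conserved_peptides(peps, homs))  # -> ['FVKLMNPQRSTVWYA']
```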
The 33 MHC-II peptides were then assessed for the prediction of IgE epitopes and allergenic potential using the AllerTOP v.2.0 server http://www.ddg-pharmfac.net/AllerTOP/ [30]. The final set of 10 Trichuris MHC-II T-cell epitopes containing 33 overlapping peptides were predicted to have no allergenic potential. The 10 epitopes were further triaged to a final four epitopes based on solubility once expressed on HBc-Ag VLPs. A flow diagram summarising the approach used is shown in Fig 1B. (Fig 1 caption, partial: (B) The reverse vaccinology approach used to identify potential vaccine candidates (MHC-II T-cell epitopes) from the T. muris genome; the number on each arrow represents the number of proteins or epitopes selected for the next step. (C) List of the 4 Trichuris MHC-II T-cell epitopes which have potential as vaccine candidates. https://doi.org/10.1371/journal.ppat.1008243.g001)

Purification and assembly of VLPs expressing Trichuris T-cell epitopes

Four HBc-Ag fusion proteins were designed, incorporating each predicted T-cell epitope into the major immunodominant region. The construct also included a Strep tag at the C-terminus for affinity purification (S1 Fig). A second round of purification using size exclusion chromatography was performed to produce a homogeneous population of assembled VLPs. HBc-Ag preparations expressing Trichuris MHC-II T-cell epitopes were pure by SDS-PAGE (S1A-S1E Fig). The endotoxin levels in all purified VLPs used in this study were <0.2 endotoxin units (EU) as assessed using the ELISA-based endotoxin detection assay (data not shown).

VLPs expressing Trichuris T-cell epitopes induced the production of proinflammatory cytokines in vitro and were internalised and co-localized in APCs

BMDCs stimulated with different VLPs, irrespective of whether they include T-cell epitopes, produced high levels of IL-6 (Fig 2A) and TNF-α (Fig 2B) at levels equivalent to LPS- and ES-stimulated BMDCs. Similarly, all VLPs activated BMDMs, inducing the secretion of high levels of both IL-6 (Fig 2F) and TNF-α (Fig 2G). BMDC-derived IL-10 showed a different pattern. Here VLPs bearing CBD241-257 evoked less IL-10 from BMDC than the control VLP (Fig 2C). The in vivo significance of this is not clear, but this may be desirable in order to avoid the induction of regulatory responses. In this study the VLPs bearing the 4 different T-cell epitopes were screened against BMDM and BMDC individually. Whether any synergistic innate cell stimulation is apparent when all four epitope-bearing VLPs are added together would be interesting to explore in the future. To visualise VLP internalisation by APCs, BMDCs and BMDMs were stimulated with or without fluorescein-conjugated VLPs. Images of individual cells taken by merging the brightfield (BF) and FITC (green) channels demonstrated that the fluorescein-conjugated VLPs were internalised by both APCs (Fig 2D & 2I). To confirm that the VLPs were co-localized within the APC lysosome and not at the cell surface, BMDCs and BMDMs were stained with the lysosome-specific LysoTracker dye. Cells were then subsequently stimulated with or without fluorescein-conjugated VLPs. Merged images of the brightfield (BF), FITC (green), and LysoTracker (red) channels revealed that the fluorescein-conjugated VLPs accumulated in the lysosome compartment of the BMDCs (Fig 2E) and BMDMs (Fig 2J).
Immunization of mice with VLPs expressing Trichuris T-cell epitopes induced a significant reduction in worm burden following challenge infection

The protective capacity of the 4 T-cell epitopes was assessed in the T. muris-mouse accelerated expulsion model [31] (Fig 3A), in which significant reductions in worm burden and elevations in parasite-specific antibodies and cytokines can be readily detected at day 14 post infection. Mice vaccinated with the four pre-mixed VLPs+T-cell epitopes showed a statistically significant (P<0.01) reduction in worm burden by day 14 p.i. compared to the native VLP (HBc-Ag). In comparison, ES/Alum immunised mice harboured no parasites (Fig 3B).

Immunization with VLPs expressing Trichuris T-cell epitopes induced humoral immunity following challenge infection

To evaluate T. muris-specific serum antibody responses induced by vaccination with VLPs+T-cell epitopes, parasite-specific IgM, IgG1, and IgG2c serum antibody levels were determined at d14 p.i.

(Fig 2 caption, partial: BMDCs and BMDMs were stimulated with the VLPs (HBc-Ag, HBc-H112-128, HBc-CBD1243-1259, HBc-CBD241-257, HBc-CLSP143-158 and HBc-CLSP398-416) and with 50 μg/ml ES and 0.1 μg/ml LPS as positive controls. Unstimulated BMDCs and BMDMs served as negative controls. Supernatants were harvested after 24 hours for IL-6, TNF-α and IL-10 cytokine analyses measured by LEGENDplex or CBA. The bars represent mean ± SEM. Statistical analyses were carried out using the Kruskal-Wallis test (multiple comparisons). Significant differences between groups are represented by * (P≤0.05) with a line. Chart bars represent BMDCs and BMDMs grown from three individual mice from one representative experiment of two separate experiments. (D) Representative images of fluorescein-conjugated VLP internalisation in the BMDCs and BMDMs (I). BMDCs and BMDMs at 1 × 10^6/ml were incubated with 10 μg/ml fluorescein-conjugated VLP (HBc-Ag and HBc-CLSP398-416) for 24 hours. As a negative control, unstimulated BMDCs and BMDMs were examined. Cell internalisation was determined by Amnis ImageStreamX cytometer compared to unstimulated BMDCs and BMDMs. Images shown, from left to right, show individual brightfield images (BF) in the white channel, fluorescent-labelled stimulus (FITC) in the green channel and the combination of both BF/FITC merged channels. The internalisation mean absolute deviation (MAD) is included above its images; a positive MAD value represents internalisation, and negative values represent poor internalisation. (E) Representative images of fluorescein-conjugated VLP co-localization in the BMDCs and BMDMs (J). BMDCs and BMDMs at 1 × 10^6/ml were stained with LysoTracker to visualise the cellular lysosome compartment and subsequently stimulated with 10 μg/ml fluorescein-conjugated VLP (HBc-Ag and HBc-CLSP398-416) for 24 hours. As a negative control, unstimulated BMDCs and BMDMs were examined. Intracellular co-localization was determined by Amnis ImageStreamX cytometer. Images shown, from left to right, show individual brightfield images (BF) in the white channel, fluorescent-labelled stimulus (FITC) in the green channel, stained lysosome (LysoTracker) in the red channel and the combination of both FITC/LysoTracker merged channels. The similarity bright detail score (SBDS) from the IDEAS quantitative co-localization analysis is included above its image; SBDS values around 1 represent co-localization, and values around 0 represent poor co-localization. Scale bars represent 10 μm. https://doi.org/10.1371/journal.ppat.1008243.g002)

Following vaccination and infection, VLPs+T-cell epitopes and ES/Alum vaccinated
mice had statistically significantly higher levels of parasite-specific IgM (Fig 3C) and IgG2c (Fig 3E) compared to the control PBS and native VLP (HBc-Ag) injected mice. However, high levels of parasite-specific IgG1 were only detected in the serum of mice vaccinated with ES/Alum following Trichuris infection (Fig 3D). There were no or very low levels of parasite-specific IgM, IgG1, and IgG2c detected in the serum of native VLP (HBc-Ag) and PBS/alum injected mice at day 14 p.i., as shown in Fig 3F-3H. Similarly, statistically significantly higher levels of VLPs+T-cell epitopes-specific IgM (Fig 3F), IgG1 (Fig 3G) and IgG2c (Fig 3H) were produced following Trichuris infection of mice vaccinated with VLPs+T-cell epitopes, compared to mice given native VLP (HBc-Ag) or PBS. Notably, mice vaccinated with ES/Alum also produced high levels of VLPs+T-cell epitopes-specific IgM following Trichuris infection, as shown in Fig 3F.

Immunisation of mice with VLPs expressing Trichuris T-cell epitopes induces a mixed Th1/Th2 immune response following challenge infection

To analyse the cellular immune responses at the primary site of adaptive immune cell activation following T. muris infection [32], MLN cells of mice vaccinated with VLPs+T-cell epitopes were re-stimulated in vitro with Trichuris ES. The MLN is the most appropriate lymph node to assay for the presence of antigen-specific cytokines given that it drains the site of infection. Supernatants were assayed for Th2 cytokine (IL-4, IL-5, IL-9 and IL-13), Th1/Th17 cytokine (IL-2, IFN-γ and IL-17), proinflammatory cytokine (IL-6 and TNF-α), and anti-inflammatory cytokine (IL-10) production by CBA (Fig 4). Collectively these data support the in vivo immunogenicity of the novel VLPs+T-cell epitope vaccine, evidencing the presence in the MLN of a mixed Th1/Th2 immune response, and are in keeping with the worm expulsion and antibody data. Proximal colon goblet cells were quantified in mice vaccinated with ES/Alum, PBS, VLP (HBc-Ag) and VLPs+T-cell epitopes at day 14 post T. muris infection (Fig 4E). Interestingly, VLPs+T-cell epitopes and ES/Alum vaccinated mice exhibited significantly elevated goblet cell hyperplasia (P<0.05) compared to mice injected with PBS or native VLP (HBc-Ag) (Fig 4F). Control vaccinated mice (injected with VLP only) had shorter crypts than the vaccine-protected groups (mice vaccinated with VLP plus T-cell epitopes and mice vaccinated with ES/Alum) (Fig 4G & 4H). Thus, these data provide no evidence for increased epithelial cell turnover in vaccine-mediated immunity, which would have predicted shorter crypts in the more resistant mice.

(Fig 3 caption, partial: Day 14 p.i. sera were titrated against T. muris ES antigens to assess parasite-specific IgM (C), IgG1 (D), and IgG2c (E) levels in VLPs+T-cell epitopes, VLP (HBc-Ag), PBS and ES/Alum vaccinated mice by ELISA (reading at 405 nm). Day 14 p.i. sera were titrated against premixed VLPs+T-cell epitopes antigens to assess VLPs+T-cell epitopes-specific IgM (F), IgG1 (G), and IgG2c (H) levels in VLPs+T-cell epitopes, VLP (HBc-Ag), PBS and ES/Alum vaccinated mice. Statistical analyses were carried out using the Kruskal-Wallis test (multiple comparisons compared to the VLP (HBc-Ag)). Significant differences between groups are represented by (* P≤0.05, ** P≤0.01, **** P≤0.0001) with a line. Results are shown as mean ± SEM. n = 6 mice per group.
This experiment was repeated two times, and the ELISA results shown here are representative of the two experiments. https://doi.org/10.1371/journal.ppat.1008243.g003)

It is possible therefore that the protective mechanism induced by ES in alum differs from that induced by the VLP plus T-cell epitope vaccine; alternatively, both IgE and mMCPT-1 may simply reflect the strength of the Th2 immune response.

Discussion

Trichuris trichiura is one of the most common human STH parasites and remains a major health concern for humans worldwide [1]. A number of pre-clinical vaccines against trichuriasis have been reported, containing whole Trichuris antigens or fractions [6,7,22]. However, developing a vaccine for Trichuris based on native antigens has several limitations, such as cost, time consumption and difficulty in purifying large quantities of worm antigens. It is also challenging to control for differences between batches in order to develop a commercially stable vaccine [10]. An alternative strategy embraces the use of informatics to predict and assess MHC-II T-cell epitopes using criteria to maximize their in vivo protective potential.

Identification of novel Trichuris MHC-II T-cell epitopes as promising vaccine candidates

The expulsion of T. muris is known to be CD4+ T-cell dependent [40]. Thus we focussed our study on identifying protective CD4+ T-cell epitopes. Initiation of an antigen-specific immune response requires presentation of antigenic peptides to CD4+ T-cells in the context of MHC class II molecules [41]. Selection of appropriate MHC class II binding peptides is therefore a critical first step in developing an epitope-based vaccine. There are more than 80 computer-based prediction tools for identifying peptides that bind to MHC class I and II molecules, but not all are equivalent; some epitope prediction tools may fail to predict all significant epitopes [42]. The performance of in silico prediction tools is affected by several factors. For example, MHC-II CD4+ T-cell epitope prediction tools have much lower accuracy than MHC-I tools because the MHC-I binding groove is closed, while the MHC-II groove is open at both ends [42,43]. Also, the use of a limited training dataset to evaluate the tools may affect performance [25,44,45]. Further, the low performance of MHC class II prediction tools may not only be due to poor algorithm performance; the genetic diversity of human populations presents additional challenges [46]. Thus, choosing the best bioinformatics tool to predict MHC class II T-cell epitopes is critical when designing epitope-based vaccines [47,48]. This study presents a systematic review of existing MHC-II restricted T-cell epitope prediction tools and evaluated four tools in order to establish the most appropriate bioinformatics tools currently available for predicting Trichuris MHC-II T-cell epitopes. The IEDB and NetMHC-II 2.2 tools achieved similarly high levels of sensitivity for predicting binding epitopes with high affinities, while Rankpep and SYFPEITHI exhibited low sensitivity. Each tool applies different methods of prediction; the IEDB tool uses a quantitative consensus method that combines the strengths of various methods [25]; NetMHC-II 2.2, another quantitative tool, uses an NN-align algorithm and weight matrix [28]; and both SYFPEITHI [49] and Rankpep [50] are qualitative tools that use motif PSSMs.
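To illustrate the qualitative, matrix-based scoring style of SYFPEITHI and Rankpep, the sketch below scores 9-mer binding cores with a toy position-specific scoring matrix (PSSM). The matrix weights and peptide are invented for illustration and do not reproduce either tool's trained matrices.

```python
# Toy PSSM scoring: anchor positions carry large positive weights, and
# unfavourable residues at key positions carry negative weights.

PSSM = {
    0: {"F": 10, "Y": 8},          # primary anchor position
    4: {"L": 6, "I": 6},           # secondary anchor
    8: {"K": 4, "R": 4, "D": -3},  # negative weight penalises poor residues
}

def pssm_score(core):
    """Score a 9-mer core; unlisted residue/position pairs contribute 0."""
    return sum(PSSM.get(i, {}).get(aa, 0) for i, aa in enumerate(core))

def best_core(peptide):
    """Best-scoring 9-mer register within a longer (e.g. 15-mer) peptide."""
    cores = [peptide[i:i + 9] for i in range(len(peptide) - 8)]
    return max(cores, key=pssm_score)

peptide = "AFGHLLSTKQWERTA"
core = best_core(peptide)
print(core, pssm_score(core))  # -> FGHLLSTKQ 16
```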
All the tools were 'user-friendly', but SYFPEITHI was limited in its coverage of mouse and human MHC-II alleles. The IEDB tool has features that the other three do not, including the ability to input protein sequences in National Centre for Biotechnology Information (NCBI) database formats, seven different prediction methods and an easy method of downloading the prediction output into an Excel spreadsheet. Several studies have compared the performance of different MHC class II peptide binding prediction tools [51-53], but the comparison presented in this study is different for two main reasons. First, the MHC-II prediction tools were selected in a systematic way using inclusion/exclusion criteria. Second, a new 'test' dataset which had not been used to build or evaluate IEDB was used. For example, Zhao and Sher (2018) evaluated the MHC-II prediction tools hosted on the IEDB analysis resource server using newly available, untested data of both synthetic and naturally processed epitopes. Among the 18 predictors that were benchmarked, NetMHC-II outperformed all other tools, including NetMHCIIpan and the consensus method, for both MHC class I and class II predictions [54]. Furthermore, Andreatta et al. (2018) created an automated platform to benchmark six commonly used MHC class II peptide binding prediction tools using 59 new datasets. Their evaluation suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB (consensus) tool [55]. Despite differences in the datasets used for comparison, these studies agree with the comparison conducted in our study, which found that the IEDB (consensus) and NetMHC-II 2.2 (ANN) tools are among the best MHC-II prediction tools. However, NetMHCIIpan could not be included in this study because it did not meet the inclusion criteria. Collectively, we recommend the use of the IEDB and NetMHC-II 2.2 prediction tools in any MHC class II epitope prediction study to reduce the experimental cost of identifying epitopes. However, the output of these tools needs to be carefully evaluated in vitro and in vivo before they are used to bring an epitope-based vaccine to trial. Since the advent of immunoinformatics tools for prediction of antigenic epitopes and protein analysis, several VLP-based vaccines have been engineered to carry foreign antigens and have proven to be highly immunogenic [56]. To our knowledge, the data presented in this study are the first to identify novel Trichuris MHC-II T-cell epitopes as potential vaccine antigen candidates. The framework used here to identify potential epitope vaccine candidates within the T. muris and T. trichiura genomes could be used in the future to identify potential vaccine candidates for other parasite species. The final set of Trichuris MHC-II T-cell epitope vaccine candidates were derived from chitin-binding domain-containing proteins and chymotrypsin-like serine proteases (Fig 1C). Chitin-binding domain genes are highly expressed at different life stages in many nematodes, including the parasites T. trichiura, Ascaris lumbricoides and Ancylostoma ceylanicum and the free-living nematode Caenorhabditis elegans [57][58][59][60]. In particular, these proteins are thought to be associated with eggshell formation and early development at the single-cell stage [61,62].
Furthermore, given that, in addition to T. muris, more than 40 other helminth species express high levels of chymotrypsin-like serine proteases, these proteins may also be promising vaccine candidates for other helminths [24,63]. Chymotrypsin-like serine proteases are thought to play central roles in either the invasion process or modulation of the host immune response to enhance parasite survival [64][65][66][67]. In addition, numerous publications, using preclinical models of parasitic infection, have noted that protective immunity can be consistently achieved using helminth protease molecules [68]. For example, vaccinating BALB/c mice with the whole recombinant serine protease of T. spiralis prior to challenge infection led to a reduction in worm burden and induced a mixed Th1/Th2 immune response [69][70][71]. Furthermore, Shears et al. [6] showed that vaccinating mice with the T. muris ES fraction containing serine proteases induced high parasite-specific antibody responses. Remarkably, all the proteins selected in this study have been identified within the most immunogenic fractions of T. muris ES following vaccination of mice [6].

Identification of a novel VLP-based vaccine against trichuriasis

All VLP recombinant proteins stimulated a non-specific inflammatory response characterized by the secretion of high levels of proinflammatory cytokines. Furthermore, by identification of intracellular co-localization with lysosomes, VLPs were shown to be taken up by both BMDCs and BMDMs. These results are consistent with those of Serradell et al. [72] and Wahl-Jensen et al. [73], who examined the activation of APCs in response to VLPs. They also showed that oral vaccination with the recombinant protein protected mice from influenza infection and generated protective humoral and cellular immunity. These results suggest that the VLPs are well placed to act as delivery systems to drive immune responses. Further, the data raise the exciting prospect of modifying the VLPs using adhesion molecules and/or cytokines co-displayed on the VLP surface, in order to target the epitopes to specific APC subsets, thus enhancing the activation of antigen-specific T-cells [74]. Remarkably, upon challenge with T. muris infection, mice vaccinated with 50 μg of premixed VLPs+T-cell epitopes (HBc-CBD1243-1259, HBc-CBD241-257, HBc-CLSP143-158, and HBc-CLSP398-416), in the absence of any additional adjuvant, showed a significantly reduced worm burden. Parasite-specific IgM and IgG2c were detected in the sera of mice vaccinated with the VLPs+T-cell epitopes. Levels were equivalent to the control ES in alum-vaccinated mice, and significantly higher than in control vaccinated and infected mice. These results suggest that these VLPs+T-cell epitopes are antigenic and can boost antibody responses sufficiently to recognize specific small peptides in the T. muris ES. Importantly, analysis of the serum from immunised mice showed that vaccination with the VLPs+T-cell epitopes elicited high levels of IgM, IgG1 and IgG2c to the VLPs+T-cell epitope recombinant protein pool (HBc-CBD1243-1259, HBc-CBD241-257, HBc-CLSP143-158, and HBc-CLSP398-416), with limited recognition of the native VLP (HBc-Ag) protein. In keeping with these data, the malaria VLP-based vaccine (Malarivax) [75][76][77], composed of HBc-Ag expressing P. falciparum T-cell and B-cell epitopes identified from the circumsporozoite protein, developed long-lasting immunity, elicited a CD4+ T-cell immune response, and is currently undergoing clinical trials [15,78,79].
The results of these studies support some critical insights into VLP-based vaccines, including confirming that an HBc virus-like particle, in particular, is an excellent delivery system for developing potential vaccine candidates for parasites. A proportion of mice vaccinated with VLPs+T-cell epitopes produced detectable levels of MLN-derived Th2 cytokines IL-5, IL-9, and IL-13 in response to re-stimulation in vitro with Trichuris ES. The Th1 cytokine IFN-γ was also significantly elevated above levels detected in control mice. These data indicate that vaccine-induced protective immunity is characterized by a mixed Th1/Th2 immune response, as has been previously reported [21,80]. Similarly, Gu et al. [80] reported that the protective immunity to Trichinella spiralis infection induced by vaccination with CD4+ T-cell epitopes was associated with both Th1 and Th2 cytokines. Whilst the mechanism by which vaccination protects mice from T. muris infection remains unclear, this study reveals that vaccination of mice with VLPs+T-cell epitopes or ES/Alum promoted a marked goblet cell hyperplasia [81]. Goblet cells, and the mucins they produce, have been implicated in Th2-mediated defence in mice resistant to a primary T. muris infection [82,83]. However, it remains to be determined whether the goblet cell hyperplasia seen here in vaccinated mice is simply a Th2 correlate or is functionally important in the protection observed. Future work will include determining the effector mechanism(s) at play in vaccine-mediated protective immunity. Using a whole ES vaccine preparation, Dixon et al. previously hypothesised that the effector response is distinct from that seen in a primary infection [21]. That study reported no elevation in epithelial cell turnover in vaccine-protected mice, but rather an accumulation of cell-bound IgG1 in the lamina propria. Prior to elucidating mechanism, however, we aim to improve the efficacy of our VLPs+T-cell epitope-based vaccine, as the 50% protection offered by the current preparation is a limitation. We will adopt a number of strategies. We will assess each VLPs+T-cell epitope in vivo for its ability to drive Th1, Th2, Th17 or regulatory T-cells and exclude any epitope that might promote Th2-antagonistic Th1, Th17 or regulatory responses. We also aim to modify our VLP delivery platform by incorporating targeting antibodies which will preferentially deliver the T-cell epitopes to Th2-inducing antigen presenting cells [72]. In summary, the current study describes the development and efficacy of a novel epitope-based vaccine against trichuriasis. VLPs expressing different Trichuris MHC-II T-cell epitopes, predicted from chitin-binding domain-containing proteins and chymotrypsin-like serine proteases, were shown to promote protective immunity in vivo. Collectively, given the right combination of immunoinformatics and immunogenicity screening tools, epitope-based vaccines will undoubtedly limit the cost and effort associated with bringing a Trichuris vaccine to trial.

Search strategy

A protocol was designed to identify the bioinformatics tools that can predict MHC-II T-cell epitopes in accordance with the well-defined Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [84]. The search was limited to the English language and used the search terms shown in S1 Table.
A list of the search terms was first used in December 2015 on Google and other websites (S2 Table) to screen for MHC class I and II in silico prediction tools. The number of citations (>200), number of publications (>200), online availability, last update and community were considered to determine whether the tools would be selected for further analysis. The number of publications and citations for each tool were obtained from Google Scholar. In the case of duplicate citations, the highest number of citations was used. A flow diagram of the systematic review screening process for MHC-II T-cell bioinformatics prediction tools is shown in Fig 1A.

Construction of an epitope training set

To evaluate the performance of the tools, a literature search for MHC-II CD4+ T-cell peptide binding datasets was performed using the search terms 'universal T-cell epitopes' and 'MHC class II T-cell epitopes' on Google Scholar in December 2015. These datasets included epitopes with publicly available sequences that had been experimentally validated in the literature as immunogenic in mice. Because peptide training sets were used in the development of the epitope prediction tools themselves, care was taken to exclude the training datasets used for tool development [85][86][87]. In order to find the optimal set of epitopes, two different sets of epitopes were analysed. The first set was composed of publicly available series of peptide sequences from 15 different proteins that can bind to MHC class II molecules. The second set included one untested protein of T. muris, which served as a control to evaluate the performance of the tools. Collectively, the training dataset was composed of 145 epitopes from 16 different proteins (S4 Table).

Prediction of Trichuris MHC-II T-cell epitopes

Full-length sequences of proteins containing T-cell epitopes were obtained from the Universal Protein Resource (UniProt) database http://www.uniprot.org in FASTA format. The Immune Epitope Database (IEDB), NetMHC-II 2.2, Rankpep and SYFPEITHI tools were used to predict T-cell epitopes for mouse strains with I-Ab and I-Ad mouse alleles. Each tool applied a different prediction method and scoring system to generate the prediction output. For instance, the IEDB tool uses the consensus method for prediction, which combines the NN-align, SMM-align, combinatorial library and Sturniolo methods [25], while the NetMHC-II 2.2 tool uses ANNs [28]. The median percentile rank (%) of the three prediction methods was used to generate the rank for the consensus method. A small percentile rank (%) indicates that a peptide has a high binding affinity to MHC class II alleles [25]. The outputs of the IEDB and NetMHC-II 2.2 tools (the binding affinities) were expressed as half-maximal inhibitory concentration IC50 (nM) values. Epitopes that bind with an affinity of <50 nM are considered to have high affinity, those that bind with <500 nM have an intermediate affinity and those that bind with <5000 nM have a low affinity [28]. All epitopes with high or intermediate affinity are considered 'true binders', while epitopes with low affinity are considered 'non-binders'. No known T-cell epitope has an IC50 value of >5000 nM [25]. The scoring system of the SYFPEITHI prediction tool depends on whether peptide amino acids frequently occur in anchor positions. Optimal anchor residues are given the value 15, and scores of -1 or -3 points are given to amino acids that have a negative effect on an epitope's binding ability at a certain sequence position.
Epitopes that bind strongly are among the top 2% of all peptides predicted in 80% of all prediction results [49]. The Rankpep tool uses position-specific scoring matrices (PSSMs) to predict MHC-II T-cell epitopes [50]. A high peptide score percentage indicates that the epitope is similar to the set of aligned peptides that bind to a given MHC-II molecule [50]. The peptide lengths in all the resulting sets were based on 15-mer peptides containing the core binding region of MHC class II molecules.

Evaluation and statistical analysis

Using the training set of epitopes, the performance of the four MHC-II epitope prediction tools selected through our inclusion/exclusion criteria was assessed, with predicted epitopes classed as weak, intermediate or high binders. The prediction results were classified into two categories, true positive (TP) and false negative (FN), based on the threshold values [88]. In addition, the evaluation assessed sensitivity (TP/[TP+FN]). Nonparametric Spearman correlation and Bland-Altman analyses were performed to show the relationship and agreement between the scores derived from the NetMHC-II 2.2 and IEDB tools. The level of significance was set at p < 0.05 for the correlation test.

Mice and parasites

6-8-week-old male C57BL/6 mice (Envigo) were fed autoclaved food and water and were maintained under specific pathogen-free conditions. Parasite maintenance, ES collection from adult T. muris worms and the methods used for infection and evaluation of worm burden were carried out as described previously [89].

Ethics statement

All animal experiments were approved by the University of Manchester Animal Welfare and Ethical Review Board and performed under the regulation of the Home Office Scientific Procedures Act (1986) and the Home Office approved licence 70/8127.

Plasmid construction

The coding sequence for the native VLP (HBc-Ag) was engineered with BamHI + EcoRI sites to allow insertion of peptide antigen sequences into the major immunodominant region (MIR). The entire HBc-Ag coding sequence was inserted into the pET-17b expression vector between the NdeI (CATATG) and XhoI (CTCGAG) restriction sites. The insertion of each MHC-II T-cell epitope into the MIR was achieved by annealing the relevant oligonucleotide primers (S4 Table) with BamHI + EcoRI restriction sites and ligation into BamHI/EcoRI-cut HBc-Ag in the pET-17b vector. Constructs containing MHC-II T-cell epitopes were confirmed by DNA sequencing.

Production and purification of VLP recombinant proteins

The recombinant plasmids were transformed into ClearColi BL21 (DE3) electrocompetent cells (Lucigen) by heat-shock transformation. The bacterial culture was grown in LB media containing 100 μg/mL ampicillin and incubated at 37˚C for 14 hours in a shaker incubator. The transformed cells at a starting optical density of 0.1 (OD600) were inoculated into LB liquid media supplemented with 100 μg/mL ampicillin for 3-4 hours until the OD600 reached 0.6. Isopropyl-β-D-thiogalactopyranoside (IPTG) was added to the culture to a final concentration of 0.4 mM, and the culture was grown with continuous shaking for 12-16 hours at 16˚C. The cells were then harvested by centrifugation, resuspended and sonicated in Strep-Tag washing buffer (100 mM Tris-HCl, 150 mM NaCl, 1 mM EDTA) containing 1 tablet of cOmplete EDTA-free protease inhibitor cocktail (Sigma-Aldrich) per 50 ml of the resuspended cells, along with 5 μg/ml DNase I (Sigma-Aldrich).
The cell suspension was then disrupted by ultra-sonication on ice using a Bandelin Sonoplus 3200 sonicator at 35% amplitude, with pulses of 5 sec on and 10 sec off, for 5-8 minutes. The supernatant containing soluble recombinant protein was harvested following centrifugation at 18,900 x g for 40 mins at 4˚C using a Sorvall RC Plus centrifuge with the Fiber-Lite F21 8x50y rotor. Finally, the soluble supernatant was filtered through 0.22 μm pore size filters. After filtration, Strep(II)-tag proteins were purified by affinity column chromatography using a StrepTrap column prepacked with StrepTactin Sepharose, following the manufacturer's protocol (GE Healthcare). The VLP recombinant proteins were further purified by size exclusion chromatography (Superose 6, 10/300 GL; GE Healthcare). The level of endotoxin in all the purified VLPs was measured with an ELISA-based endotoxin detection assay (Hyglos) following the manufacturer's protocol.

SDS-PAGE

The VLP recombinant protein samples were subjected to 10% SDS-PAGE and subsequently characterized by TEM as described previously [20]. Briefly, the VLP recombinant proteins were separated on 10% polyacrylamide gels and stained with Instant Blue protein stain (Expedeon).

Fluorescein-conjugated VLP internalization and localization in APCs

BMDCs and BMDMs on day 8 were collected at 1 × 10^6/ml and incubated separately with 10 μg/ml fluorescein-conjugated VLP (HBc-CBD1243-1259, HBc-CBD241-257, HBc-CLSP143-158 and HBc-CLSP398-416) overnight at 37˚C in 5% CO2. Unstimulated BMDCs and BMDMs were used as negative controls. The next day, cells were incubated with 50 mM LysoTracker dye (Invitrogen) for 45 min prior to harvesting, to visualise the lysosomal localisation of fluorescein-conjugated VLPs by ImageStreamX cytometry. A Brightfield-1 filter was employed to image dendritic cells and macrophages, a fluorescein isothiocyanate (FITC, 488 nm) filter to image fluorescein-conjugated VLPs, and an APC (592 nm) filter to image lysosomes. The data were analysed using IDEAS software version 6.2.187.0. The degree of co-localization was measured by the Bright Detail Similarity (BDS-R3) on a cell-by-cell basis. A Bright Detail Similarity value of 1.0 indicates a high degree of similarity between two images in the same spatial location (correlated), and a value around 0 indicates no significant similarity (uncorrelated).

Immunization schedule and challenge infection

Mice were divided randomly into 4 groups. As a positive control, mice were inoculated s.c. with overnight ES antigens emulsified with an equal volume of Alum, an aluminium salt adjuvant [90] obtained from Thermo Scientific, to achieve a vaccine dose of 100 μg ES in 100 μl Alum.

Ag-specific antibody detection in the serum

Blood samples were collected immediately from mice by cardiac puncture and left at room temperature to clot. Parasite-specific and VLP recombinant protein-specific antibodies (IgM, IgG1 and IgG2c) were determined in sera by enzyme-linked immunosorbent assay (ELISA) as previously described [91]. Briefly, 96-well plates were coated with 50 μl/well of the overnight T. muris ES antigen at 5 μg/ml, or with 50 μl/well of the purified VLP recombinant protein, in 0.5 M carbonate-bicarbonate buffer overnight at 4˚C. The plates were washed and then blocked with 3% BSA (Sigma-Aldrich) in PBS Tween-20 (PBS-T20) (0.05% Tween 20, Sigma-Aldrich) for 1 h at 37˚C. After washing, diluted sera were added and incubated for 1 h at 37˚C.
Antibody responses were detected using biotinylated rat anti-mouse IgM, IgG1 and IgG2a/c antibodies (BD Biosciences). After washing, streptavidin peroxidase (Sigma-Aldrich) was added to the plates and incubated for 1 hour at room temperature. TMB ELISA substrate (3,3',5,5'-tetramethylbenzidine; Thermo) was used to develop colour, and the reaction was stopped with 0.003% H2SO4. The optical density was measured with a Dynex MRX11 plate reader (DYNEX Technologies) at 450 nm with a reference of 570 nm.

IgE ELISA

Serum was assayed for total IgE antibody production. 96-well plates were coated with purified anti-mouse IgE (2 μg/ml; BioLegend, clone RME-1) in 0.05 M carbonate/bicarbonate buffer and incubated overnight at 4˚C. Following coating, plates were washed in PBS-Tw and non-specific binding was blocked with 3% BSA (Sigma-Aldrich) in PBS for 1 hour at room temperature. Plates were washed, and diluted serum (1:10) was added to the plate and incubated for 2 h at 37˚C. After washing, HRP-conjugated goat anti-mouse IgE (1 μg/ml; Bio-Rad) was added to the plates for 1 hour. Finally, plates were washed and developed with a TMB substrate kit (BD Biosciences, Oxford, UK) according to the manufacturer's instructions. The reaction was stopped using 0.18 M H2SO4 when sufficient colour had developed. The plates were read with an MRX II microplate reader (Dynex Technologies, VA, USA) at 450 nm, with the reference at 570 nm subtracted.

Histology

At autopsy, colonic tissue samples were removed and fixed overnight at room temperature in 10% neutral buffered formalin, prior to storage in 70% EtOH and processing and embedding in paraffin wax. 5 μm thick serial sections were cut on a Microm HM325 microtome (Microm International, Germany), de-waxed in Citroclear and rehydrated prior to periodic acid-Schiff and haematoxylin and eosin staining. Stained slides of proximal colon sections were scanned using a 3DHISTECH Pannoramic 250 Flash slide scanner. Photographs of the sections were taken at 100X magnification using Pannoramic Viewer version 1.15.4 software. The number of goblet cells was determined as the total number of PAS-positive cells in 60 randomly selected crypts in three fields of view from each section. All samples were counted in a blinded fashion.

Statistics

Statistical analyses were performed using GraphPad Prism version 7.00 software. In all tests, P≤0.05 was considered statistically significant, determined using the Kruskal-Wallis non-parametric ANOVA for comparing multiple groups.
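As a sketch of how the group comparisons could be reproduced, the snippet below runs a Kruskal-Wallis test on hypothetical worm-burden counts for the four vaccination groups; the counts are invented for illustration, and SciPy's kruskal function provides the test used throughout the paper.

```python
# Kruskal-Wallis comparison of worm burden across vaccination groups.
from scipy.stats import kruskal

worm_burden = {
    "PBS":                  [95, 110, 102, 88, 120, 99],
    "VLP (HBc-Ag)":         [90, 105, 97, 112, 101, 93],
    "VLPs+T-cell epitopes": [40, 55, 61, 38, 70, 49],
    "ES/Alum":              [0, 0, 0, 0, 0, 0],
}

stat, p = kruskal(*worm_burden.values())
print(f"H = {stat:.2f}, p = {p:.4f}")
if p <= 0.05:
    print("At least one group differs; follow up with pairwise comparisons.")
```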
9,906.4
2019-11-28T00:00:00.000
[ "Medicine", "Biology" ]
Rational local systems and connected finite loop spaces

Greenlees has conjectured that the rational stable equivariant homotopy category of a compact Lie group always has an algebraic model. Based on this idea, we show that the category of rational local systems on a connected finite loop space always has a simple algebraic model. When the loop space arises from a connected compact Lie group, this recovers a special case of a result of Pol and Williamson about rational cofree $G$-spectra. More generally, we show that if $K$ is a closed subgroup of a compact Lie group $G$ such that the Weyl group $W_GK$ is connected, then a certain category of rational $G$-spectra `at $K$' has an algebraic model. For example, when $K$ is the trivial group, this is just the category of rational cofree $G$-spectra, and this recovers the aforementioned result. Throughout, we pay careful attention to the role of torsion and complete categories.

Introduction

The category of non-equivariant rational spectra is very simple; it is equivalent to the derived category of Q-modules. Greenlees has conjectured that for a compact Lie group G, the category of rational equivariant G-spectra is equivalent to the derived category of an abelian category A(G) [Gre06, Conjecture 6.1]. For example, when G is a finite group, the conjecture holds, and is relatively elementary to prove [GM95, Appendix A]. The conjecture has also been proved in various other cases including (but not limited to) tori [GS18], O(2) [Bar17], and SO(3) [Kȩd17]. In these cases, we say that the category of rational G-equivariant spectra has an algebraic model. One can additionally ask for more structure to be preserved; for example, one can ask for an equivalence of symmetric monoidal categories.

Inside the category of G-spectra sit the categories of free and cofree (or Borel complete) G-spectra. The category of free G-spectra consists of those G-spectra that can be constructed from free cells Σ^∞_+ G. More specifically, it can be constructed as the localizing subcategory inside G-spectra generated by Σ^∞_+ G. Equivalently, these are the G-spectra for which EG_+ ⊗ X → X is an equivalence, where EG_+ is the suspension spectrum of the universal free G-space EG (see Section 3.2). The category of cofree G-spectra is the Bousfield localization of Sp_G at Σ^∞_+ G, or equivalently the G-spectra for which X → F(EG_+, X) is an equivalence. Similarly, we can construct the categories of free and cofree rational G-spectra, which we denote by Sp^free_{G,Q} and Sp^cofree_{G,Q}, respectively. In fact, these categories are equivalent, although not by the identity functor. These categories fit into a general construction of torsion and complete categories, see Section 2.1. It is reasonable to conjecture that there is an algebraic model for these categories, and this is indeed the case [GS11, GS14, PW20]. We state the result for a connected compact Lie group; however, we note that the cited results consider more generally arbitrary compact Lie groups.

Theorem 1.1 ([GS11, PW20]). Let G be a connected compact Lie group, and let I ⊆ H^*(BG) denote the augmentation ideal of the rational cohomology ring. Then there are equivalences

  Sp^free_{G,Q} ≃ Mod^{I-tors}_{H^*(BG),inj}  and  Sp^cofree_{G,Q} ≃ Mod^{I-comp}_{H^*(BG),proj}.

Here the categories Mod^{I-tors}_{H^*(BG),inj} and Mod^{I-comp}_{H^*(BG),proj} are the categories of I-torsion dg-H^*(BG)-modules and L^I_0-complete dg-H^*(BG)-modules respectively, equipped with an injective and projective model structure, respectively (see Section 2.3). Moreover, the second equivalence is even shown to be symmetric monoidal.
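For orientation, here is the smallest nontrivial connected example. The computation of the rational cohomology of BSU(2) is standard, and the displayed instance of the theorem is included purely as an illustration.

```latex
% Illustration: the theorem for G = SU(2). The rational cohomology of
% BSU(2) is a polynomial ring on the second Chern class, so the torsion
% and complete module categories live over a very small ring.
\[
  H^*(BSU(2);\mathbb{Q}) \;\cong\; \mathbb{Q}[c], \qquad |c| = 4,
  \qquad I = (c),
\]
\[
  \mathrm{Sp}^{\mathrm{free}}_{SU(2),\mathbb{Q}}
    \;\simeq\; \mathrm{Mod}^{I\text{-}\mathrm{tors}}_{\mathbb{Q}[c],\,\mathrm{inj}},
  \qquad
  \mathrm{Sp}^{\mathrm{cofree}}_{SU(2),\mathbb{Q}}
    \;\simeq\; \mathrm{Mod}^{I\text{-}\mathrm{comp}}_{\mathbb{Q}[c],\,\mathrm{proj}}.
\]
```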
In fact, Greenlees and Shipley have given two proofs for the equivalence between free G-spectra and torsion H^*(BG)-modules when G is a connected compact Lie group. The first [GS11] passes from equivariant homotopy to algebra almost immediately, while the second [GS14] (which also deals with the non-connected case) stays in the equivariant world as long as possible. As noted by the authors, staying in the equivariant world seems to help the extension to the non-connected case. In the cofree case, the authors also stay in the equivariant world as long as possible. Our approach is to move away from equivariant homotopy immediately, and as such is closer in spirit to the original proof of Greenlees and Shipley. Indeed, we begin with the observation that there is a symmetric monoidal equivalence of ∞-categories

  Sp^cofree_{G,Q} ≃_⊗ Fun(BG, Mod_HQ),   (1.2)

see Proposition 3.11, where Fun(−, −) denotes the ∞-category of functors and BG is considered as an ∞-groupoid. We call this the ∞-category of rational local systems on BG. An advantage of moving away from equivariant homotopy is that one can work more generally. For a space Y (again thought of as an ∞-groupoid) we let Loc_HQ(Y) = Fun(Y, Mod_HQ) denote the ∞-category of rational local systems on Y. In particular, when K = G the category of rational G-spectra 'at K' satisfies Sp_{G,Q,G} ≃_⊗ Sp_Q, the ordinary category of rational non-equivariant spectra, and this is just the statement that the rational stable homotopy category is equivalent to the derived category of Q-vector spaces. We finish by constructing an Adams spectral sequence in the category Loc_HQ(BX) for X a connected finite loop space. In fact, we show that the Adams spectral sequence can easily be constructed using the universal coefficient spectral sequence for ring spectra [EKMM97, Theorem IV.4.1].

Conventions. We work throughout mainly with ∞-categories, although some results need to be translated from model categories to ∞-categories; in Appendix A we give a very brief recap of what we need, as well as references to more detailed accounts. An adjunction F : C ⇆ D : G between symmetric monoidal stable ∞-categories will be called symmetric monoidal if F is a symmetric monoidal functor. Note that in this case G automatically acquires the structure of a lax symmetric monoidal functor [Lur17, Corollary 7.3.2.7]. For a compact Lie group G, we will write Sp_G for the ∞-category of G-equivariant spectra; in the non-equivariant case, we write Sp. For a space X, and an ∞-category C, we will write Fun(X, C) for the ∞-category of functors from X to C, where X is thought of as an ∞-groupoid. For example, when X = BG, the category Fun(BG, C) denotes the ∞-category of objects in C with a G-action. A localizing category D of C is a full, stable subcategory of C that is closed under extensions, retracts, and filtered colimits. It is additionally an ideal if X ∈ D and Y ∈ C implies X ⊗ Y ∈ D. Given a collection of objects {X_i}_{i∈I} ∈ C we will write Loc({X_i | i ∈ I}) for the smallest localizing subcategory of C containing each X_i. In the case of a single object X, we simply write Loc(X). Finally, if C is a closed symmetric monoidal category with internal hom object F(−, −) and monoidal unit 𝟙, then we write DX = F(X, 𝟙) for the internal dual of an object X.

Completion and torsion in algebra and topology

We begin by reviewing the construction of torsion and complete categories in a symmetric monoidal stable ∞-category. We consider torsion and completion for ring spectra and dg-algebras, and relate the latter to algebraic categories of torsion and complete objects.

2.1. Torsion and complete objects.
We recall the basics of torsion and complete objects in a symmetric monoidal presentable stable ∞-category (C, ⊗, 𝟙). For simplicity, we assume that C is compactly generated by dualizable objects. Note that our assumptions imply that C is closed monoidal, and we write Hom_C(−, −) for the internal Hom object in C. They also imply that all compact objects are dualizable [BHV18b, Lemma 2.5] (with the converse holding if the unit 𝟙 is compact). The theory in this section goes back to (at least) Hovey-Palmieri-Strickland [HPS97], and has also been considered by Dwyer-Greenlees [DG02], Mathew-Naumann-Noel [MNN17], and Barthel-Heard-Valenzuela [BHV18a]. We consider three full subcategories of C defined in the following way.

Definition 2.1. Let A = {A_i} be a set of compact (and hence dualizable) objects of C.
(1) We say that M ∈ C is A-torsion if it is in the localizing subcategory of C generated by the set A. We let C^{A-tors} ⊆ C denote the full subcategory of A-torsion objects.
(2) We say that M ∈ C is A-local if Hom_C(T, M) ≃ 0 for every T ∈ C^{A-tors}. We let C^{A-loc} ⊆ C denote the full subcategory of A-local objects.
(3) We say that M ∈ C is A-complete if Hom_C(Y, M) ≃ 0 for every Y ∈ C with A_i ⊗ Y ≃ 0 for all i. We let C^{A-comp} ⊆ C denote the full subcategory of A-complete objects.

Remark 2.2. Note that we do not assume that C^{A-tors} is a localizing ideal, i.e., it is not automatically closed under tensor products. However, in practice, we will often be in the situation where every localizing subcategory is automatically a tensor ideal (for example, this holds whenever the category C has a single compact generator [HPS97, Lemma 1.4.6]).

The following is shown in [HPS97, Theorem 3.3.5] or [BHV18a, Theorem 2.21].

Theorem 2.3 (Abstract local duality). Let C and A be as above.
(1) The inclusion functor ι_tors : C^{A-tors} ↪ C has a right adjoint Γ_A, and the inclusion functors ι_loc : C^{A-loc} ↪ C and ι_comp : C^{A-comp} ↪ C have left adjoints −[A^{-1}] and Λ_A, respectively.
(2) There are cofiber sequences Γ_A X → X → X[A^{-1}] for all X ∈ C. In particular, Γ_A is a colocalization functor and both −[A^{-1}] and Λ_A are localization functors.
(3) The functors Λ_A : C^{A-tors} → C^{A-comp} and Γ_A : C^{A-comp} → C^{A-tors} are mutually inverse equivalences of stable ∞-categories.
(4) Considered as endofunctors of C, Γ_A is left adjoint to Λ_A; that is, there are natural equivalences Hom_C(Γ_A X, Y) ≃ Hom_C(X, Λ_A Y).

Remark 2.4. We note that the functors and categories above do not depend on the set A, but only on the thick subcategory it generates.

Remark 2.6. In the literature A-torsion objects are also sometimes referred to as A-cellular objects, for example in [GS13] (see in particular [GS13, Proposition 2.5 and Corollary 2.6]).

Pictorially, we can represent the functors and categories in the following diagram. [diagram omitted]

The following is a version of the cellularization principle of Greenlees and Shipley [GS13]. Suppose that F : C ⇆ D : G is an adjunction of symmetric monoidal presentable stable ∞-categories as above.
(1) Let K be in C and suppose that the following hold:
(a) K is compact in C, and F(K) is compact in D.
(b) The unit η_K : K → GF(K) is an equivalence.
Then, there is an equivalence of ∞-categories C^{K-tors} ≃ D^{F(K)-tors}.
(2) Let L be in D and suppose that the following hold:
(a) L is compact in D, and G(L) is compact in C.
(b) The counit ǫ_L : FG(L) → L is an equivalence.
Then, there is an equivalence of ∞-categories C^{G(L)-tors} ≃ D^{L-tors}.

Proof. We prove (1), and leave the minor adjustments for (2) to the reader. We first claim that (F, G) gives rise to an adjunction F′ : C^{K-tors} ⇆ D^{F(K)-tors} : G′. Indeed, because F preserves colimits, F(Loc(K)) ⊆ Loc(F(K)), see, for example, [BCHV19, Lemma 2.5]. We can therefore take F′ to be the restriction of F to Loc(K). Setting G′ = Γ_K G, one verifies that (F′, G′) form an adjoint pair, which we claim is an equivalence. Indeed, consider the full subcategory of C^{K-tors} consisting of those X for which the unit X → G′F′(X) is an equivalence. This is a localizing subcategory containing K by assumption. Since K generates C^{K-tors}, this localizing subcategory is all of C^{K-tors}.
Likewise, the full subcategory of D^{F(K)-tors} consisting of those Y for which the counit F′G′(Y) → Y is an equivalence is localizing. Moreover, it contains F(K) by the triangle identities, and hence is equal to D^{F(K)-tors}.

A sort of dual result, due to Pol and Williamson, is the compactly generated localization principle [PW20, Theorem 3.14]. Again, we only prove a special case of their theorem which will suffice for our purposes. Suppose that F : C ⇆ D : G is a symmetric monoidal adjunction of symmetric monoidal presentable stable ∞-categories, and fix objects K ∈ C and L ∈ D.
(1) Let E ∈ C and suppose that the following hold:
(a) L_E C is compactly generated by K and L_{F(E)} D is compactly generated by F(K).
(b) The unit map η_K : K → GF(K) is an equivalence.
Then, there is a symmetric monoidal equivalence of ∞-categories L_E C ≃ L_{F(E)} D.
(2) Let E′ ∈ D and suppose that the following hold:
(a) L_{E′} D is compactly generated by L and L_{G(E′)} C is compactly generated by G(L).
(b) The counit maps ǫ_L : FG(L) → L and ǫ_{E′} : FG(E′) → E′ are equivalences.
Then, there is a symmetric monoidal equivalence of ∞-categories L_{G(E′)} C ≃ L_{E′} D.

Proof. We prove (1); the proof for (2) is similar, and the extra assumption is only used to ensure that the adjunction descends to the localized categories, as we now describe in (1). First observe that if Y ∈ C is E-acyclic, then F(Y) ∈ D is F(E)-acyclic because F is a symmetric monoidal functor. We claim it follows that if N ∈ L_{F(E)} D, then G(N) ∈ L_E C. To see this, choose an E-acyclic Y; then we must show that Hom_C(Y, G(N)) ≃ 0. By adjunction Hom_C(Y, G(N)) ≃ Hom_D(F(Y), N), which vanishes because F(Y) is F(E)-acyclic and N is F(E)-local. By inspection we then have a symmetric monoidal adjunction F′ : L_E C ⇆ L_{F(E)} D : G′. First, because F(K) ∈ L_{F(E)} D it is not hard to see that assumption (b) implies that the unit map η′_K : K → G′F′(K) is also an equivalence. Note that F′ preserves colimits, and since it preserves compact objects by assumption (a), its right adjoint G′ preserves colimits as well. It follows that the unit is always an equivalence, and that F′ is fully faithful. It then follows from the triangle identities that the counit F′G′(F(K)) → F(K) is also an equivalence, and a localizing subcategory argument shows then that the counit is always an equivalence. Hence, G′ is also fully faithful, and (F′, G′) is an adjoint equivalence as claimed.

2.2. Torsion and completion for graded commutative rings.

Throughout this section we fix a graded commutative ring A, and let Mod_A denote the category of dg-A-modules. We can give this category the projective model structure [BMR14, Theorem 3.3] with weak equivalences the quasi-isomorphisms, fibrations the degreewise surjections, and cofibrations the subcategory of maps which have the left lifting property with respect to every map which is simultaneously a fibration and a weak equivalence. This is a compactly generated (in the sense of [BMR14, Definition 6.5]) monoidal model category, and we write D_A for the associated symmetric monoidal stable ∞-category (see Appendix A for a very brief summary of the translation between model categories and ∞-categories). We can also give Mod_A the injective model structure with weak equivalences the quasi-isomorphisms, cofibrations the degreewise monomorphisms, and fibrations those maps which have the right lifting property with respect to every map that is simultaneously a cofibration and a weak equivalence. Because the weak equivalences are the same as in the projective model structure, the underlying ∞-category D_A does not depend on which model structure we use. However, the injective model structure is not monoidal, and so from this perspective one does not see the symmetric monoidal structure on D_A.
For any x ∈ A, we define the unstable Koszul complex as K(x) = fib(Σ^{|x|} A → A), the fiber of multiplication by x, where the fiber is taken in D_A, and the stable Koszul complex K_∞(x) = fib(A → A[x^{-1}]), where, as usual, A[x^{-1}] is defined as the colimit of the multiplication by x map. Let I = (x_1, . . . , x_n) be a finitely generated ideal, and then define K(I) = K(x_1) ⊗_A · · · ⊗_A K(x_n) and K_∞(I) = K_∞(x_1) ⊗_A · · · ⊗_A K_∞(x_n).

Definition 2.12. Let D^{I-tors}_A denote the localizing subcategory of D_A generated by the compact object K(I).

Accordingly, applying the general machinery of Section 2.1, we have the categories D^{I-tors}_A, D^{I-loc}_A and D^{I-comp}_A, with torsion, localization and completion functors Γ_I, −[I^{-1}] and Λ_I, as well as an equivalence of ∞-categories D^{I-tors}_A ≃ D^{I-comp}_A.

Remark 2.14. The notation Γ_I is justified by the natural equivalence Γ_I M ≃ M ⊗_A K_∞(I), where the tensor product is taken in D_A. In particular, we see that M ∈ D^{I-tors}_A if and only if the natural map M ⊗_A K_∞(I) → M is an equivalence. This characterization will prove useful later.

Remark 2.15. The categories D^{I-tors}_A and D^{I-comp}_A can both be characterized purely homologically. Indeed, using the local cohomology and homology spectral sequences (see [BHV18a, Proposition 3.20] or [DG02, Section 6]) one sees that D^{I-tors}_A = {M ∈ D_A | H_*(M) is I-power torsion} and D^{I-comp}_A = {M ∈ D_A | H_*(M) is L^I_0-complete}, where I-torsion and L^I_0-completion are discussed in more detail in Section 2.3.

2.3. Algebraic torsion and completion for graded rings.

In this section, we compare the categories constructed via local duality in the previous section with derived categories of certain abelian categories. We now suppose that A is Noetherian, and that I is generated by a regular sequence. These assumptions can be weakened; it would suffice to take A to be a commutative ring and I to be a weakly proregular sequence (see [PSY14, Definition 3.21]); however, the stronger assumptions suffice for our purposes. Let I ⊂ A be an ideal, and let Mod^{I-tors}_A be the abelian subcategory of I-torsion modules, i.e. those M ∈ Mod_A for which every element of the underlying graded module is annihilated by a power of I, see [BS13]. We note that Mod^{I-tors}_A is Grothendieck abelian, see [Sta20, Tag 0BJA], and is hence locally presentable [Bek00, Proposition 3.10]. We recall that there is an adjunction ι : Mod^{I-tors}_A ⇄ Mod_A, where the right adjoint sends a module to its submodule of I-power torsion elements.

We now move on to the completion functor. Here, the algebraic version of completion we use is not I-adic completion (which is neither left nor right exact in general) as one may expect, but rather L^I_0-completion, which we recall now (for a useful summary, see [HS99, Appendix A]).

Definition 2.18. Let L^I_0 denote the zeroth left derived functor of the (non-exact) I-adic completion functor; then M is said to be L^I_0-complete if the natural map M → L^I_0 M is an isomorphism.

Example 2.19. In the simple case where A = Z and I = (p), Bousfield and Kan defined a notion of Ext-p completeness by asking that the natural map M → Ext^1_Z(Z/p^∞, M) be an isomorphism. This turns out to be equivalent to asking that M is L^I_0-complete.

For a dg-module M, we say that M is L^I_0-complete if the underlying graded module is, and let Mod^{I-comp}_A denote the full subcategory of L^I_0-complete dg-modules. There is an adjunction L^I_0 : Mod_A ⇄ Mod^{I-comp}_A : ι. The subcategory Mod^{I-comp}_A of L^I_0-complete modules is abelian, but not Grothendieck, as filtered colimits are not exact. Following unpublished notes of Rezk [Rez18], Pol and Williamson [PW20, Proposition 7.5] showed that Mod^{I-comp}_A admits a projective model structure with weak equivalences the quasi-isomorphisms, fibrations the degreewise surjections, and cofibrations the subcategory of maps which have the left lifting property with respect to every map which is simultaneously a fibration and a weak equivalence. This model structure is symmetric monoidal, and the above adjunction is a Quillen adjunction [PW20, Proposition 7.7], which is symmetric monoidal because L^I_0 is monoidal and the unit A is cofibrant.
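Before passing to underlying ∞-categories, it may help to record the simplest worked case of the torsion functor from Section 2.2, for A = Q[x] and I = (x); this is a standard local cohomology computation included only as an illustration.

```latex
% Standard worked example for A = \mathbb{Q}[x], I = (x); illustrative only.
\[
  K_\infty(x) \;=\; \operatorname{fib}\bigl(A \longrightarrow A[x^{-1}]\bigr),
  \qquad
  \Gamma_I M \;\simeq\; M \otimes_A K_\infty(x).
\]
% Since A \to A[x^{-1}] is injective, the homology of \Gamma_I A is the
% classical local cohomology of A at I, concentrated in a single degree:
\[
  H^0_I(A) = 0,
  \qquad
  H^1_I(A) \;\cong\; A[x^{-1}]/A .
\]
% In particular the homology of \Gamma_I A is x-power torsion, so
% \Gamma_I A lies in D_A^{I\text{-}\mathrm{tors}}, as Remark 2.15 predicts.
```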
Passing to underlying ∞-categories, we obtain the following.

Theorem 2.20. There is a symmetric monoidal adjunction of stable ∞-categories L_0^I : D_A ⇄ D(Mod_A^{I-comp}) : i, and a symmetric monoidal equivalence of ∞-categories D(Mod_A^{I-comp}) ≃_⊗ D_A^{I-comp}.

Proof. As shown by Rezk [Rez18, Theorem 10.2], the counit of the above adjunction is an equivalence (i.e., i is a fully faithful functor and L_0^I is a Bousfield localization), with image those complexes whose homology is L_0^I-complete. The essential image is then precisely D_A^{I-comp}, see Remark 2.15. The equivalence is symmetric monoidal because L_0^I is a symmetric monoidal functor.

2.4. An algebraic geometric description of local objects. Let X be a quasi-compact separated scheme; then we can associate to it the derived ∞-category D_qc(X) of complexes with quasi-coherent cohomology. Given a morphism f : X → Y of quasi-compact separated schemes we can define (derived) pushforward and pullback functors f_* : D_qc(X) → D_qc(Y) and f^* : D_qc(Y) → D_qc(X), where the pair (f^*, f_*) is adjoint. We now continue with the notation of the previous section, and so we fix a graded Noetherian ring A and a homogeneous ideal I = (x_1, ..., x_n). Geometrically, we let X = Spec(A) (the spectrum of homogeneous prime ideals in the graded ring A), Z = V(I), the closed subset of X defined by I, and U = X − Z. We then have an open immersion j : U → X. We define the ∞-category D^Z_qc(X) as the full subcategory of D_qc(X) consisting of those F for which j^*F ≃ 0 in D_qc(U). Observe that U can be written as a union of open subschemes of the form Spec(A[1/x_i]). Using this, we can give an identification of the local category D_A^{I-loc}. We learned that such an approach is possible from [PSY14, Section 7].

Theorem 2.22. Let X, Z and U be as above. (1) The pushforward j_* : D_qc(U) → D_qc(X) is fully faithful. (2) There is an equivalence of ∞-categories D_A^{I-loc} ≃ j_*D_qc(U), where the right-hand side denotes the essential image of j_*.

Proof. (1) follows by applying the classical flat base-change theorem (see, for example, [Nee20, Proposition 3.1.3.1]) to the pullback square witnessing that j is an open embedding [diagram omitted]. Indeed, it implies that the counit j^*j_* → id is an equivalence, so that j_* is fully faithful as claimed. Let us write E for the essential image of j_*. Let ⊥E denote the left orthogonal to E, i.e., the full subcategory of D_qc(X) on those objects F for which Hom(F, G) ≃ 0 for all G ∈ E.
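As a concrete illustration of Theorem 2.22 (added here; it is the standard affine example rather than one worked out in the text), take A = Z and I = (p):

```latex
\mathcal{X} = \operatorname{Spec}(\mathbb{Z}), \qquad
\mathcal{Z} = V(p), \qquad
\mathcal{U} = \mathcal{X} \smallsetminus \mathcal{Z}
            = \operatorname{Spec}(\mathbb{Z}[1/p]), \qquad
j \colon \mathcal{U} \hookrightarrow \mathcal{X}.
```

Here D_qc(U) ≃ D_{Z[1/p]}, and j_* identifies it with the full subcategory of D_Z consisting of the complexes on which p acts invertibly, which is exactly the local category D_Z^{(p)-loc}.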
2.5. Torsion and complete objects for ring spectra. We now consider the case where C = Mod_R for a commutative ring spectrum R with π_*R Noetherian. Suppose we are given an ideal I = (x_1, ..., x_n) ⊆ π_*R. We first construct natural analogues of the Koszul complexes we constructed for graded rings. To that end, for x ∈ π_*R we let K(x) be the fiber of the map Σ^{|x|}R → R given by multiplication by x, and then define the unstable Koszul complex as K(I) = K(x_1) ⊗_R ··· ⊗_R K(x_n). We then define Mod_R^{I-tors} to be the category of torsion objects with respect to the compact object A = K(I), and so we also obtain the categories Mod_R^{I-loc} and Mod_R^{I-comp}. We also define K_∞(x) to be the fiber of R → R[1/x], and then K_∞(I) = K_∞(x_1) ⊗_R ··· ⊗_R K_∞(x_n). The following is implicit in the proof of [DGI06, Proposition 9.3].

Proposition 2.23. Suppose that k is a field, R is a coconnective commutative augmented k-algebra, and that π_*R is Noetherian, such that the augmentation induces an isomorphism π_0R ≅ k. Let I denote the augmentation ideal; then there is a symmetric monoidal equivalence of ∞-categories Mod_R^{I-comp} ≃_⊗ L_k Mod_R, where L_k Mod_R is the Bousfield localization of Mod_R at k in the category of R-modules.

Proof. By Lemma 2.8 we have Mod_R^{I-comp} ≃ L_{K(I)} Mod_R, so it suffices to show that there is an equivalence of Bousfield classes ⟨k⟩ = ⟨K(I)⟩, i.e., that for any M ∈ Mod_R we have k ⊗_R M ≃ 0 if and only if K(I) ⊗_R M ≃ 0. It is clear that π_*K(I) is finite dimensional over k, and hence by [DGI06, Proposition 3.16] K(I) is in the thick subcategory of R-modules generated by k (note that it is here where the conditions on R and k are required). This easily implies that if k ⊗_R M ≃ 0, then K(I) ⊗_R M ≃ 0 as well. For the converse, we first claim that k is in the localizing subcategory generated by K(I). Indeed, k ⊗_R K_∞(I) ≃ Γ_I(k) ≃ k by Remark 2.13, and so k ∈ Loc_R(K(I)). Once again, a simple argument now shows that if K(I) ⊗_R M ≃ 0, then k ⊗_R M ≃ 0. This completes the proof.

3. Equivariant homotopy theory

In this section we study the stable equivariant category of a compact Lie group G. To that end, we let Sp_G be the symmetric monoidal ∞-category of genuine G-spectra for G a compact Lie group, see [MNN17, Section 5], which is based on the model-theoretic foundations of Mandell and May [MM02]. This category is compactly generated by the set {G/H₊ ∈ Sp_G}_{H≤G}, where H ≤ G is a closed subgroup (we are omitting the suspension spectrum functor from our notation). Moreover, these objects are dualizable by [LMSM86, Corollary II.6.3]. The category Sp_G is closed monoidal, and we will let F(−, −) denote the internal hom object in G-spectra.

3.1. Change of group functors. There is a variety of functors in use in equivariant homotopy theory. Here we recall what we need; details can be found in, for example, [LMSM86], Appendix A of [HHR16], or [Sch18, Chapter 3].
(1) Any group homomorphism f : H → G induces a symmetric monoidal functor f^* : Sp_G → Sp_H. If f is the inclusion of a subgroup, then we denote this by Res^G_H : Sp_G → Sp_H.
(2) Restriction has a left adjoint, given by induction, G₊ ⊗_{H₊} (−) : Sp_H → Sp_G.
(3) For each closed subgroup K ≤ G there are geometric fixed points, where we observe that Φ^K : Sp_K → Sp has a residual action by the Weyl group W_GK (see [Sch18, Remark 3.3.6]). By [Sch18, Proposition 3.3.10] the functors {φ^K}, as K runs through the closed subgroups of G, are jointly conservative. These also have the property that φ^K(Σ^∞X) ≃ Σ^∞(X^K) for any G-space X, and they are symmetric monoidal, colimit-preserving functors.

3.2. Torsion and complete objects for genuine equivariant G-spectra. We now review the construction of the categories of free and cofree (or Borel-complete) G-spectra in the context of the torsion and complete objects studied in Section 2.1. We recall the definition of a family of subgroups.

Definition 3.2. A family of closed subgroups is a non-empty collection F of closed subgroups of G closed under conjugation and passage to subgroups.

Associated to F are G-spaces EF and ẼF, characterized by the fixed-point data (EF)^H ≃ * for H ∈ F and (EF)^H = ∅ otherwise, and (ẼF)^H ≃ * for H ∈ F and (ẼF)^H ≃ S^0 otherwise. In fact, the G-spaces EF and ẼF are determined up to homotopy by their behavior on fixed points [Lüc05, Theorem 1.9]. Associated to these spaces is a cofiber sequence of pointed G-spaces EF₊ → S^0 → ẼF. We will also let EF₊ and ẼF denote the suspension spectra of the same pointed G-spaces.
(1) If F_e = {{e}}, the family consisting only of the trivial subgroup, then a model for EF_e is the universal G-space EG.
(2) If F = All, the family of all closed subgroups of G, then a model for EF is a point.

Given a family F, we obtain as in Section 2.1 the associated torsion, localization, and completion functors, denoted Γ_{A_F}, −[A_F^{-1}], and Λ^{A_F}, with respect to the compact objects {G/H₊ | H ∈ F}. The situation can be shown diagrammatically as follows. [diagram omitted]

The following is essentially the content of [Gre01, Section 4]. For finite G, see also [MNN17, Propositions 6.5 and 6.6].

Proposition 3.8. The A_F-torsion, localization, and completion functors are given by Γ_{A_F}(X) ≃ EF₊ ⊗ X, X[A_F^{-1}] ≃ ẼF ⊗ X, and Λ^{A_F}(X) ≃ F(EF₊, X).

Proof. For finite G this is [BHV18a, Theorem 8.6]; however, the same proof works for a compact Lie group. Indeed, the key observation is due to Greenlees [Gre01, Section 4], who shows that Γ_{A_F}(S_G) = EF₊. Because Γ_{A_F} is smashing, this determines its behavior on all of Sp_G. The identification of −[A_F^{-1}] then comes from comparing the cofiber sequences of Theorem 2.3(2) and (3.4), while local duality (Theorem 2.3(4)) gives the identification of Λ^{A_F}.
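For the smallest nontrivial example (added here for illustration; it is the standard one), let G = C_p, which has only the two families F_e = {{e}} and All. A model for EF_e is EC_p, and the cofiber sequence together with its behavior under geometric fixed points reads:

```latex
EC_{p+} \longrightarrow S^0 \longrightarrow \widetilde{E}C_p,
\qquad
\phi^{e}(EC_{p+}) \simeq S^0, \quad
\phi^{C_p}(EC_{p+}) \simeq 0, \quad
\phi^{C_p}(\widetilde{E}C_p) \simeq S^0.
```

Thus the cofiber sequence separates a C_p-spectrum into a part built from free cells and a part seen only by the C_p-geometric fixed points, which is the pattern that the torsion, localization, and completion functors of Proposition 3.8 axiomatize.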
Definition 3.9. X is said to be free (respectively, cofree) if it is A_F-torsion (respectively, A_F-complete) for the family F = {{e}} consisting only of the trivial subgroup.

The following is [MNN17, Proposition 6.19] in the case when G is a finite group. The same proof works for compact Lie groups, with the exception that we need only use closed subgroups, because {G/H₊ ∈ Sp_G}_{H≤G} is a set of generators for Sp_G, where H ≤ G is a closed subgroup.

Proposition 3.10. Suppose X is a G-spectrum with underlying spectrum with G-action X_u ∈ Fun(BG, Sp). Then the following are equivalent: (1) X is cofree, i.e., the natural map X → F(EG₊, X) is an equivalence in Sp_G. (2) For each closed subgroup H ≤ G the map X^H → X_u^{hH} is an equivalence of spectra.

We now introduce an alternative model for cofree G-spectra. For finite G, this is [MNN17, Proposition 6.17] or [NS18, Theorem II.2.7], where for the latter we use Proposition 3.10 to identify Nikolaus and Scholze's Borel-complete G-spectra with cofree spectra. The latter proof generalizes to compact Lie groups.

Proposition 3.11. There are equivalences of symmetric monoidal ∞-categories Sp^{cofree}_G ≃_⊗ Fun(BG, Sp) and Sp^{cofree}_{G,Q} ≃_⊗ Fun(BG, Mod_HQ).

Proof. We explain the global case; the rationalized case is identical. We first observe that there is a natural functor Sp_G → Fun(BG, Sp), see [NS18, p. 249]. Alternatively, this is just the observation that the restriction Sp_G → Sp naturally lands in Fun(BG, Sp). Using Proposition 3.8, the same argument (footnote 4) as in [NS18, Theorem II.2.7] shows that the functor Sp_G → Fun(BG, Sp) factors over Λ_G (which is the functor denoted L by Nikolaus-Scholze) and that, moreover, the induced functor Sp^{cofree}_G → Fun(BG, Sp) is an equivalence.

3.3. The category of G-spectra at K. We now construct a category of G-spectra 'at K', where K is a closed subgroup of G. If K = {e} is the trivial subgroup, then this will just be the category of cofree G-spectra, while if K = G itself, then this will be equivalent to the ordinary category of non-equivariant spectra.

Definition 3.12. For a closed subgroup K ≤ G, let F_{≥K} denote the family of closed subgroups H of G such that K is not subconjugate to H. This defines a localized category Sp_G[A^{-1}_{F≥K}]. Additionally, let F_{≤K} denote the family of closed subgroups H that are subconjugate to K, and F_{<K} the family of proper subgroups subconjugate to K. If we let (H) denote the conjugacy class of a closed subgroup H ≤ G, and write (H) ≤ (K) when H is subconjugate to K, then we can write F_{≥K} = {H | (K) ≰ (H)}, F_{≤K} = {H | (H) ≤ (K)}, and F_{<K} = {H | (H) < (K)}.

Remark 3.13. If K is a closed normal subgroup, then Sp_G[A^{-1}_{F≥K}] is known as the category of G-spectra concentrated over K, see [LMSM86, Chapter II.9].

Lemma 3.14. The following are equivalent for a G-spectrum X: (1) X ∈ Sp_G[A^{-1}_{F≥K}]; (2) φ^H(X) ≃ 0 for all H ∈ F_{≥K}.

Proof. See [QS19, Lemma 3.20] for the finite group case, although the argument holds equally well in the case of compact Lie groups. For the benefit of the reader, we spell out the details. If (1) holds, then X → ẼF_{≥K} ⊗ X is an equivalence by Proposition 3.8. Given that φ^H is symmetric monoidal, (3.1) and the behavior of the fixed points of ẼF_{≥K} (see (3.3)) show that (2) must then hold. Conversely, suppose that (2) holds. To show that (1) holds, it suffices to show that X ⊗ (EF_{≥K})₊ ≃ 0. By [Sch18, Proposition 3.3.10] we can test this after applying φ^H, as H runs through the closed subgroups of G. We then have φ^H(X ⊗ (EF_{≥K})₊) ≃ φ^H(X) ⊗ Σ^∞((EF_{≥K})^H)₊. By assumption (2) and (3.3) this is always trivial, as required.
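To unwind Definition 3.12 and Lemma 3.14 in a small case (an illustration added here), take G = C_{p^2}, with closed subgroups e ⊂ C_p ⊂ C_{p^2}, and K = C_p. Then

```latex
\mathcal{F}_{\geq C_p} = \{e\}, \qquad
\mathcal{F}_{\leq C_p} = \{e,\, C_p\}, \qquad
\mathcal{F}_{< C_p} = \{e\},
```

so by Lemma 3.14 a C_{p^2}-spectrum X lies in Sp_G[A^{-1}_{F≥C_p}] precisely when φ^e(X) ≃ 0, i.e. when X is invisible to the underlying spectrum.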
The following is [LMSM86, Corollary II.9.6] in the global case, and the rational case follows with an identical argument.

Proposition 3.15 (Lewis-May-Steinberger). Let G be a compact Lie group; then for any closed normal subgroup N of G, categorical fixed points induce equivalences of symmetric monoidal ∞-categories Sp_G[A^{-1}_{F≥N}] ≃ Sp_{G/N} and Sp_{G,Q}[A^{-1}_{F≥N}] ≃ Sp_{G/N,Q}. More specifically, the (non-rationalized) equivalence is given as the composite of the inclusion Sp_G[A^{-1}_{F≥N}] → Sp_G with the categorical fixed points (−)^N : Sp_G → Sp_{G/N}, with inverse given by inflation followed by the localization.

Remark 3.16. The geometric fixed points functor Φ^N : Sp_G → Sp_{G/N} is defined as the composite of the localization Sp_G → Sp_G[A^{-1}_{F≥N}] with the equivalence of Proposition 3.15. In general, the above composite makes sense for an arbitrary closed subgroup K ≤ G, and defines a functor Φ̃^K : Sp_G → Sp_{W_GK}. We claim that Φ̃^K ≃ Φ^K, where the latter is defined in Section 3.1. In order to make the dependence on the group clear, we write F^G_{≥K} ⊆ Sub(G) and F^{N_GK}_{≥K} ⊆ Sub(N_GK), where Sub(N_GK) is the set of closed subgroups of N_GK. It is also then not hard to check, using fixed points, that Res^G_{N_GK}(ẼF^G_{≥K}) is a model for ẼF^{N_GK}_{≥K}. To see that the two functors are the same, we first claim that Res^G_{N_GK} : Sp_G → Sp_{N_GK} restricts to a functor Res^G_{N_GK} : Sp_G[A^{-1}_{F^G_{≥K}}] → Sp_{N_GK}[A^{-1}_{F^{N_GK}_{≥K}}]. Indeed, given M in the source, by Lemma 3.14 we must show that φ^H(Res^G_{N_GK}M) ≃ 0 for every H ∈ F^{N_GK}_{≥K}. Since H ∈ F^{N_GK}_{≥K} we see that H ∈ F^G_{≥K} as well. By Lemma 3.14 and the assumption on M, we deduce that φ^H(Res^G_{N_GK}M) ≅ φ^H(M) ≅ 0, as required. It now follows that the relevant diagram of localizations, restrictions, and fixed points [diagram omitted] commutes; the first square commutes by the discussion above, the middle square is clear, and the third square commutes by definition of (−)^K. This is precisely the claim that Φ̃^K ≃ Φ^K.

As noted in [QS19, Remark 3.28], a set of compact generators for Sp_G[A^{-1}_{F≥K}] is given by {G/H₊ ⊗ ẼF_{≥K} | H ∉ F_{≥K} a closed subgroup} (this also follows from the fact that the localization is smashing, together with Proposition 3.8). Of course, we can make similar definitions in the rational case. Diagrammatically the situation is as follows. [diagram omitted] We write Sp^K_G for the subcategory of A_{F≤K}-complete objects of Sp_G[A^{-1}_{F≥K}], and Sp_{G,⟨K⟩} for the corresponding torsion subcategory.

Lemma 3.20. A non-trivial G-spectrum X belongs to Sp_{G,⟨K⟩} if and only if φ^H(X) ≃ 0 for (H) ≠ (K), as H runs through the conjugacy classes of subgroups of G. In other words, the geometric isotropy of X is exactly K.

Proof. We have already seen that X ∈ Sp_G[A^{-1}_{F≥K}] if and only if φ^H(X) ≃ 0 for all H ∈ F_{≥K}. A similar argument shows that X ∈ Sp_{G,⟨K⟩} if and only if φ^H(X) ≃ 0 for every H in the set {H | H ∈ F_{≥K} or H ∉ F_{≤K}}. This set contains all the closed subgroups of G except those conjugate to K. Finally, note that because X is non-trivial, we must have φ^K(X) ≠ 0 by [Sch18, Proposition 3.3.10].

Remark 3.21. The categories Sp^K_G and Sp_{G,⟨K⟩} appear naturally in the work of Ayala-Mazel-Gee-Rozenblyum [AMGR19] and Balchin-Greenlees [BG20]. In fact, Corollary 3.24 proved below is essentially the identification of the K-th stratum of Sp_G, in the sense of Ayala-Mazel-Gee-Rozenblyum, as the category Fun(BW_GK, Sp). Such a result is also obtained in [AMGR19, Theorem 5.1.26]. Using Lemma 3.20, one sees that the rational category Sp_{G,Q,⟨K⟩} also appears in Greenlees' computation of the localizing tensor ideals of Sp_{G,Q} [Gre19], where it is denoted G-spectra⟨K⟩. Greenlees proves that these are precisely the minimal localizing tensor ideals in Sp_{G,Q}.

Theorem 3.22. For any closed subgroup K ≤ G, the fixed points functor T(M) = M^K induces a symmetric monoidal equivalence Sp^K_G ≃_⊗ Sp^{cofree}_{W_GK}. This is a composite of right adjoints, and so has a left adjoint F, given as the composite of induction followed by localization; explicitly, F(L) = (G₊ ⊗_{N_GK₊} L) ⊗ ẼF_{≥K}.

Proof. We use the compactly generated localization principle, Proposition 2.11, applied to the adjunction F ⊣ T just described; here T(M) = M^K and F(L) = (G₊ ⊗_{N_GK₊} L) ⊗ ẼF_{≥K}. Note that the category Sp^K_G is compactly generated by the object G/K₊ ⊗ ẼF_{≥K}; for simplicity we let E′ denote this object. The category Sp^{cofree}_{W_GK} is compactly generated by (W_GK)₊. Hence, it suffices to show that T(E′) = (W_GK)₊ and that FT(E′) → E′ is an equivalence (footnote 5). The second in fact follows from the first condition, as then FT(E′) ≃ (G₊ ⊗_{N_GK₊} (N_GK/K)₊) ⊗ ẼF_{≥K} ≃ E′, and one checks using the triangle identities that FT(E′) → E′ is indeed an equivalence.
Finally, for the first condition, we argue similarly to the proof of Theorem 3.22 of [AMGR19]: we have a chain of equivalences computing T(E′) = (G/K₊ ⊗ ẼF_{≥K})^K ≃ (W_GK)₊, where the last step uses that (G/K)^K = W_GK as W_GK-spaces. Thus, the assumptions of Proposition 2.11(2) are satisfied, and show that Sp^K_G ≃_⊗ Sp^{cofree}_{W_GK}. By Lemma 2.8 and Proposition 3.11, this is the statement of the following result; the rational version is proved in the same way.

Corollary 3.23. There are symmetric monoidal equivalences of ∞-categories Sp^K_G ≃_⊗ Fun(BW_GK, Sp) and Sp^K_{G,Q} ≃_⊗ Fun(BW_GK, Mod_HQ).

By local duality, or by a similar argument using the cellularization principle (Proposition 2.9(2)), we deduce the following.

Corollary 3.24. There is an equivalence of ∞-categories Sp_{G,⟨K⟩} ≃ Fun(BW_GK, Sp).

If K = {e}, then Sp^K_G is the category of cofree G-spectra, and the above result is just Proposition 3.11. On the other hand, if K = G, then BW_GK ≃ B{e}, the one-point space, and this is just the obvious equivalence between Sp and Fun(B{e}, Sp) that holds more generally for any category.

4. Unipotence

In this section we review the unipotence criterion of Mathew, Naumann, and Noel [MNN17], and give conditions on E that ensure that Loc_E(BX) is unipotent for a connected finite loop space X.

4.1. A unipotence criterion. Throughout this section we fix a presentable symmetric monoidal stable ∞-category (C, ⊗, 1). We recall that there is an adjunction

(4.1)  − ⊗_R 1 : Mod_R ⇄ C : Hom_C(1, −),

where R = End_C(1); the left adjoint is the symmetric monoidal functor given by − ⊗_R 1 and the right adjoint is given by Hom_C(1, −).

Proposition 4.3 ([MNN17]). Suppose there exists a commutative algebra object A ∈ CAlg(C) such that:
(1) A is compact and dualizable in C.
(2) DA is compact and generates C as a localizing subcategory.
(3) The ∞-category Mod_C(A) is generated by A itself, and A is compact in Mod_C(A).
(4) The natural map [display omitted] is an equivalence, where R = End_C(1).
Then C is unipotent. More specifically, the adjunction (4.1) gives rise to a symmetric monoidal equivalence of ∞-categories C ≃_⊗ L_{A_R}(Mod(R)), where A_R = Hom_C(1, A) and the Bousfield localization is taken in the category of R-modules.

Remark 4.4. We now show how to recover the unipotence criterion, Proposition 4.3, from the compactly generated localization principle, Proposition 2.11. In fact, the proof of the unipotence criterion uses [MNN17, Proposition 7.13], so we assume the existence of a commutative algebra object A satisfying the following:
(1) A is compact and dualizable in C.
(2) DA generates C as a localizing subcategory.
(3) A belongs to the thick subcategory generated by the unit.
Assuming these three conditions, we show how to use the compactly generated localization principle to deduce that C ≃_⊗ L_{A_R} Mod_R. We will apply Proposition 2.11 to the adjunction (4.1). By [MNN17, Proposition 2.27], DA_R is a compact generator for L_{A_R} Mod_R (this uses assumption (1)), and F(DA_R) ≃ DA is a compact generator of L_{F(A_R)} C ≃ C by assumption (2). Thus, applying Proposition 2.11, we deduce that there is an equivalence of symmetric monoidal stable ∞-categories C ≃_⊗ L_{A_R} Mod_R, as claimed by the unipotence criterion.

4.2. Unipotence for local systems. We begin by recalling the definition of local systems on a space.

Definition 4.5. Let E be a commutative ring spectrum; then for Y a connected space, we let Loc_E(Y) = Fun(Y, Mod(E)) be the ∞-category of E-valued local systems on Y. This is a presentable symmetric monoidal stable ∞-category, where the monoidal structure is given by the pointwise tensor product.

Remark 4.6. With E and Y as above we also define the spectra C_*(Y; E) = E ⊗ Σ^∞₊Y and C^*(Y; E) = F(Σ^∞₊Y, E). Note that because E is a commutative ring spectrum, so is C^*(Y; E), via the diagonal map. If E = HQ, we will simply write C^*(Y; Q) and C_*(Y; Q). We will usually be interested in the case where E = HQ, but there is no harm in working more generally for now. Let e : * → Y correspond to a choice of base-point for the connected space Y.
By the adjoint functor theorem, the symmetric monoidal pullback functor e^* : Loc_E(Y) → Loc_E(*) ≃ Mod_E has a left and a right adjoint, denoted e_! and e_* respectively (these are given by left and right Kan extension along e, respectively, see [Lur09, Section 4.3.3]). The following is a special case of [HL17, Lemma 4.3.8] (recall that we assume Y connected).

Lemma 4.7. The ∞-category Loc_E(Y) is generated under colimits by e_!(E).

Remark 4.8. Suppose more generally that f : X → Y is a map of connected spaces; then there is a symmetric monoidal pullback functor f^* : Loc_E(Y) → Loc_E(X), which, by the adjoint functor theorem, has a left and a right adjoint, denoted f_! and f_*.

We now introduce the class of spaces we are most interested in.

Definition 4.9. A connected finite loop space is a triple (X, BX, e) where X is a connected finite CW-complex, BX is a pointed space, and e : X → ΩBX is an equivalence. We will often just refer to the finite loop space as X.

To apply the unipotence criterion we need to discuss the relevance of the Eilenberg-Moore spectral sequence. We recall the definition from [MNN17] here.

Definition 4.10. Let Y be a space and E a commutative ring spectrum. We say that the E-based Eilenberg-Moore spectral sequence (EMSS) is relevant for Y if the square of commutative ring spectra obtained by applying C^*(−; E) to the fiber square defining ΩY is a pushout, i.e., the induced map E ⊗_{C^*(Y;E)} E → C^*(ΩY; E) is an equivalence.

Finally, we need the following, which is a special case of a definition in [DGI06, Section 8.11].

Definition 4.11. We say that C^*(Y; E) is a Poincaré duality algebra if there exists an a such that C^*(Y; E) → Σ^a Hom_E(C^*(Y; E), E) is an equivalence.

In the case that E = Hk for a field k, C^*(Y; k) satisfies Poincaré duality if and only if H^*(Y; k) satisfies algebraic Poincaré duality. Now suppose that Y is a finite CW-complex; then Σ^∞₊Y is dualizable, and hence C^*(Y; E) ≃ Hom_E(C_*(Y; E), E) is a dualizable E-module. With these preliminaries in mind, we now have the following, which is strongly inspired by the closely related result [MNN17, Theorem 7.29].

Remark 4.12. We recall that given a commutative ring spectrum R, we can form Bousfield localizations in the category of R-modules, see for example [EKMM97, Chapter VIII]. Given an R-module E, there is always a localization of M ∈ Mod_R, i.e., a map λ : M → L_E M such that λ is an E-equivalence and L_E M is E-local, meaning that F_E(W, L_E M) = 0 for any E-acyclic R-module W. For the following, we apply this in the case R = C^*(BX; E), with E considered as an R-module via the natural augmentation C^*(BX; E) → E.

Theorem 4.13. Let X be a connected finite loop space and E a commutative ring spectrum. Suppose that C^*(X; E) is a Poincaré duality algebra; then Loc_E(BX) is unipotent if and only if the E-based Eilenberg-Moore spectral sequence for BX is relevant. Moreover, if this holds, then there is a symmetric monoidal equivalence of ∞-categories Loc_E(BX) ≃_⊗ L_E Mod_{C^*(BX;E)}, where the Bousfield localization is taken in the category of C^*(BX; E)-modules.

Proof. We first show that if the E-based EMSS for BX is relevant, then Loc_E(BX) is unipotent. To do this, we will apply the unipotence criterion of Mathew-Naumann-Noel given in Proposition 4.3 to the commutative algebra object A = C^*(X; E) in C = Loc_E(BX). Throughout, we let p : BX → * and e : * → BX denote the canonical maps. Note that A = e_*(E), and that e_!(E) ≃ C_*(X; E). Moreover, the functor e_! preserves compact objects (as its right adjoint e^* preserves small colimits), and so we deduce that C_*(X; E) is compact in C.
We also have two further standard identifications [displays omitted]. We now show that the four conditions of Proposition 4.3 are satisfied, which will imply that Loc_E(BX) is unipotent.
(1) Because e_!(E) ≃ C_*(X; E) is compact in C, the assumption that C^*(X; E) is a Poincaré duality algebra implies that A = C^*(X; E) is also compact.
(2) DA ≃ C_*(X; E) ≃ e_!(E), and hence is compact. That C is compactly generated by DA follows from Lemma 4.7.
(3) Consider the adjoint pair (e^*, e_*). The left adjoint is given by forgetting the basepoint, and the right adjoint takes M ∈ Mod_E to C^*(X; M). Using this, one sees that the projection formula holds, i.e., that the canonical map e_*(M) ⊗ N → e_*(M ⊗ e^*(N)) is an equivalence for N ∈ C and M ∈ Mod_E. Because e_! and e_* agree up to a shift, e_* commutes with arbitrary colimits. Finally, e^* is conservative. We can now apply [MNN17, Proposition 5.29], which shows that the adjunction (e^*, e_*) gives rise to an equivalence of ∞-categories Mod_C(A) ≃ Mod_E; this implies the condition because e_*(E) ≃ C^*(X; E) = A.
(4) By assumption the E-based EMSS for BX is relevant, and hence by [MNN17, Proposition 7.28] the natural map of condition (4) is an equivalence.
Conversely, assume that Loc_E(BX) is unipotent. By [MNN17, Corollary 7.19] the natural map is an equivalence, because A is compact in C by (1) above. It follows from [MNN17, Proposition 7.28] that the E-based EMSS for BX is relevant.

5. Rational cochains and algebraic models

We now put the results of the previous sections together and construct an algebraic model for Loc_HQ(BX) for a connected finite loop space X.

Proposition 5.1. Let X be a connected finite loop space; then there is a symmetric monoidal equivalence of ∞-categories Loc_HQ(BX) ≃_⊗ L_HQ Mod_{C^*(BX;Q)}, where the Bousfield localization is taken in the category of C^*(BX; Q)-modules.

Proof. This is a consequence of Theorem 4.13 in the case E = HQ. Indeed, π_*C^*(X; Q) ≅ H^{−*}(X; Q) ≅ Λ_Q(x_1, ..., x_r), which in particular satisfies algebraic Poincaré duality, and hence C^*(X; Q) is a Poincaré duality algebra. Thus, it suffices to show that the Eilenberg-Moore spectral sequence for BX is relevant; but because BX is simply connected and we work over Q, [Dwy74] applies to show this.

Applying Proposition 2.23 we deduce the following.

Corollary 5.2. Let X be a connected finite loop space; then there is a symmetric monoidal equivalence of ∞-categories Loc_HQ(BX) ≃_⊗ Mod^{I-comp}_{C^*(BX;Q)}.

In order to identify the right-hand side of this equivalence, we begin by identifying Mod_{C^*(BX;Q)} with dg-modules over the graded ring H^*(BX). In order to do this, we first need a few words on free E∞-algebras. In particular, we recall that the free E∞-ring on a generator t is defined (as a spectrum) by S{t} = ⊕_{n≥0} Σ^∞₊BΣ_n, and is characterized by the property that Map_{CAlg}(S{t}, R) ≃ Ω^∞R, naturally in R. In particular, given a ring spectrum R with a class x ∈ π_0R, we obtain a map of commutative algebras S{t} → R sending the class t ∈ π_0S{t} to x. More generally, if A is an E∞-ring spectrum, the free E∞-A-algebra on a generator t is defined as A{t} = ⊕_{n≥0}(A^{⊗_A n})_{hΣ_n}, where the Σ_n-action is by permutation of the factors. If we wish t to have degree d, then we can define A{t} = Sym^*(Σ^d A). Iterating this procedure, we can define A{t_1, ..., t_n} as (A{t_1, ..., t_{n−1}}){t_n}. If the degrees of the t_i are all even, then there is a canonical map A{t_1, ..., t_n} → H((π_*A)[t_1, ..., t_n]), which is not an equivalence in general. Here H is the generalized Eilenberg-MacLane spectrum functor, which is right inverse to the functor π_* : Sp → GrAb.
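As the next paragraph notes, the rational case is special; the following computation (added here, using the standard vanishing of the rational homology of the symmetric groups) spells out why for a single generator t in degree 0:

```latex
\pi_*\,\mathbb{S}\{t\} \cong \bigoplus_{n \geq 0} \pi^{s}_{*}(B\Sigma_{n+}),
\qquad
\pi_*\,H\mathbb{Q}\{t\} \cong \bigoplus_{n \geq 0} H_*(B\Sigma_n; \mathbb{Q})
\cong \mathbb{Q}[t],
```

since H_*(BΣ_n; Q) ≅ Q is concentrated in degree 0 for every n. For generators in even degrees the permutation action introduces no signs, so the same computation applies.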
However, in the case that A = HQ this canonical map is an equivalence, because the higher rational homology of the symmetric groups is trivial. We deduce the following.

Corollary 5.3. If the degrees of the t_i are all even, then the canonical map HQ{t_1, ..., t_n} → H(Q[t_1, ..., t_n]) is an equivalence of E∞-ring spectra.

This is one part of the input into the following proposition.

Proposition 5.4. There is an equivalence C^*(BX; Q) ≃ H(H^*(BX)) of E∞-ring spectra, and hence a symmetric monoidal equivalence of ∞-categories θ : Mod_{C^*(BX;Q)} ≃_⊗ D_{H^*(BX)}.

Proof. Since H^*(BX; Q) is a polynomial algebra on even generators, choosing representatives of the generators gives a map HQ{t_1, ..., t_r} → C^*(BX; Q) of E∞-ring spectra, which is clearly an equivalence. As such one gets a symmetric monoidal equivalence of ∞-categories θ : Mod_{C^*(BX;Q)} ≃_⊗ Mod_{H(H^*(BX))} ≃_⊗ D_{H^*(BX)}.

Because θ is symmetric monoidal it preserves the tensor unit, i.e., θ(C^*(BX; Q)) ≃ H^*(BX). It follows (again using that θ is symmetric monoidal) that θ(K(I)) ≃ K(I), and one deduces the following.

Corollary 5.5. The equivalence θ restricts to a symmetric monoidal equivalence of ∞-categories θ : Mod^{I-comp}_{C^*(BX;Q)} ≃_⊗ D^{I-comp}_{H^*(BX)}.

We now come to our main theorem.

Theorem 5.6. Let X be a connected finite loop space; then there is a symmetric monoidal equivalence of ∞-categories Loc_HQ(BX) ≃_⊗ D(Mod^{I-comp}_{H^*(BX)}).

Proof. Combine Corollaries 5.2 and 5.5 and Theorem 2.20.

Using Corollary 3.23 we deduce the following result.

Corollary 5.7. Let G be a compact Lie group and K a closed normal subgroup such that the Weyl group W_GK is a connected compact Lie group. There is an equivalence of symmetric monoidal ∞-categories Sp^K_{G,Q} ≃_⊗ D(Mod^{I-comp}_{H^*(BW_GK)}). Indeed, in the following diagram each of the three outer categories on the left is equivalent to the corresponding category on the right [diagram omitted] (note that the middle categories are definitely not equivalent, however).

Using the algebraic models constructed in Theorems 2.16 and 2.22 we deduce the following.

Corollary 5.8. Let G be a compact Lie group and K a closed normal subgroup such that the Weyl group W_GK is a connected compact Lie group. (1) There is an equivalence of symmetric monoidal ∞-categories [display omitted], where the right-hand side denotes the essential image of the fully faithful functor j_* : D_qc(U) → D_qc(X).

6. An Adams spectral sequence

In this final section we construct an Adams spectral sequence in the category C = Loc_HQ(BX) when X is a connected finite loop space. We once again fix a graded commutative Noetherian ring A. We will denote the abelian category Mod^{I-comp}_A of L_0^I-complete dg-A-modules by 𝒜. As we will see, this category has enough projectives, and so we can construct an Ext functor, denoted Êxt, in this category. We also have a notion of homotopy groups in C.

Definition 6.1. For M ∈ C, let π^C_*(M) = π_* Hom_C(1, M).

We also recall that H^*(X) ≅ Λ_Q(x_1, ..., x_n); we say that the rank of a finite connected loop space is the integer n. The spectral sequence then takes the following form.

Theorem 6.2. Let X be a finite connected loop space; then for M, N ∈ C there is a natural, conditionally and strongly convergent, spectral sequence of H^*(BX)-modules with E_2^{s,t} ≅ Êxt^{s,t}_{H^*(BX)}(π^C_*(M), π^C_*(N)) converging to π^C_{t−s}Hom_C(M, N). Moreover, E_2^{s,t} = 0 when s > rank(X).

As we will see, working with ring spectra makes the construction of such a spectral sequence very simple; it is just an example of the universal coefficient spectral sequence constructed in [EKMM97, Theorem IV.4.1]. We first observe that 𝒜 has enough projectives; these are the I-adic completions of free modules (also known as pro-free modules). Since the graded ring H^*(BX) is polynomial on n generators, every module M admits a free resolution 0 → F_n → ··· → F_2 → F_1 → F_0 → M. A simple inductive argument on the short exact sequences associated to the resolution shows that 0 → L_0^I(F_n) → ··· → L_0^I(F_2) → L_0^I(F_1) → L_0^I(F_0) → L_0^I(M) ≅ M is a projective resolution of M in 𝒜. Thus, 𝒜 has projective dimension n. See also [Hov04, Proposition 1.10].
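A worked instance of the main theorem and of Theorem 6.2 (assembled here for illustration; the paper does not carry this example through): take X = SU(2) ≃ S^3, so that H^*(X; Q) ≅ Λ_Q(x_3) is a Poincaré duality algebra, BX ≃ BSU(2) is simply connected (so the rational EMSS is relevant and Theorem 4.13 applies), H^*(BX; Q) ≅ Q[y_4], and rank(X) = 1. Then

```latex
\mathrm{Loc}_{H\mathbb{Q}}(BSU(2)) \;\simeq_{\otimes}\;
\mathcal{D}\bigl(\mathrm{Mod}^{(y_4)\text{-comp}}_{\mathbb{Q}[y_4]}\bigr),
\qquad
E_2^{s,t} = 0 \ \text{for}\ s > 1,
```

so the Adams spectral sequence of Theorem 6.2 is concentrated on two lines and degenerates into short exact sequences with a single Êxt^1 correction term.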
Because L_0^I is left adjoint to the inclusion functor 𝒜 → Mod_{H^*(BX)}, we deduce the following; see also [PW20, Proposition 5.6] or [Hov04, Theorem 1.11].

Proposition 6.3. Let Êxt denote the Ext-groups in 𝒜; then for P, S ∈ 𝒜 we have Êxt_{H^*(BX)}(P, S) ≅ Ext_{H^*(BX)}(P, S).

We also have the following, which is proved identically to [BF15, Corollary 3.14].

Lemma 6.4. Suppose A ∈ Mod_{C^*(BX;Q)}; then A ∈ Mod^{I-comp}_{C^*(BX;Q)} if and only if π_*A ∈ 𝒜.

Combining the previous two results we deduce the following. [statement omitted] We now construct the Adams spectral sequence.

Proof of Theorem 6.2. We recall that there is an equivalence of categories C ≃_⊗ Mod^{I-comp}_{C^*(BX;Q)}, given by sending M ∈ C to Hom_C(1, M) ∈ Mod^{I-comp}_{C^*(BX;Q)}. Under this equivalence we have

π_{t−s}(Hom_C(M, N)) ≅ π_{t−s} Hom_{Mod^{I-comp}_{C^*(BX;Q)}}(Hom_C(1, M), Hom_C(1, N)) ≅ π_{t−s} Hom_{C^*(BX;Q)}(Hom_C(1, M), Hom_C(1, N)),

where the last step uses that Mod^{I-comp}_{C^*(BX;Q)} → Mod_{C^*(BX;Q)} is fully faithful. The universal coefficient spectral sequence [EKMM97, Theorem IV.4.1] then takes the form

E_2^{s,t} ≅ Ext^{s,t}_{H^*(BX)}(π^C_*M, π^C_*N) ⇒ π_{t−s} Hom_C(M, N).

In general this spectral sequence is only conditionally convergent, but in this case it is strongly convergent because E_2^{s,t} = 0 for s > n, since H^*(BX) has projective dimension n. Along with Proposition 6.3, this proves the theorem.

Translating back into equivariant homotopy, we deduce the following.

Corollary 6.6. Suppose G is a compact Lie group, and K a closed subgroup such that the Weyl group W_GK is connected. For X, Y ∈ Sp^K_{G,Q}, there is a natural, conditionally and strongly convergent, spectral sequence of H^*(B(W_GK))-modules with E_2^{s,t} ≅ Ext^{s,t}_{H^*(B(W_GK))}(π^{W_GK}_*(X^K), π^{W_GK}_*(Y^K)). Moreover, E_2^{s,t} = 0 when s > dim(W_GK). When K = {e} is the trivial group we recover the connected case of [PW20, Theorem 10.6].

Using that there is an equivalence Sp_{G,⟨K⟩,Q} ≃ Mod^{I-tors}_{C^*(BX;Q)}, a similar argument gives the following.

Proposition 6.7. Suppose G is a compact Lie group, and K a closed subgroup such that the Weyl group W_GK is connected. For X, Y ∈ Sp_{G,⟨K⟩,Q}, there is a natural, conditionally and strongly convergent, spectral sequence of H^*(B(W_GK))-modules with E_2^{s,t} ≅ Ext^{s,t}_{H^*(B(W_GK))}(π^{W_GK}_*(Φ^K X), π^{W_GK}_*(Φ^K Y)). Moreover, E_2^{s,t} = 0 when s > dim(W_GK). When K = {e} is the trivial group we recover the spectral sequence of Greenlees and Shipley [GS11, Theorem 6.1].

Appendix A. Model categories and ∞-categories

Throughout we work with ∞-categories as developed in [Lur17]. Since much of the existing work on rational models has used model categories, here we present a very short summary of the relationship between model categories and ∞-categories. If C is a symmetric monoidal model category, then its underlying ∞-category is a symmetric monoidal ∞-category [Lur17, Example 4.1.3.6]. Moreover, if F is a symmetric monoidal left Quillen functor, then the induced functor of underlying ∞-categories is symmetric monoidal, and because its right adjoint G exists by Proposition A.3, G is lax symmetric monoidal by [Lur17, Corollary 7.3.2.7].
Surface exciton polariton in monoclinic HfO₂: an electron energy-loss spectroscopy study

Surface exciton polaritons (SEPs) were mostly expected in materials displaying sharp excitonic absorptions. Using electron energy-loss spectroscopy with a spatial resolution of 0.2-2 nm and associated calculations, we demonstrated SEPs arising from rather weak excitonic oscillator strengths (broad interband transitions) in insulating, monoclinic HfO₂ above its optical band gap. Broad interband transitions exist in many semiconductors and insulators above the band gap, and our work could stimulate future explorations of SEPs in a wide spectrum of materials and corresponding applications in optics.

In transmission through the bulk, the spectral feature of the interband-transition excitation overwhelms the weak SEP excitation at ∼7.5 eV. The unambiguous observation of the SEP in bulk HfO₂ can only be accomplished at grazing incidence of the electron probe along the specimen edge just outside the bulk, the so-called aloof geometry [13,14,17,18]. In such a probe-sample geometry, volume electronic excitations (e.g. the interband-transition excitation here) are much reduced, propitious for the predominant observation of surface excitations such as SEPs [13,14,17,18]. Spatially resolved STEM-EELS investigations were also performed on monoclinic HfO₂ films (5 nm) grown on GaAs(001) [19], showing good agreement with the corresponding calculations. Monoclinic HfO₂, with its high static dielectric constant, is a technically important material for high-κ dielectric applications [19]-[24], and interest in spatially resolved investigations of its electronic excitations has been strong [20]-[22]. Excitations of SEPs in monoclinic HfO₂ have, however, never been documented or discussed in previous reports [20]-[22].

Experiment

Two types of stoichiometric, monoclinic HfO₂ materials were investigated by STEM-EELS in this work: HfO₂ bulk ceramics and HfO₂ films (5 nm) grown on GaAs(001) substrates. The material synthesis details were published elsewhere [19,25]. The study of the bulk ceramics tackles the intrinsic electronic excitations of HfO₂, and HfO₂/GaAs(001) provides a more practical sample geometry for performing aloof STEM-EELS to further unveil the SEP physics in HfO₂. Specimens for the STEM-EELS investigations were prepared by standard tripod polishing, followed by a quick ion milling at 3 kV for a few tens of seconds. The specimens were then subjected to careful plasma cleaning in order to remove carbon contamination before the STEM-EELS experiments. All STEM-EELS spectra were acquired on an FEI field-emission STEM/TEM, Tecnai F20, operated at 200 kV and equipped with an electron monochromator. Throughout the STEM-EELS experiments, respective spectrum-collection and probe-convergence semi-angles of 4.9 and 13 mrad were used. Both the deconvolution of single-scattering STEM-EELS spectra from the raw results and the subsequent KKA were conducted with the DigitalMicrograph EELS package written on the basis of [15].
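The deconvolution step described above can be sketched numerically as follows. This is a minimal illustration of the standard Fourier-log method (after Egerton), not the actual DigitalMicrograph routine based on [15]; the synthetic spectrum, peak positions, and widths are invented for the demonstration.

```python
import numpy as np

def fourier_log_deconvolve(raw, zlp):
    """Recover the single-scattering distribution with the Fourier-log
    formula S(nu) = I0 * ln(J(nu) / Z(nu)) (Egerton), where I0 is the
    integrated zero-loss intensity. Idealized: real, noisy data require
    damping of the high-frequency part before the inverse transform."""
    I0 = zlp.sum()
    J, Z = np.fft.fft(raw), np.fft.fft(zlp)
    return np.fft.ifft(I0 * np.log(J / Z)).real

# Synthetic demonstration with invented numbers.
E = np.linspace(0.0, 50.0, 4096)                  # energy-loss axis (eV)
zlp = np.exp(-(E - 1.0)**2 / (2 * 0.1**2))        # narrow zero-loss peak
single = 0.05 * np.exp(-(E - 15.9)**2 / 8.0)      # mock plasmon at 15.9 eV
I0 = zlp.sum()
raw = np.fft.ifft(np.fft.fft(zlp) *
                  np.exp(np.fft.fft(single) / I0)).real  # plural scattering
rec = fourier_log_deconvolve(raw, zlp)
print(f"recovered plasmon at ~{E[np.argmax(rec)]:.1f} eV")  # ~15.9 eV
```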
Results and discussion

Figure 1(a) shows the STEM-EELS spectra acquired on bulk HfO₂ with the incident electron probe moved from the material interior to vacuum in 2-nm probe steps and normalized among each other with reference to the zero-loss peak (ZLP) intensity. Using an electron monochromator, the probe size is 2 nm and the energy resolution is 0.22 eV. The black curve indicated by the solid arrow (figure 1(a)) is the spectrum taken in vacuum, right at grazing incidence to the sample edge. Positioning the electron probe in the bulk (blue, figure 1(a)), the distinct spectral feature at 15.9 eV is characterized as the volume-plasmon excitation in HfO₂, and this observed value is in good agreement with the literature (∼15.7-16 eV) [20,26]. The two small peaks above the volume plasmon, at 18 and 19.8 eV, arise from high-energy interband transitions [20], whereas the origin of the broad intensity maximum from ∼7 to ∼11 eV was not clearly documented [20,26]. Considering that the macroscopic physics of STEM-EELS can be understood in the framework of the dielectric response of materials [10], we thus derived the frequency(ω)-dependent dielectric function of HfO₂ (figure 1(b)) by performing KKA on the blue spectrum in figure 1(a), with ε(ω) normalized to the refractive index of 2.1 at ω → 0 [20].

Figure 1 caption (excerpt): The intensities of the spectra were normalized to each other with reference to the ZLP. The color circles (inset) denote probe locations, and the corresponding spectra are shown in the same colors. Solid arrow: the spectrum acquired at grazing incidence to the sample edge. Dashed arrow: the intensity decrease of the broad interband-transition excitation at ∼7-11 eV as a function of probe position toward vacuum. (b) The complex dielectric function of HfO₂ derived from the blue spectrum in (a). The black, dark gray, and gray spectra are blow-ups of those in (a), with ZLP tail intensities below ∼5 eV ignored for clarity.

It should be noted that each experimental spectrum in figure 1(a) integrates electronic contributions from a large number of reciprocal-space vectors due to the sizeable probe-convergence semi-angle of 13 mrad. This convergence semi-angle results in transmission/reflection discs of 13 mrad in radius and covers many Bragg reflections in HfO₂ (for example, ∼4.6/∼9.6 mrad for the low-index, symmetry-allowed (100)/(002) reflections, respectively). Specific anisotropic electronic contributions ascribed to the monoclinic symmetry of HfO₂ would then be averaged out throughout all spectra in figure 1(a). The thus-determined ε(ω) in figure 1(b) can therefore be empirically regarded as an isotropic counterpart and is essential for the following theoretical derivations of SEPs and of STEM-EELS excitations (figures 2-4), both of which are based on isotropic considerations [13,27].

Figure 2 caption (excerpt): ..., the loss function for the L⁻ mode (red) at the given k_r and k_i of HfO₂ (thickness, 30 nm), and that for an infinitely thick HfO₂ in the large-k limit (green), with featureless characteristics instead. The loss probability of the infinitely thick HfO₂ was normalized to that of the L⁻ mode at the lower-energy end in the inset.

In figure 1(b), ε₁ passes through zero at ∼15.9 eV accompanied by a decrease in ε₂, leading to a maximum in the volume loss function ∝ Im{−1/ε(ω)} at that energy. Such a feature is characteristic of volume-plasmon excitations [10], and the derived ε₁ and ε₂ (figure 1(b)) thus faithfully capture the electronic characteristics of HfO₂. Further inspection of the absorption features in figure 1(b) (ε₂) indicates an interband transition (∼6.2 eV) right above the band-gap onset (∼5.1 eV) [16] and a weaker transverse oscillator strength at ∼11 eV. The broad STEM-EELS feature from ∼7 to ∼11 eV (blue, figure 1(a)) is then assigned to the interband-transition excitation.
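For readers who want to reproduce the KKA step, here is a crude numerical sketch of the Kramers-Kronig route from a loss function Im{−1/ε} back to ε₁ and ε₂. It assumes the loss function has already been put on an absolute scale with the refractive-index sum rule, and its principal-value handling is deliberately simple; the production analysis based on [15] is more careful and includes surface-loss corrections. The Drude-type test oscillator is invented.

```python
import numpy as np

def kramers_kronig(E, f):
    """From f(E) = Im{-1/eps(E)} recover eps(E) via
    Re{1/eps(E)} = 1 - (2/pi) P int f(E') E' / (E'^2 - E^2) dE'.
    (In practice f is first scaled with the index sum rule
    int f(E)/E dE = (pi/2)(1 - 1/n^2), n = 2.1 for HfO2.)"""
    re_inv = np.empty_like(f)
    for i, Ei in enumerate(E):
        g = f * E / (E**2 - Ei**2 + 1e-30)
        g[i] = 0.0                        # crude principal-value treatment
        re_inv[i] = 1.0 - (2.0 / np.pi) * np.trapz(g, E)
    return 1.0 / (re_inv - 1j * f)        # eps = eps1 + i*eps2

# Drude-type test: plasmon at 15.9 eV, invented damping of 1.5 eV.
E = np.linspace(0.05, 80.0, 1600)
eps_true = 1.0 - 15.9**2 / (E**2 + 1.5j * E)
f = (-1.0 / eps_true).imag                # what EELS measures, after scaling
eps_rec = kramers_kronig(E, f)
i = np.argmin(abs(E - 10.0))
print(eps_true[i], eps_rec[i])            # agree to within a few percent
```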
Moving the electron probe from the bulk (blue, figure 1(a)) to grazing incidence near the edge (black, figure 1(a)), the predominant spectral weight at 15.9 eV (volume plasmon) gradually red-shifts to the SPP at ∼13.4 eV [20] (ε = −0.83 + i1.74, figure 1(b)), and the intensities of the interband-transition excitations (∼7-11, 18 and 19.8 eV) also weaken accordingly. Eventually the SPP becomes the predominant feature, with a broad shoulder from ∼7.5 eV, which is shown more clearly in figure 1(b) (black; ignoring intensities ascribed to the ZLP tail below ∼5 eV). The intensity decrease of the interband-transition excitation at ∼7-11 eV is most visible for the 10-eV hump (dashed arrow, figure 1(a)). It is noted that the evanescent wave fields of surface excitations extend well into vacuum, while volume excitations are relatively more confined within the bulk of materials [13,14,17,18]. With the probe positioned toward vacuum (figure 1(a)), the electromagnetic coupling between the probe and the material thus favors the SPP, leading to diminished contributions from the volume-related electronic excitations. With further increases in the distance from the probe to the sample edge (e.g. dark gray, gray, etc; figures 1(a) and (b)), significant intensity decreases at ∼13.4 eV can be observed that are characteristic of the exponential decay of the SPP wave fields away from the material surface [13,14].

Figure 3 caption (excerpt): (a) The green, red, orange and purple spectra were aligned and normalized to the ZLP, then ZLP-deconvoluted and vertically shifted for clarity. The blue spectrum was taken from figure 1(a) for convenience of comparison. The purple spectrum was acquired at grazing incidence, and the green one was taken at the center of the film. The red and orange spectra were recorded at ∼0.6 and ∼1.2 nm from the green probe position (center of the film), respectively. (b) Theoretical counterparts of (a), with the probe-sample geometry used for the calculations depicted in the inset, a schematic side view of that in (a). Inset: the red electron trajectory gives the red calculated spectrum. A, SPP in HfO₂; B, interface plasmon; C, interband-transition excitations in GaAs and HfO₂; D, CR from GaAs (see also figure 4).

It is, however, surprising that the broad shoulder at ∼7.5-10.5 eV (figure 1(b)) persists and evolves into a prominent spectral onset at ∼7.5 eV (ε = 0.95 + i6.28; dark gray and gray in figure 1(b)). This clearly indicates the presence of evanescent surface wave fields around ∼7.5 eV, which has not been documented before [20]-[22]. Compared to the SPP (∼13.4 eV, figure 1(b)), the slower intensity decay at ∼7.5 eV is consistent with the smaller wave-field decay constant, ∼ω/υ (υ, the velocity of the incident electrons, ∼0.7c at 200 kV; c, the speed of light) [13,17,18], characteristic of surface excitations with a lower eigen-frequency. Considering the circumstance of ε₂ > ε₁ > 0 at ∼7.5 eV and the close correlation with the interband-transition absorption at ∼6.2 eV (figure 1(b)), this spectral onset at ∼7.5 eV raises the strong possibility that its physical origin is an SEP [13], which is further examined in figure 2. In addition to the satisfaction of ε₂ ≫ |ε₁| ≳ 0 and ε₂ > ε₁ > 0, a specific requirement for SEP excitation is a small magnitude of k_i, which decreases with decreasing material thickness (d) [13]. The excitation of SEPs thus also requires a small material thickness, ultimately determined by the respective magnitudes of ε₁ and ε₂ in the material [13].
Using ε = 0.95 + i6.28 at ∼7.5 eV and the isotropic SEP theory [13], we have calculated k_r and k_i of the SEP in HfO₂ as a function of d (figure 2), with reference to the asymptotic limits k_r∞ and k_i∞ for an infinitely thick HfO₂ (k_r∞ = 0.98k₀ with k₀ = ω/c; k_i∞ ≈ 10⁻¹k_r∞), where the SEP is not favorable due to the large value of k_i∞. For SEP excitation, k_i∞ (k_i) needs to be small, of the order of 10⁻²k_r∞ (k_r) [13], and smaller k_i∞ (k_i) results in more prominent SEP features [7,8,13]. The interpretation of the spectral onset at ∼7.5 eV (figure 1) as an SEP would then suggest a small thickness of the HfO₂ ceramics. A thickness estimation using the EELS log-ratio method [15] on the blue spectrum in figure 1 yields a sample thickness of ∼30 nm along the incident-probe direction. At this small thickness, the wave fields at the two surfaces of the sample couple with each other, giving rise to symmetric (L⁺) and antisymmetric (L⁻) surface modes according to the charge-density symmetries across the material (figure 2) [7]-[13]. The implicit dispersion relations of these coupled modes [equations omitted] are expressed in terms of ε₀, the dielectric function of the surrounding free space (vacuum throughout this work, ε₀ = 1), and of α₀ and α, the wave-field decay constants normal to the surface towards vacuum and HfO₂, respectively. In figure 2(a), the calculated k_r, relative to k_r∞, indicates whether a given SEP mode is located to the left or the right of the light line k₀ (gray dot-dashed line) [7]-[9], [13]. The calculated k_i in figure 2(b) further indicates whether the SEP mode can exist in thin HfO₂ (30 nm): the existence of the L⁻-SEP mode is unambiguous (k_i ≈ 0.059k_r∞; see the profile of the corresponding loss function near ∼7.5 eV, inset), whereas that of the L⁺-SEP is not possible because k_i > k_i∞ (meaningless, and featureless in the calculated loss function, thus not shown). In figure 2(b) (inset, green), the featureless loss probability for an infinitely thick HfO₂ in the large-k limit, ∼Im{−1/(ε(ω)+1)}, is nonetheless intentionally shown in this spectral regime to demonstrate that the SEP excitation in HfO₂ does require a small material thickness. The spectral onset at ∼7.5 eV in HfO₂ as a result of SEP excitation is hence conclusive, and it cannot be observed clearly without positioning the electron probe away from the sample edge so as to disentangle the contributions from the broad intensities of the nearby interband-transition excitation (∼7-11 eV). Although the thin HfO₂ specimen (figure 1) exhibits both SEP and SPP excitations, which might lead to spurious ε₁(ω) and ε₂(ω) structures upon KKA [28], the very weak intensity of the SEP (figure 1(b)) and the broad, relatively weak excitation of the SPP (blue, figure 1(a)) actually have little effect on ε(ω).
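The quoted semi-infinite asymptotics can be checked in a few lines from the retarded surface-mode dispersion k = k₀√(ε/(ε+1)) of a single vacuum interface (a standard relation; the full thin-slab L± calculation behind figure 2 additionally couples the two surfaces and is not reproduced here):

```python
import numpy as np

eps = 0.95 + 6.28j                 # HfO2 at ~7.5 eV, from figure 1(b)
k = np.sqrt(eps / (eps + 1.0))     # in units of k0 = omega/c
print(f"k_r_inf = {k.real:.2f} k0")                    # ~0.98 k0, as quoted
print(f"k_i_inf = {k.imag:.3f} k0 "
      f"(= {k.imag / k.real:.3f} k_r_inf)")            # ~1e-1 k_r_inf
```

The resulting ratio k_i/k_r ≈ 0.08 is too large for a well-defined SEP at a single surface, which is why the finite 30-nm thickness, which lowers k_i of the L⁻ branch, matters.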
Now, we further investigate the SEP excitation in HfO₂ with a well-defined material lateral dimension perpendicular to the interface, namely 5-nm films grown on GaAs(001) substrates (inset, figure 3(a)) [19,25]. The vanishing k_i of the L⁺- and L⁻-SEPs in a 5-nm HfO₂ slab (figure 2(b)) suggests favorable conditions for their excitation. However, no noticeable SEP excitation at ∼7.5 eV can be observed by spatially resolved probing of HfO₂/GaAs at grazing incidence (purple, figure 3(a)) or at positions further away from the HfO₂ surface (spectra similar to the purple one but weaker, thus not shown), even using an electron probe with a better spatial resolution of 0.2 nm (energy resolution, 0.66 eV). Instead, broad intensities with an onset at ∼8-10 eV (C) and new spectral features at B (∼5.7 eV) and D (∼3.7 eV) appear in figure 3(a), in addition to the SPP in HfO₂ (A, ∼13.4 eV). Probing HfO₂/GaAs from the HfO₂ film center (green, figure 3(a)) to grazing incidence (purple) leads to a predominance of the SPP over volume excitations, as revealed in the bulk material (figure 1). Moreover, the volume- and surface-plasmon peak positions observed in the HfO₂ films agree well with those observed in the bulk (the blue spectrum of figure 1(a) is incorporated into figure 3(a) for comparison), indicating that the thin HfO₂ films grown on GaAs possess electronic properties similar to the bulk. This similarity is crucial for the further exploration of the origins of B, C and D, and of the absence of the SEP, on the basis of the macroscopic dielectric theory of STEM-EELS excitations (figures 3(b) and 4) [27]. Figure 3(b) shows the STEM-EELS spectra calculated per unit path length along the electron trajectory; the optical constants of GaAs were taken from [29]. The calculations were performed using probe-sample geometries identical to those in the experiments (schematic inset, side view of that in figure 3(a)) and integrations of k ≈ k_r (k_i vanishing in thin HfO₂, thus ignored), out of the paper plane, from 0 to 1 nm⁻¹. Integrations up to larger k make no visible changes to figure 3(b). The agreement between figures 3(b) and (a) is remarkably good, reproducing the experimentally observed SPP predominance (peak A, from the green to the purple probe position), the peak-B onset, and the absence of the SEP (∼7.5 eV) accompanied by intensities at C, as well as the broad feature below the optical band gap (D). The calculated energy-loss probability maps in figures 4(a)-(c) visualize their respective origins. Figure 4(a) exhibits the map calculated for a bare 5-nm HfO₂ film, symmetrically bounded by vacuum, with the electron probe passing along one of the two surfaces at grazing incidence. The null loss probability below the band gap has been ignored to enhance the figure contrast; otherwise, the dispersive spectral details (A and F, figure 4(a)) would become obscured by the associated change in the logarithmic color scale. Figure 4(b) shows the map calculated for the actual material system investigated in figure 3(a), and figure 4(c) represents that for the pure GaAs substrate, i.e. equivalent to figure 4(b) without the HfO₂ layer, as a control calculation. Comparing figures 4(a)-(c), the sharp intensity at B unambiguously arises from the HfO₂/GaAs interface plasmon [20], which is absent in bare HfO₂ (figure 4(a)) and observed as an intensity dip at the given energy in GaAs (dashed arrow, figure 4(c)). The wave-field delocalization of peak B from the interface, estimated as ∼υ/ω ≈ 24 nm [17,18], gives rise to its excitation at a few nanometers from the interface (figure 3). Peak C, excited at ∼8-10 eV (figures 4(b) and 3), is primarily attributed to delocalized excitations of interband transitions [30,31] in GaAs and HfO₂. In contrast, peak E, at nearly the same energy in figure 4(c), shows mixed contributions from SPP and interband-transition excitations in GaAs [30]. Owing to the significant ε₁ ≈ 14 of GaAs (ω → 0) [29], the intense dispersive features below 4.4 eV (D, figures 4(b) and (c)) signify Cherenkov radiation (CR) in GaAs, which is excited when (υ/c)²·ε₁ > 1 and is proportional to the material lateral dimension perpendicular to the surface as a result of its volume-excitation character [30,32]; these features account for the broad maximum D in figure 3. It should also be mentioned that surface excitations actually make negative contributions to volume excitations when the material investigated is thin enough in its lateral dimension [18,30]. The CR excitation in HfO₂ is therefore negligible here (though the excitation condition is satisfied) due to the small material lateral dimension of 5 nm (effectively infinite for GaAs, by contrast).
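Two of the numbers used in this discussion are easy to verify from constants alone (200 kV electrons with υ ≈ 0.7c, as stated above):

```python
hbar, c, q = 1.0546e-34, 2.998e8, 1.602e-19   # SI values
v = 0.7 * c                                    # 200 kV beam velocity

# Delocalization length v/omega of the evanescent surface wave fields.
for E_eV, label in [(5.7, "interface plasmon B"),
                    (7.5, "SEP"),
                    (13.4, "SPP")]:
    omega = E_eV * q / hbar
    print(f"{label}: v/omega = {v / omega * 1e9:.1f} nm")   # B gives ~24 nm

# Cherenkov condition (v/c)^2 * eps1 > 1, with eps1(GaAs) ~ 14 at low energy.
print("(v/c)^2 * eps1 =", round((v / c) ** 2 * 14, 1), "> 1, so CR is excited")
```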
Most importantly, the dispersion of peak F (dashed line, figure 4(a)), similar to that of the SPP in bare HfO₂ (dashed line, peak A), is a strong signature of its surface character [10]-[12], due to the associated L⁺- and L⁻-SEP excitations, which are otherwise energetically indistinguishable given the broad character of F (figure 4(a)). In the actual material system with GaAs (figure 4(b)), the SEP in HfO₂ is, however, effectively damped out, as observed by an intensity dip at F (dashed arrow). It has been demonstrated that the presence of an asymmetrically bound absorbing material (ε₂ ≠ 0) deteriorates SEP resonances, leading to their negligible excitation [8]. GaAs is exactly characterized by ε₂ ≠ 0 throughout the spectral regime of this work [29]. The calculated large-k (red) and small-k (blue) spectra in figure 4(d) display SEP (F, ∼7.5 eV), SPP (A), and volume-plasmon (∼15.9 eV) excitations in HfO₂, and obviously resemble the experimental spectra acquired in bulk HfO₂ with the probe at grazing incidence (black, figure 1(b)) and several nanometers away from the bulk edge (dark gray and gray, figure 1(b)), respectively. This resemblance is not surprising, because a larger integrated k corresponds to a smaller impact parameter (the probe-to-sample distance) in real space for STEM-EELS probing [27]. The existence of the SEP in HfO₂ upon the interband-transition absorption at ∼6.2 eV, above the band gap, with ε₂ > ε₁ > 0, is thus affirmed by all of these consistencies (figures 1-4).

Conclusions

Using STEM-EELS with an ultimate spatial resolution of 0.2-2 nm and corresponding spectral calculations, we have firmly established the existence of an SEP (∼7.5 eV) in insulating HfO₂ upon the weak excitonic absorption at ∼6.2 eV, above the optical band gap (∼5.1 eV). The relaxed SEP-excitation condition of ε₂ > ε₁ > 0 is satisfied at ∼7.5 eV. Interband transitions along with ε₂ > ε₁ > 0 can be found in many semiconductors and insulators above the band gap, and this work could stimulate future interest in SEPs in various materials, where SEP excitations may find unexpected optics applications via manipulation of their surface wave fields, analogous to SPPs for plasmonics [14]. More recently, we have become aware of some early STEM-EELS investigations of insulating MgO smoke cubes, in which surface resonances closely correlated with interband-transition onsets (ε₂ > ε₁ > 0) were also reported above the optical band gap [33]-[36]. Although the nature of these surface excitations in MgO was not clearly identified at the time [33]-[36], they bear a strong resemblance to the SEPs elucidated here in monoclinic HfO₂. In addition to the potential optical applications proposed above, SEPs represent fertile ground for revisiting surface excitations in a wide spectrum of materials.
H₂-OPTIMAL DISTURBANCE REJECTION BY MEASUREMENT FEEDBACK: THE SINGULAR CASE

Abstract: This work concerns a new methodology to solve the H₂-optimal disturbance rejection problem by measurement feedback in the singular case: namely, when the plant has no feedthrough terms from the control input and the disturbance input to the controlled output and the measured output, respectively. A necessary and sufficient condition for problem solvability is expressed as the inclusion of two subspaces, a controlled-invariant subspace and a conditioned-invariant subspace. Such subspaces are directly derived from the Hamiltonian systems associated to the H₂-optimal control problem and, respectively, to the H₂-optimal filtering problem. The proof of sufficiency, which is constructive, provides the computational tools for the synthesis of the feedback regulator. A numerical example is worked out in order to illustrate how to implement the devised procedure.

1. Introduction

The problem of H₂-optimal disturbance rejection by measurement feedback consists in finding a dynamic feedback regulator such that the closed-loop system is asymptotically stable and the H₂-norm of the transfer function matrix from the disturbance input to the controlled output is minimal. This problem is completely solved and well settled in the so-called regular case: i.e., when, in the plant equations, the linear map from the control input to the controlled output is injective, the linear map from the disturbance input to the measured output is surjective, and the subsystems involved have no invariant zeros on the imaginary axis. Some recent references are, e.g., [1,2,3], although this problem has been considered, as the linear quadratic Gaussian optimal control problem, since the early sixties in a huge classic literature. Indeed, the continuous interest that H₂-optimal control has attracted throughout the last sixty years is due not only to its intrinsic theoretic interest, but also to the number and variety of its practical applications [4,5,6,7,8,9,10,11,12,13,14,15,16], as well as to its flexibility in providing the tools to solve more complex control problems [17,18,19,20,21,22,23].

Nonetheless, the treatment of the problem of H₂-optimal rejection by measurement feedback is much more difficult in the so-called singular case: namely, when the assumptions of injectivity and surjectivity mentioned above are dropped. As was pointed out, e.g., in [24], the separation principle, which makes it possible to reduce the regular problem to an optimal control problem by state feedback and an optimal filtering problem, does not hold anymore, in general, and the infimum of the H₂-norm is not always attainable.
As to the solutions of the singular H₂-control problem available in the literature, those presented in [24,25,26] are based on an a-priori assumption on the structure of the dynamic feedback regulator: i.e., the regulator is assumed either to have a feedthrough term from the measurement to the control or not to have it. The methodologies developed therein to prove necessary and sufficient conditions for problem solvability exploit tools like linear matrix inequalities [27] and the special coordinate basis [28,29]. In particular, the solution of a pair of linear matrix inequalities leads to an auxiliary system for which the original H₂-optimal control problem by measurement feedback reduces to an exact decoupling problem by measurement feedback with stability. Then, if the latter problem is solvable, a structural decomposition of the system, known as the special coordinate basis, shows how to design the regulator, also pointing out possible degrees of freedom in the eigenvalue assignment.

Instead, the approach introduced in [30] and further developed in this work leads to a synthesis procedure that, first of all, does not need any a-priori assumption on the structure of the dynamic feedback regulator. Actually, the possible absence of the feedthrough term is an outcome of the synthesis procedure, not a postulate. Secondly, the reasoning is completely developed in the framework of the geometric approach [31,32]. Indeed, the geometric approach has recently been shown to provide powerful tools to handle a variety of up-to-date challenging problems [33,34,35,36,37,38,39,40].

Actually, the methodological approach developed in this work is inspired by those shared by the previous articles [41,42,43,44,45,46], where H₂-optimal control problems were solved by elaborating further on the properties of the associated Hamiltonian systems. In particular, the study of the geometric properties of the two Hamiltonian systems related to the H₂-optimal rejection problem by measurement feedback, one associated to the H₂-optimal control problem by state feedback and the other to the H₂-optimal filtering problem, leads to a pair of resolving subspaces for the original problem: a controlled invariant subspace and a conditioned invariant subspace. In fact, the original problem is shown to be solvable if and only if the latter of the subspaces mentioned above is contained in the former. Moreover, the synthesis of the feedback regulator, when the problem admits a solution, consists in the computation of linear maps which are friends of the resolving subspaces and, respectively, of projections connected to the latter of the two.

In comparison with the earlier [30], this work is characterized by the reformulation of both the subproblems of H₂-optimal control and H₂-optimal filtering in strict geometric terms, as the search for subspaces and related linear maps enjoying certain special properties. The treatment gains tidiness and compactness from this change of perspective. Moreover, this work investigates in detail the computational aspects involved in the synthesis procedure and illustrates the different stages of its implementation by means of a meaningful numerical example.
This work is organized as follows. Section 2 introduces the problem. Sections 3 and 4 deal with the H2-optimal control problem and the H2-optimal filtering problem, respectively, in the geometric approach framework. Section 5 provides the necessary and sufficient geometric condition for solvability of the H2-optimal rejection problem by measurement feedback. Section 6 illustrates a numerical example. Section 7 presents some concluding remarks. Notation: R, R+, C, and C− stand for the sets of real numbers, nonnegative real numbers, complex numbers, and complex numbers with negative real part, respectively. Matrices and linear maps are denoted by slanted capital letters, like A. The spectrum, the image, and the kernel of A are denoted by S(A), Im A, and Ker A, respectively. The trace, the transpose, the inverse, and the Moore-Penrose inverse of A are denoted by Tr(A), A′, A⁻¹, and A†, respectively. The restriction of a linear map A to an A-invariant subspace J is denoted by A|J. The quotient space of a vector space X over a subspace V ⊆ X is denoted by X/V. The orthogonal complement of V is denoted by V⊥. The notation V ⊕ W = X stands for V + W = X and V ∩ W = {0}. The symbol ⊎ is used to denote union with multiplicity count. The symbol I stands for an identity matrix of suitable dimension. The symbol ∥x∥ denotes the Euclidean norm of the vector x ∈ Rⁿ. The symbol G^H(s) denotes the complex conjugate transpose of the transfer function matrix G(s). The symbol ∥G(s)∥_H2 denotes the H2-norm of G(s). The symbol ∥v(t)∥_ℓ2 denotes the ℓ2-norm of the deterministic signal v(t). The symbol ∥w(t)∥_rms stands for the root mean square norm of the stochastic signal w(t).

Problem Statement The plant Σ is defined as the continuous-time linear time-invariant system

ẋ(t) = A x(t) + B u(t) + D d(t),
y(t) = C x(t),
e(t) = E x(t),

where x ∈ X = Rⁿ is the state, u ∈ Rᵖ is the control input, d ∈ Rᵐ is the to-be-rejected disturbance input, y ∈ R^q is the measured output, and e ∈ R^r is the to-be-regulated output, with p, m, q, r ≤ n. The sets of the admissible control inputs and of the admissible disturbance inputs are defined as the sets U_f and D_f of all piecewise-continuous functions with finite values in Rᵖ and Rᵐ, respectively. A, B, D, C, and E are assumed to be constant real matrices. Moreover, B, D, C, and E are assumed to be full-rank. The pair (A, B) is assumed to be stabilizable. The pair (A, C) is assumed to be detectable. The dynamic feedback regulator Σ_R is defined as a continuous-time linear time-invariant system whose state is x_R ∈ X = Rⁿ. N, M, L, and K are constant real matrices to be designed. The feedback regulator Σ_R has the dynamic structure of a state observer and provides a feedback control which is a linear combination of the state estimate and of the plant measured output. The system Σ_L is defined as the closed-loop interconnection of the plant Σ and the dynamic feedback regulator Σ_R; its dynamic matrix is denoted by A_L in the sequel. Hence, the problem of the H2-optimal disturbance rejection by measurement feedback can be stated as follows.
Problem 1 (H2-Optimal Disturbance Rejection by Measurement Feedback). Let the system Σ be given. Find a dynamic feedback regulator Σ_R such that the closed-loop system Σ_L is asymptotically stable and ∥G(s)∥_H2 is minimal, where G(s) denotes the transfer function matrix of Σ_L from the disturbance input d to the to-be-regulated output e.

Geometric Approach to H2-Optimal Control The purpose of this section is to review the basics of the geometric approach to H2-optimal control, so as to derive a subspace and a linear map that will play a key role in the solution of the H2-optimal rejection problem by measurement feedback - as will be shown in Section 5. The results concerning the geometric solution to the H2-optimal control problem in the singular case were first derived for discrete-time systems in [41]. Later, they were extended to the continuous-time case - see, e.g., the more recent [47] and the references therein. First, the H2-optimal control problem is stated, in the time domain, in strict geometric terms. To this aim, it is worth recalling the notion of controlled invariant subspace and the related notions of friend and inner asymptotic stabilizability of a controlled invariant subspace - the reader is referred to [31,32] for more details. A subsystem - henceforth denoted by Σ_C - of the plant Σ, where only the control input and the to-be-regulated output matter, is considered: i.e.,

ẋ(t) = A x(t) + B u(t),
e(t) = E x(t).

The short notation B is used in place of Im B. A subspace V ⊆ X is said to be an (A, B)-controlled invariant subspace if A V ⊆ V + B. A subspace V ⊆ X is an (A, B)-controlled invariant subspace if and only if there exists a linear map K such that (A + B K) V ⊆ V. Any such K is called a friend of V. An (A, B)-controlled invariant subspace V is said to be inner stabilizable if there exists a friend K of V such that the restricted linear map (A + B K)|V is asymptotically stable. Problem 2 (H2-Optimal Control). Let the system Σ_C be given. Find the maximal inner stabilizable (A, B)-controlled invariant subspace - henceforth, V*_H2 - and an inner stabilizing friend K_H2 such that the output of the compensated system ẋ(t) = (A + B K_H2) x(t), e(t) = E x(t) satisfies the condition that ∥e(t)∥_ℓ2 is minimal for all x₀ ∈ V*_H2. The statement above is equivalent to the more usual one, referred to Σ and expressed in the frequency domain (see, e.g., [3, Section 11.2] for the statement in the nonsingular case), by virtue of Parseval's theorem, on the assumptions stated above. The problem can be approached through the Lagrangian function

H(x, u, λ) = x′E′E x + λ′(A x + B u),

where λ(t) is an undetermined multiplier (called the costate). The state and costate equations and the stationarity condition are

ẋ(t) = A x(t) + B u(t),   (1a)
λ̇(t) = −A′λ(t) − 2 E′E x(t),   (1b)
B′λ(t) = 0.   (1c)

According to [41], the geometric approach to the solution of the H2-optimal control problem reduces the latter to a problem of output zeroing for the associated Hamiltonian system - i.e., the dynamical system Σ̃, whose equations (2) are derived from (1a)-(1c) by setting p(t) = 2 λ(t) and by taking as the output η(t) the left-hand side of the stationarity condition (1c). Therefore, Problem 2 is equivalent to the following, which refers to system Σ̃. The symbols B̃ and Ẽ respectively stand for Im B̃ and Ker Ẽ. Problem 3 (Perfect Decoupling). Let the system Σ̃ be given. Find the maximal inner stabilizable (Ã, B̃)-controlled invariant subspace contained in Ẽ - henceforth, Ṽ*_g - and an inner stabilizing friend K̃_g. Problem 3 is a slight variant of the classic disturbance decoupling problem with stability [31, Section 5.6], [32, Section 4.2], in the sense that, for any initial state x̃₀ ∈ Ṽ*_g, the state trajectory, which evolves according to the compensated dynamics, belongs to the null space of the output and converges to the origin as the time approaches ∞. The subspace Ṽ*_g and a corresponding inner stabilizing friend K̃_g - note that it may also happen that Ṽ*_g = {0} - can be determined, e.g., by means of the computational algorithms available from [32].
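The computation of such subspaces reduces to a short fixed-point iteration. What follows is a minimal numerical sketch, in Python with NumPy, of the classic invariant subspace algorithm computing the maximal (A, B)-controlled invariant subspace contained in Ker E through the recursion V_0 = Ker E, V_{k+1} = Ker E ∩ A⁻¹(V_k + Im B); it is not the implementation of [32], all function names are hypothetical, and the tolerance-based rank decisions are a purely numerical convenience, not part of the theory.

```python
import numpy as np

def image(M, tol=1e-10):
    """Orthonormal basis of Im M (as columns)."""
    U, s, _ = np.linalg.svd(M)
    return U[:, :int(np.sum(s > tol))]

def kernel(M, tol=1e-10):
    """Orthonormal basis of Ker M (as columns)."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[int(np.sum(s > tol)):, :].T

def intersect(V, W, tol=1e-10):
    """Basis of Im V ∩ Im W: solve [V  -W][a; b] = 0 and map a back through V."""
    K = kernel(np.hstack([V, -W]), tol)
    return image(V @ K[:V.shape[1], :], tol)

def preimage(A, V, tol=1e-10):
    """Basis of A^{-1}(Im V) = {x : A x ∈ Im V}."""
    Vb = image(V, tol)
    P = np.eye(A.shape[0]) - Vb @ Vb.T      # projector onto (Im V)^⊥
    return kernel(P @ A, tol)

def max_controlled_invariant(A, B, E, tol=1e-10):
    """V* = maximal (A, B)-controlled invariant subspace contained in Ker E,
    via V_0 = Ker E, V_{k+1} = Ker E ∩ A^{-1}(V_k + Im B)."""
    KerE = kernel(E, tol)
    V = KerE
    while True:
        S = image(np.hstack([V, B]), tol)               # V_k + Im B
        V_next = intersect(KerE, preimage(A, S, tol), tol)
        if V_next.shape[1] == V.shape[1]:               # dimension stabilized
            return V_next
        V = V_next
```

Bases are kept orthonormal throughout, so the dimension comparison at each step is a reliable rank test; since the sequence of subspaces is non-increasing, the loop terminates in at most n steps.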
Hence, the last step of this reasoning consists in showing how a pair (V*_H2, K_H2) that solves Problem 2 can be derived from a pair (Ṽ*_g, K̃_g) that solves Problem 3. To this end, let a basis matrix of Ṽ*_g and the friend K̃_g be partitioned as

Ṽ*_g = Im [V_X; V_P],   (3a)
K̃_g = [K_X  K_P],   (3b)

where the partition considered in (3) is consistent with that in (2) (the two blocks corresponding to the state and costate components of Σ̃). Then, a pair (V*_H2, K_H2) that solves Problem 2 is given by

V*_H2 = Im V_X,   (4a)
K_H2 = (K_X V_X + K_P V_P) V_X†.   (4b)

Proof. Let V*_H2 and K_H2 be defined as in (4). First, it will be shown that V*_H2 is an inner stable (A + B K_H2)-invariant subspace. Let Ṽ*_g be the compact notation for the basis matrix of Ṽ*_g shown in (3a). Since Ṽ*_g is an inner stable (Ã + B̃ K̃_g)-invariant subspace, there exists a matrix X such that

(Ã + B̃ K̃_g) Ṽ*_g = Ṽ*_g X,   (5a)
S(X) ⊂ C−.   (5b)

Equation (5a) can also be written in block-partitioned form, where (2) and (3) have been taken into account, as (6). From the first block of rows in (6), one gets A V_X + B (K_X V_X + K_P V_P) = V_X X, which, in light of (4b), can also be written as

(A + B K_H2) V_X = V_X X,   (7)

since K_H2 V_X = K_X V_X + K_P V_P. Equation (7), in light of (4a) and (5b), proves the (A + B K_H2)-invariance and the inner stability of V*_H2. As to the maximality of V*_H2 and the minimality of ∥e(t)∥_ℓ2, these facts follow from the maximality of Ṽ*_g. As was shown in [41, Section IV], in the discrete-time case, the matrix V_X is an invertible matrix of dimension n, since the subspace of the admissible initial states is the state space of the original system. Instead, in the continuous-time case, the subspace of the initial states that can be driven asymptotically to the origin along trajectories corresponding to the minimal ℓ2-norm of the output, by means of a state feedback, does not match the whole state space, in general (see, e.g., [47] and the references therein, but also [2, Chapter 6]). For this reason, in (4b) - which is the continuous-time counterpart of (18) in [41, Section IV] - the Moore-Penrose inverse of V_X replaces the inverse.

Geometric Approach to H2-Optimal Filtering The aim of this section is to state and solve the H2-optimal filtering problem in the geometric framework, in order to derive the second pair, made up of a subspace and a linear map, needed to solve the H2-optimal rejection problem by measurement feedback - refer to Section 5. The lines followed by the reasoning developed herein are similar to those drawn in Section 3. Indeed, duality arguments are extensively used. First, the H2-optimal filtering problem is given a time-domain formulation in pure geometric terms. To this aim, the notion of conditioned invariant subspace and the joint notions of friend and outer stabilizability of a conditioned invariant subspace are to be retrieved [32, Section 4.1]. The system Σ_E is derived from the plant Σ by only considering the disturbance input and the measured output, so that

ẋ(t) = A x(t) + D d(t),
y(t) = C x(t).

The input d(·) is assumed to be a zero-mean white Gaussian noise. The symbol C stands for Ker C. A subspace W ⊆ X is said to be an (A, C)-conditioned invariant subspace if A (W ∩ C) ⊆ W. A subspace W ⊆ X is an (A, C)-conditioned invariant subspace if and only if there exists a linear map L such that (A + L C) W ⊆ W. Any such L is called a friend of W. An (A, C)-conditioned invariant subspace W is said to be outer stabilizable if there exists a friend L such that the induced linear map (A + L C)|X/W is asymptotically stable.
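On the computational side, the conditioned-invariant objects need no new machinery: by the standard duality between controlled and conditioned invariance, the minimal (A, C)-conditioned invariant subspace containing Im D is the orthogonal complement of the maximal (A′, C′)-controlled invariant subspace contained in Ker D′. A possible continuation of the earlier Python sketch (hypothetical names, reusing the helpers defined there):

```python
def min_conditioned_invariant(A, C, D, tol=1e-10):
    """W = minimal (A, C)-conditioned invariant subspace containing Im D,
    via duality: W = (V_dual)^⊥, with V_dual the maximal (A', C')-controlled
    invariant subspace contained in Ker D'."""
    V_dual = max_controlled_invariant(A.T, C.T, D.T, tol)
    return kernel(V_dual.T, tol)        # orthogonal complement of Im V_dual

def observer_gain_from_dual(K_dual):
    """A friend L of W is recovered from a friend K of the dual
    controlled invariant subspace by transposition: L = K'."""
    return K_dual.T
```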
Problem 4 (H2-Optimal Filtering). Let the system Σ_E be given. Find the minimal outer stabilizable (A, C)-conditioned invariant subspace - henceforth, W*_H2 - and an outer stabilizing friend L_H2 such that the state of the compensated system ẋ(t) = (A + L_H2 C) x(t) + D d(t) satisfies the condition that ∥x(t)∥_rms is minimal for all x₀ ∈ X/W*_H2. The solution of Problem 4 can be derived from that of Problem 2 by duality arguments. Namely, let (V̄*_H2, K̄_H2) be a solution of Problem 2 where the triple (A, B, E) has been replaced by the triple (A′, C′, D′); then W*_H2 = (V̄*_H2)⊥ and L_H2 = K̄′_H2 solve Problem 4.

Problem Solution In this section, a necessary and sufficient condition for the solution of Problem 1 is derived by exploiting the properties of the subspaces V*_H2 and W*_H2 and of their corresponding friends K_H2 and L_H2, respectively introduced in Sections 3 and 4. The proof of sufficiency is constructive, so that it outlines the procedure for the synthesis of the dynamic feedback regulator introduced in Section 2. Beforehand, it is worth reviewing the notions of outer stabilizability of an (A, B)-controlled invariant subspace and inner stabilizability of an (A, C)-conditioned invariant subspace, along with their respective relations with the stabilizability of the pair (A, B) and the detectability of (A, C). Indeed, these definitions are the obvious complement of those respectively given in Sections 3 and 4. An (A, B)-controlled invariant subspace V is said to be outer stabilizable if there exists a friend K of V such that the induced linear map (A + B K)|X/V is asymptotically stable. Similarly, an (A, C)-conditioned invariant subspace W is said to be inner stabilizable if there exists a friend L of W such that the restricted linear map (A + L C)|W is asymptotically stable. Moreover, as can easily be shown, if (A, B) is stabilizable, any (A, B)-controlled invariant subspace is outer stabilizable. Likewise, if (A, C) is detectable, any (A, C)-conditioned invariant subspace is inner stabilizable. It is also worth mentioning that inner and outer stabilizability of an (A, B)-controlled invariant subspace are independent of each other - the same is true for an (A, C)-conditioned invariant subspace. This fact can be shown by considering a friend K of an (A, B)-controlled invariant subspace V and performing a state space basis transformation T = [T₁ T₂], where Im T₁ = V. In fact, with respect to the new coordinates,

Â + B̂K̂ = T⁻¹ (A + B K) T = [A₁₁ A₁₂; 0 A₂₂].

The upper block-triangular structure of Â + B̂K̂ shows the separation between the inner and outer dynamics of V.
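The independence of inner and outer stabilizability can be inspected numerically with the very change of basis just described. The sketch below (hypothetical names, reusing the helpers of the earlier sketches) builds T = [T₁ T₂] with Im T₁ = V and returns the two diagonal blocks, which rule the inner dynamics of V and the dynamics induced on X/V, respectively:

```python
def inner_outer_dynamics(A, B, K, V, tol=1e-10):
    """Diagonal blocks of T^{-1} (A + B K) T for a friend K of Im V."""
    T1 = image(V, tol)
    T2 = kernel(T1.T, tol)                      # a complement of V (here: orthogonal)
    T = np.hstack([T1, T2])
    Ahat = np.linalg.solve(T, (A + B @ K) @ T)  # T^{-1} (A + B K) T
    p = T1.shape[1]
    return Ahat[:p, :p], Ahat[p:, p:]           # inner block A11, outer block A22
```

A quick sanity check is that the lower-left block Ahat[p:, :p] vanishes up to the tolerance precisely because K is a friend of V.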
In particular, from the previous considerations, it follows that the subspace V*_H2 can be rendered outer stable without affecting the inner dynamics, assigned through K_H2. More formally, there exists a friend K of V*_H2 such that

(A + B K)|V*_H2 = (A + B K_H2)|V*_H2 and S((A + B K)|X/V*_H2) ⊂ C−.   (8)

Similarly, there exists a friend L of W*_H2 such that

(A + L C)|X/W*_H2 = (A + L_H2 C)|X/W*_H2 and S((A + L C)|W*_H2) ⊂ C−.   (9)

The necessary and sufficient condition for the solution of Problem 1 - Theorem 1 below - is preceded by a lemma concerning the matrices M and N of the to-be-designed feedback regulator. In particular, Lemma 1 establishes how to derive the matrices M and N, which define the linear combination between the state estimate and the measured output in the feedback control law, by exploiting the features of the subspace W*_H2. Lemma 1. Let the system Σ be given. Let the subspace Q ⊆ X be such that

Q ⊕ (W*_H2 ∩ C) = W*_H2.   (10)

Then, there exist linear maps M and N such that

M C = I − N,   (11a)
Ker N = Q.   (11b)

Proof. The proof is constructive. First, note that Q ∩ C = {0} owing to (10). Let the subspace P ⊆ X be such that

P ⊇ C and P ⊕ Q = X.   (12)

Then, let N be the projection on P along Q. Consequently, (11b) holds. Furthermore, the linear map I − N is the complementary projection - i.e., the projection on Q along P - which implies that Ker (I − N) = P ⊇ C. Then, the matrix equation (11a) in the unknown M is solved by M = (I − N) C†. Theorem 1. Let the system Σ be given. Problem 1 admits a solution if and only if W*_H2 ⊆ V*_H2. Proof. If. Let the to-be-designed matrices K, L, M, and N of Σ_R be picked so as to satisfy (8), (9), and (11), respectively. Then, in order to show that the regulator Σ_R thus determined solves Problem 1, consider the closed-loop system Σ_L and apply the state space basis transformation T_L = [I 0; I −I]. Hence, the closed-loop dynamic matrix A_L is obtained in the new coordinates, where (11a) has been taken into account. Then, it is sufficient to show that a suitable subspace R, built from V*_H2 and W*_H2 in the new coordinates, is an inner and outer asymptotically stable A_L-invariant subspace. Since V*_H2 is an inner and outer asymptotically stable (A + B K)-invariant subspace and, likewise, W*_H2 is an inner and outer asymptotically stable (A + L C)-invariant subspace, showing that R is an inner and outer asymptotically stable A_L-invariant subspace reduces to showing that conditions (13) and (14) hold. Equation (13) follows together with (15), and the latter can be written as (16) in light of (11a). Finally, (16), (10), and (11b) imply (14). Only if. It is a direct consequence of the minimality of W*_H2 and of the maximality of V*_H2 as resolving subspaces of the associated H2-optimal filtering problem and H2-optimal control problem, respectively. In light of the constructive proof of the if-part of Theorem 1, it is worth noting that, if W*_H2 ⊆ C, the feedthrough term from the measured output to the control input in the feedback regulator Σ_R is zero, while the feedback involves the whole state estimate. In fact, in this case, W*_H2 ∩ C = W*_H2. Consequently, according to (10), Q = {0} and, according to (12), P = X. Finally, according to (11), M = 0 and N = I. In a few words, one can say that, in this case, the separation principle holds.

An Illustrative Example In this section, a numerical example is worked out to illustrate the synthesis procedure discussed so far. The computational support consists of the Matlab files implementing the geometric approach algorithms [32]. The variables will be displayed in scaled fixed-point format with five digits, although computations are made in floating-point precision.
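Two purely linear-algebraic steps of the synthesis lend themselves to the same kind of sketch as before: the solvability test W*_H2 ⊆ V*_H2 of Theorem 1 and the construction of M and N in Lemma 1. The snippet below leans on the reconstruction of Lemma 1 given above (in particular on M = (I − N) C†); the names are hypothetical and the helpers come from the earlier sketches.

```python
def is_contained(W, V, tol=1e-8):
    """Solvability test of Theorem 1: Im W ⊆ Im V."""
    Vb = image(V, tol)
    P = np.eye(W.shape[0]) - Vb @ Vb.T          # projector onto (Im V)^⊥
    return np.linalg.norm(P @ W) < tol

def projection_on_along(Pb, Qb):
    """Matrix of the projection on Im Pb along Im Qb; assumes
    Im Pb ⊕ Im Qb = R^n, i.e., [Pb Qb] square and invertible."""
    T = np.hstack([Pb, Qb])
    p = Pb.shape[1]
    return Pb @ np.linalg.inv(T)[:p, :]

def lemma1_MN(C, Pb, Qb):
    """N = projection on P along Q, and M solving M C = I - N;
    requires Ker C ⊆ Im Pb, as stipulated by (12)."""
    N = projection_on_along(Pb, Qb)
    M = (np.eye(N.shape[0]) - N) @ np.linalg.pinv(C)
    return M, N
```

Note that when Q = {0} and P = X the projection degenerates to N = I and M = 0, reproducing the separation-principle case discussed above.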
Let the system Σ be defined by given matrices A, B, C, D, and E. The pair (A, B) is controllable, while (A, C) is observable. The Hamiltonian system Σ̃ associated to Problem 2 is determined according to (2). The subspace Ṽ*_g and a linear map K̃_g solving Problem 3 are computed by means of the algorithms mentioned above, and the remaining steps of the synthesis procedure yield the regulator matrices. Thus, the design of Σ_R is complete. Correspondingly, the value of the H2-norm of the transfer function matrix of the closed-loop system is ∥G(s)∥_H2 = 0.3263.

Conclusions A methodology to solve the singular H2-optimal rejection problem by measurement feedback has been completely developed in the framework of the geometric approach. A necessary and sufficient condition for the existence of a solution to the stated problem has been expressed in terms of the inclusion between two subspaces - a controlled-invariant subspace and a conditioned-invariant subspace - respectively derived from the Hamiltonian systems associated to the H2-optimal control problem and to the H2-optimal filtering problem. The if-part of the proof, which is constructive, shows the procedure for synthesizing the dynamic feedback regulator. No a-priori choices on the structure of the regulator (either with or without the feedthrough term) are required. Nonetheless, the method at issue retrieves the regulator without the feedthrough term whenever such a regulator is able to attain the minimum of the H2-norm or, equivalently, whenever the separation principle holds. A worked-out example illustrates how to apply the discussed techniques.
Simulation of tectonic stress field and prediction of fracture distribution in shale reservoir* In this paper, a finite element-based fracture prediction method for shale reservoirs was proposed using geostress field simulations, uniaxial and triaxial compression deformation tests, and acoustic emission geostress tests. Given that tensile and shear fractures are the main fracture types developed in organic-rich shales, the Griffith and Coulomb-Mohr criteria were used to calculate the tensile and shear fracture rates of shale reservoirs. Furthermore, the total fracture rate of shale reservoirs was calculated based on the ratio of tensile and shear fractures to the total number of fractures. This method has been effectively applied in predicting fracture distribution in the Lower Silurian Longmaxi Formation shale reservoir in southeastern Chongqing, China, and provides a new way for shale gas sweet spot optimization. The simulation results have significant reference value for the design of shale gas horizontal wells and fracturing reconstruction programs. Introduction For low-porosity and low-permeability shale reservoirs, the nano-scale pores in the matrix have basically no seepage capability. Therefore, fractures not only provide important space for hydrocarbon storage, but also provide efficient channels for hydrocarbon migration [1][2][3]. The great success of the marine organic-rich shale gas industry in North America shows that natural fractures can promote the large-scale accumulation of hydrocarbons in shale reservoirs [4][5][6]. Fractures are a key factor in obtaining high yields in shale reservoirs [7][8][9][10][11][12]. A large number of oilfield data around the world show that the degree of fracture development in tight reservoirs is closely related to productivity [13][14][15][16]. For example, the degree of fracture development of Paleozoic marine shale in North America is positively correlated with total gas content and free gas content, and the success rate of shale gas exploration in fractured zones is high. In addition, the natural gas productivity of the organic-rich marine shale reservoirs of the Lower Paleozoic in the Sichuan Basin of China is also positively correlated with the degree of fracture development [17][18][19]. In this paper, a finite element-based fracture prediction method for shale reservoirs was proposed using geostress field simulations, uniaxial and triaxial compression deformation tests, and acoustic emission geostress tests. This technology has achieved good application effects in the prediction of fracture distribution in the Lower Silurian Longmaxi Formation shale reservoir in southeastern Chongqing, China. Moreover, it provides a new way for shale gas sweet spot optimization, and the simulation results have important reference value for the design of shale gas horizontal wells and fracturing reconstruction programs. Materials and methods Experiments. In this paper, acoustic emission and rock mechanics tests were used to obtain the rock mechanical properties and paleostress of the target shale. The tests were completed in the Beijing SGS Rock Physics Laboratory. A GCTS petrophysical testing system was used for rock mechanics testing. The pressure sensor error of this instrument was less than 1%, the displacement sensor range was ±50 mm, and the strain accuracy was 0.0001 mm. In addition, the acoustic emission instrument was the SAMOS™ acoustic emission detection system.
Its core component is the PCI-8 acoustic emission function card, which processes signals in parallel on the PCI bus. It provides 8 channels of real-time acoustic emission feature extraction, waveform acquisition, and processing capabilities on one board. It adopts modern digital signal processing (DSP) technology and is currently among the most advanced acoustic emission processing systems in the world. Finite element model. This paper used the finite element method to simulate the tectonic stress field and then predicted the plane distribution of tectonic fractures based on rupture criteria. The core of this method is to establish an accurate geological model, mechanical model, and calculation model of the simulated area. The measured rock mechanical property parameters and paleostress values were used to calibrate the simulated stress field (Fig. 1). Organic-rich shale mainly develops tensile and shear fractures. Therefore, the Griffith and Coulomb-Mohr failure criteria were used to calculate the tensile and shear failure rates, respectively. Finally, a comprehensive rupture rate was proposed based on the coupling of the tensile and shear rupture results (Fig. 1). Within each element, the displacement field is interpolated from the nodal displacements; for a triangular element with nodes i, j, and m:

{u} = Ni {δi} + Nj {δj} + Nm {δm}. (1)

Fig. 2. Grid model of Longmaxi Formation shale in southeastern Chongqing area

Equation (1) can be simplified as:

{u} = [N] {δ}e. (2)

In the formula, Ni, Nj, and Nm are the morphological functions (shape functions) of the element displacement, [N] is the shape function matrix, and {δ}e is the nodal displacement component matrix. The strain of the element is given by the geometric equation:

{ε} = [∂] {u}. (3)

When equation (1) is substituted into equation (3), the strain matrix of the element can be obtained:

{ε} = [B] {δ}e. (4)

In the formula, the conversion matrix [B] is the geometric matrix. For each element, the maximum principal stress is obtained through coordinate transformation:

σ1,3 = (σx + σy)/2 ± √( ((σx − σy)/2)² + τxy² ). (5)

The maximum principal stress σ1 and the minimum principal stress σ3 can be obtained by solving the above formula. Griffith and Coulomb-Mohr criteria. Under the action of regional tectonic stress, there are two main types of ruptures inside the rock: tensile and shear fractures [8][9]. The plane rupture criterion of Griffith theory is expressed as follows. When σ1 + 3σ3 ≥ 0, the rupture criterion is:

(σ1 − σ3)² = 8 σT (σ1 + σ3). (6)

When σ1 + 3σ3 < 0, the rupture criterion is:

σ3 = −σT. (7)

In the formula, σ1 is the maximum principal stress, MPa; σ3 is the minimum principal stress, MPa; and σT is the tensile stress of the rock, MPa. The Coulomb-Mohr criterion holds that shear failure on a plane is related to the combination of the normal stress σ and the shear stress τ. The Coulomb-Mohr shear rupture criterion can be expressed as:

|τ| = C + σ tanφ. (8)

In the formula, |τ| is the shear strength of the rock, MPa; σ is the normal stress, MPa; C is the cohesive force, MPa; φ is the internal friction angle, °; and tanφ is the internal friction coefficient. Comprehensive rupture rate. In this paper, the tensile rupture rate It and the shear rupture rate In were introduced to characterize the different types of fractures:

It = σT / σt. (9)

In the formula, σT is the effective tensile stress, MPa, and σt is the tensile strength of the rock, MPa. When It ≥ 1, tensile ruptures will occur.

In = τn / |τ|. (10)

In the formula, τn is the effective shear stress, MPa, and |τ| is the shear strength of the rock, MPa. When In ≥ 1, shear ruptures will occur. The rupture mode of shale is a comprehensive reflection of tensile and shear stresses [9][10].
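To make the element-wise evaluation concrete, the following Python sketch chains the principal-stress transformation (5) with the two rupture rates (9) and (10). The Griffith branch expressions and the compression-positive sign convention are reconstructed assumptions consistent with (6)-(7), and all names and inputs are hypothetical rather than part of the workflow of this study.

```python
import numpy as np

def principal_stresses(sx, sy, txy):
    """2-D principal stresses via the coordinate transformation (5)."""
    c = 0.5 * (sx + sy)
    r = np.hypot(0.5 * (sx - sy), txy)
    return c + r, c - r                       # sigma1 >= sigma3

def tensile_rate(s1, s3, sigma_t):
    """It = effective tensile stress / tensile strength, per (9); the
    effective stress follows the Griffith branches (6)-(7), with
    compression taken positive (assumed sign convention)."""
    if s1 + 3.0 * s3 >= 0.0:
        sigma_eff = (s1 - s3) ** 2 / (8.0 * (s1 + s3))   # branch (6)
    else:
        sigma_eff = -s3                                   # branch (7)
    return sigma_eff / sigma_t                # It >= 1 -> tensile rupture

def shear_rate(tau_n, sigma_n, cohesion, phi_deg):
    """In = effective shear stress / Coulomb-Mohr strength (8),
    with |tau| = C + sigma * tan(phi)."""
    strength = cohesion + sigma_n * np.tan(np.radians(phi_deg))
    return abs(tau_n) / strength              # In >= 1 -> shear rupture

# Example on a single element (all inputs in MPa or degrees, illustrative only):
s1, s3 = principal_stresses(sx=60.0, sy=25.0, txy=12.0)
print(tensile_rate(s1, s3, sigma_t=8.0), shear_rate(15.0, 40.0, 20.0, 35.0))
```

The comprehensive coefficient introduced next then simply takes a weighted average of these two rates.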
Therefore, in order to better quantitatively characterize the development degree of structural fractures in shale reservoirs, a comprehensive fracture coefficient was proposed:

Iz = (a It + b In) / 2. (11)

In the formula, a and b are the ratios of tensile and shear fractures to the total number of fractures, respectively; in this paper, a : b = 3 : 2. Similarly, when Iz ≥ 1, the rock reaches a fractured state, and the higher the comprehensive fracture rate value of the shale, the greater the degree of fracturing. Results Paleostress based on acoustic emission. In the simulation of in-situ stress, the assignment of reasonable rock mechanics parameters to the geological model of the target layer is essential. The assigned geological model is then converted into a mechanical model. According to regional tectonic movement and acoustic emission tests, it is believed that during the Yanshan period, the tectonic activity in southeastern Chongqing was the strongest (148.8 MPa maximum tectonic stress), followed by the Himalayan movement (122.5 MPa maximum tectonic stress) (Table 1). Rock mechanics parameters. The faults in the study area were divided into first-order, second-order, and third-order faults. At the same time, the fold areas were divided into slot folds, battlement folds, and barrier folds. The rock mechanics test results of the different types of shales in the Longmaxi Formation in the study area are shown in Table 2. Mechanical properties of fault and fold zones. The fault zone was defined as a "weak zone" whose elastic modulus was 50-70% of that of the ordinary sedimentary strata; at the same time, its Poisson's ratio was larger than that of the ordinary sedimentary strata, with differences between 0.02 and 0.1. The folding zone was identified as a "tough zone" whose elastic modulus was 1.5 to 3 times that of the normal sedimentary formation; at the same time, its Poisson's ratio was smaller than that of the normal sedimentary formation, with differences between 0.01 and 0.15 (Table 3). Discussion Tectonic stress field distribution. It can be seen from the simulation results of the tectonic stress field (Fig. 3) that the maximum principal stresses of the Longmaxi Formation shale reservoir in southeastern Chongqing were concentrated between −217.404 and −4.109 MPa. Positive values were defined as tensile stress, and negative values were defined as compressive stress. The maximum principal stress inside the fault zones was lower than that of the ordinary sedimentary strata, with stress intensity values mainly distributed between −46.768 and −4.19 MPa. For areas with underdeveloped faults, the maximum principal stress values ranged from −103.647 to −46.768 MPa. The rocks inside the fold zones are severely deformed; in particular, the rocks at the axes and turning ends of the folds are deformed most severely. The stress in these structural parts will be highly concentrated provided that there is no fault damage to release the stress.

Fig. 3. Distribution of maximum principal stress of Longmaxi Formation shale in southeastern Chongqing area

Where a fold is clamped by reverse faults, as in the southwestern Huayuan and the west-trending Pengshui fault-fold belt, or where fault-related folds lie adjacent to a fault, as in the Longshan and Xiushan areas, the maximum principal stress value will be higher.
In addition, the closer the fold is to the fault, the more pronounced the changes in the stress gradient. Besides the fold belts mentioned above, the ends of the fault belts and the turning ends of the faults are also transition areas from the broken rocks inside the fault belt to continuous strata. The rocks in these areas are at the edge of rupture; therefore, the stress values are higher. The maximum principal stress distribution in these areas ranges from −217.404 to −103.647 MPa. The shear stresses of the target shale reservoir in the study area ranged from −4.707 to 49.222 MPa (Fig. 4). Positive values were defined as left-handed and negative values as right-handed. The structures of the study area show obvious strike-slip characteristics, and the NNE-trending "S"-shaped faults and folds have the attributes of counterclockwise rotation and twisting. It can be seen from Figure 4 that the shear stress values in the study area are mainly positive, reflecting the counterclockwise left-handed shear stress field of the Himalayan period in southeastern Chongqing. The simulation results are consistent with the compression-torsional strike-slip structural deformation characteristics exhibited during the Himalayan period in the study area. Prediction of fracture distribution. In this paper, the degree of fracture development in shale reservoirs was divided into five levels (Table 4). According to Figure 5, fractures of grades I-IV are widely distributed in the eastern part of the study area, with fracture development coefficients of mainly 1.4-4. Among them, the sizeable trough-shaped fold axis in the northern part of Huayuan is an area with highly developed fractures (level IV), where the fracture development coefficient even reaches above 4 (the formation was severely broken). The southern (Xiushan, south of Huayuan) and northwestern (Lianhu area) areas are favorable areas for level II and level I fractures, respectively. The western part (Wulong, Pengshui) mainly develops first-level fractures, with fracture development coefficients between 1 and 1.4.

Fig. 5. Distribution of fracture development coefficient in Longmaxi Formation shale in southeastern Chongqing area

The areas with high TOC content and high brittle mineral content in the Longmaxi Formation shale reservoirs are mainly located in the deposition center of the black shale, namely the Lianhu-Qianjiang and South Longshan areas. At the same comprehensive fracture coefficient, shales with high TOC content and high brittle mineral content develop more fractures. The eastern and southern parts of the study area have the most developed fractures, especially in areas adjacent to the faults and at the strongly deformed trough-shaped fold axes, where some normal tensile faults have appeared. The southern part of the study area is the development zone of sandy shelf facies shale. Its rock elastic modulus is high and its Poisson's ratio is low, so it is prone to develop fractures under the action of external forces.
(2) The interior of the fold belt in the study area, especially the shale reservoir near the fold axes and turning ends, has suffered severe structural deformation and is a highly concentrated area of stress. The ends of the fault zones and their turning ends are the transition areas from the broken shale inside the fault zone to the ordinary sedimentary strata; they are at the edge of rupture and have high stress values. The black carbonaceous, siliceous, and calcareous shales of the shallow-sea shelf facies, with stable distribution and weak structural deformation in the deposition center, have high elastic modulus and low Poisson's ratio. These brittle shales are prone to develop structural fractures. (3) The quantitative prediction of shale fracture distribution cannot be based on a single factor as the criterion; otherwise, the results will be one-sided and limited. The total fracture rate, which integrates the factors affecting the development of fractures in shale reservoirs, should be considered as much as possible. This study found that fracture development areas are mostly concentrated in high-stress areas with severe structural deformation.
Territory production potential. Based on the study of theoretical and methodological provisions on the renewal, quality development, and use of the resources of a region, as well as the peculiarities of the regional production process, a methodological approach to the analysis of the potential of fixed assets of municipalities is proposed. The author's approach is based on the analysis of the spatial unevenness of distribution, inter-territorial comparison, and the identification and analysis of the causes of changes produced by natural-geographical, socio-economic, administrative-territorial, and other prevailing conditions. The proposed assessment technique has been tested on the example of the municipalities of the Republic of Bashkortostan. Introduction The current state of the Russian economy, shaped by the impact of sanctions, quarantine measures, and the historical features of the development of territories, is characterized by development rates that are insufficient for its potential. One of the key reasons for the slow growth at the national level is the extremely low output of production activities that has developed over decades in the constituent regions in comparison with the volume of resources spent on their maintenance. All this leads to the fact that the indicators of the efficiency of resource use in the economies of Russian regions lag significantly behind the level of developed countries [1]. In addition to the absence of upward trends in resource productivity, important problems are the disproportions in providing the production process with resources of the required quantitative and qualitative composition, the unsystematic nature of the processes of renewal, accumulation, and consumption of resources within the region, and many other theoretical and practical issues requiring deep scientific research. The above-mentioned issues substantiate the relevance of the study of the potential of territories' fixed assets. The potential of fixed capital assets (FCA) implies the possibility of using existing equipment and machinery, buildings and structures, transmission and transport vehicles, and computers, both involved in the production of goods and services or the performance of work and existing on the balance sheet of enterprises with the assumption of further use in production. The analysis of their availability, quality, and efficiency of use closely correlates with the assessment of production potential [5]. At the same time, in the general structure of the reproductive potential, the potential of fixed assets is included in the element "Accumulated wealth"; however, according to the principles of the System of National Accounts, the balance of assets and liabilities is constructed under the element "Fixed assets" separately in the structure of national wealth [1, p. 104]. The structure of fixed capital assets depends on the specialization and cooperation of commodity producers, their remoteness from the places of product sales, natural and climatic conditions, the nature and volume of products, and the level of mechanization of production processes. Thus, the purpose of the study is to analyze the renewal, qualitative development, and use of the region's resources using the proposed author's approach. Materials and Methods The comparison of FCA potentials between municipalities is carried out using the proposed author's approach of analyzing quantity, quality, efficiency of use, and investment in renewal.
In order to determine the levels of renewal, quality, and efficiency of resource consumption in a particular municipality of the Republic of Bashkortostan, the values of each indicator are compared with the average value for the republic, and the methods of correlation-regression and spatial analysis are used. At the next stage, the municipalities are ranked by indicator values, and a grouping and typology of the municipalities of the Republic of Bashkortostan is constructed depending on the level of use of basic resources. The developed author's methodology for assessing the characteristics of resource use is based on the definition, grouping, and statistical processing of a number of indicators of regional social and economic development. The analysis of theoretical developments allows us to assert that, in domestic and foreign economic science [2,3,4,7,10,11], the level of renewal of resources is assessed with indicators that make it possible to judge the volume of investments in a particular resource in order to restore its consumer properties. Thus, in order to characterize the renewal of fixed assets, the indicator "Investments in fixed assets per 1 ruble of fixed assets, RUB" is used. In turn, the assessment of resource quality by most of the examined researchers is based on measuring that part of the total volume of available resources which is characterized by the greatest return in the economic sense. To determine the quality of fixed assets, the most informative and adequate indicator is the "Degree of fixed assets depreciation". And, to determine the efficiency of resource consumption, following a number of scientific works by specialists in the field of regional economics, it seems necessary to use a set of indicators that reliably reflect the return of a resource in the process of its consumption. First of all, it should be pointed out that the efficiency of fixed assets use is quantitatively illustrated by the "return on assets" indicator, calculated per 1 ruble of the cost of fixed assets. The set of resource renewal indicators considered above, as well as the indicators required for their calculation, are presented in Table 1. Investments in fixed assets per 1 ruble of fixed assets, RUB, is calculated as the ratio of the indicator "Investments in fixed assets" (in actual prices; million rubles), excluding investments in residential property, to the cost of fixed assets (at the year-end; at full book value; million rubles). In addition to renewal indicators, it is necessary to reflect the sources of resource quality measurement (Table 2). The degree of fixed assets non-depreciation is the inverse of the degree of depreciation, i.e., it is determined by deducting the degree of depreciation of fixed assets (at the year-end; in percent) from 100%. The methods of calculation and the statistical sources of the resource consumption efficiency indicators require special attention (Table 3). Capital productivity is calculated as the ratio of the indicators "Gross Municipal Product, million rubles" and "Fixed assets value (at the year-end; at full book value; million rubles)". In order to assess the social and economic development of the municipal districts of the Republic of Bashkortostan, the methodology for assessing the "urban product" proposed by the Global Urban Observatory, operating within the UN programme on human settlements, will be used [9].
Within the framework of this technique, the following method for calculating the GMP is proposed:

GMPi = GRPr × (NEi × AWi) / (NEr × AWr),

where GMPi is the assessment of the gross municipal product of the i-th municipal district; GRPr is the Gross Regional Product; NEr is the number of employed in the region; NEi is the number of employed in the municipal district; AWr is the average monthly wage in the region; and AWi is the average monthly wage in the i-th municipal district. Based on the comparison of these indicators, it is possible to determine the main issues of the region in the field of resource use and its production competencies, to identify potential areas for eliminating imbalances, and, as a result, to formulate a set of forms and methods that can ensure the most intensive use of the main types of resources in the regional economy, taking into account their specific features in a particular region. Results and Discussion The proposed author's approach "renewal-quality-efficiency" was tested on the example of the municipalities of the Republic of Bashkortostan. The analysis of the provision of the municipal formations of the Republic of Bashkortostan with basic production assets made it possible to identify 4 groups of municipalities. The first group is represented by the large industrial centers of the Republic of Bashkortostan: Ufa, Salavat, Ufa municipal district, Sterlitamak, Uchalinsky municipal district, Blagoveshchensk municipal district, Beloretsk municipal district, and Neftekamsk. Despite the decrease in this group's share in the total volume of production assets of the republic, it concentrates more than 85% of the republic's FCA on its territory. The share of the second group of municipalities slightly increased, from 2.47% in 2010 to 4.2% in 2018. At the same time, growth was recorded in all municipalities of this group, including municipalities with large industrial enterprises: the Khaibullinsky, Tuimazinsky, and Belebeevsky municipal districts and the cities of Oktyabrsky and Kumertau. The 3rd and 4th groups, which include more than 2/3 of the municipal formations of the Republic of Bashkortostan, account for less than 10% of the main production assets of the Republic of Bashkortostan. These facts testify to the high differentiation of municipalities in terms of the provision of FCA as the most important production base. An important factor in the efficiency of the production process is the quality of fixed assets. The limited list of statistical data on municipalities makes it possible to include in the system of indicators for assessing the production potential, as a characteristic of the quality of FCA, the share of non-depreciated FCA in the total volume of the municipality's FCA. The values of this indicator for the Republic of Bashkortostan practically did not change throughout the analyzed period. On average, the share of non-depreciated fixed assets in the republic for 2011-2018 exceeded 50%, which is a positive fact, although with a downward trend. At the same time, the FCA depreciation level in 20 municipalities of the republic is close to or exceeds 50%, increasing annually. It should be noted that in 2011-2018 the leaders among municipalities in terms of the share of non-depreciated FCA were municipalities that do not have a large number of capital-intensive production enterprises on their territory: the Blagovarskiy, Karaidelskiy, and Burzyanskiy municipal districts, Agidel, and others. The exceptions are the Ufa region (67.9%), Salavat (63.3%), and the Khaibulli district (58%).
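The indicator pipeline described above is straightforward to assemble. The following Python sketch computes the GMP apportionment and the renewal, quality, and efficiency indicators for a toy table of municipalities; every number and column name here is an illustrative assumption, not a figure from the study.

```python
import pandas as pd

def gross_municipal_product(df, grp_r, ne_r, aw_r):
    """GMP_i = GRP_r * (NE_i * AW_i) / (NE_r * AW_r)."""
    return grp_r * (df["employed"] * df["avg_wage"]) / (ne_r * aw_r)

# Illustrative inputs (hypothetical numbers, for shape only).
df = pd.DataFrame({
    "municipality": ["District A", "District B", "City C"],
    "employed":     [38_000, 52_000, 310_000],       # persons
    "avg_wage":     [31_000, 28_000, 42_000],        # RUB per month
    "fixed_assets": [21_000, 35_000, 880_000],       # million RUB, full book value
    "investment":   [3_200, 2_100, 96_000],          # million RUB, excl. housing
    "depreciation": [37.9, 48.5, 47.1],              # % of FCA depreciated
})

df["gmp"] = gross_municipal_product(df, grp_r=1_700_000, ne_r=1_800_000, aw_r=38_000)
df["capital_productivity"] = df["gmp"] / df["fixed_assets"]   # return on assets
df["renewal"] = df["investment"] / df["fixed_assets"]         # invest. per 1 RUB of FCA
df["non_depreciated"] = 100.0 - df["depreciation"]            # quality indicator
df["rank_productivity"] = df["capital_productivity"].rank(ascending=False)
print(df)
```

Ranking and grouping the municipalities by these derived columns then reproduces, in miniature, the typology step of the methodology.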
At the same time, the most important indicator of economic and production development is not only the availability and quality of a certain resource, but also the efficiency of its use. Let us analyze the efficiency of the use of production resources, in this case the use of fixed production assets. The indicator of the efficiency of using fixed assets is the return on assets rate [8]. In order to evaluate it, we use the value of the gross municipal product as a generalizing characteristic of the functioning of the municipal economy, and the cost of fixed assets for the year. The average capital productivity in the republic decreases annually, which indicates a drop in the efficiency of the use of fixed assets. The group of municipal formations with a level of capital productivity above the average for the Republic of Bashkortostan includes municipal formations that specialize in the production of agricultural products mainly in personal subsidiary plots, the provision of tourist services, etc., that is, formations with a low provision of basic production assets: the Davlekanovsky, Kaltasinsky, Baltachevsky, Buzdyaksky, Burzyansky, Kiginsky, Miyakinsky, Buraevsky, and other municipal districts. Among the industrially developed municipal and urban formations in 2011, high capital productivity was noted in the city of Kumertau (2740 thousand rubles), Ishimbay municipal district (2703.68 thousand rubles), Belebey municipal district (2525.74 thousand rubles), and Sterlitamak municipal district (2282.37 thousand rubles). By 2018, the trends remained: capital productivity above the average republican level was recorded in the Ishimbay municipal district (2734 thousand rubles) and Ufa (2266 thousand rubles), while the city of Oktyabrsky, despite an 11% decrease in capital productivity to 1875 thousand rubles, retained its place in the first group. At the same time, due to a sharp decrease in capital productivity, caused primarily by the growth rate of the FCA cost exceeding that of the gross municipal product produced, at the end of the rating of the municipalities were the Belebeevsky municipal district (1465 thousand rubles, or 58% of the 2011 level) and the Sterlitamak district (1409 thousand rubles, or 62% of the 2011 level). A downward trend in capital productivity was also noted in the Beloretsk municipal district (1591 thousand rubles in 2011 and 1298 thousand rubles in 2018), Sibay (from 1472 to 1249 thousand rubles), and Kumertau (from 2740 thousand rubles to 1082 thousand rubles). In the large industrial centers of the republic, an extremely low level of capital productivity was noted throughout the analyzed period: Sterlitamak, the Uchalinsky, Blagoveshchensky, and Ufa municipal districts, and Salavat. An important role in the formation of the production potential of municipalities is played by investment in the renovation of fixed assets. In order to analyze the investment activity of municipalities in the field of updating the production base, we use the indicator of the volume of investments in FCA per 1 ruble of basic production assets. On average in the Republic of Bashkortostan, the reproduction rate of FCA decreased by 27%, from 148.3 rubles in 2010 to 108.3 rubles in 2018. The dynamics of the indicator were unstable: the growth of 2011-2013 was replaced by a decrease in the crisis period of 2014.
Among the municipalities, the maximum growth in investments in fixed assets per 1 ruble of fixed assets in 2018 relative to 2011 was shown by municipalities with a low base value of the indicator in the base period: the Karaidel district (7.5 rubles) and the city of Agidel (6 rubles). Among the large industrial districts, a significant decrease in investments in fixed assets per 1 ruble of fixed assets was recorded in the following municipalities: Uchalinsky district (−57.2%), Sibay (−55.0%), Tuimazinsky (−52.0%), Ufa district (−48.6%), Belebeevsky district (−37.8%), city of Oktyabrsky (−13.3%), and city of Neftekamsk (−9.5%). Let us assess the dependence of the gross municipal product on the change in the use of FCA. The data are presented in Table 4. The data in Table 4 indicate the extensive development of the economy of the Republic of Bashkortostan, in which the volume of the gross municipal product depends primarily on the increase in fixed assets. If in 2011 the contribution of the capital-to-labor ratio to the GMP was 57% (coefficient of determination 0.57), then by 2018 it had dropped to 22%. The calculations of the indicators of the volume, quality, efficiency of use, and reproduction of FCA allowed us to identify the following patterns, which are most clearly traced in the example of the industrially developed municipalities of the Republic of Bashkortostan. The data are presented in Tables 5-6. Distribution matrices of the municipalities were also built based on the parameters of volume, quality, efficiency of use, and reproduction of FCA. The leader in the efficiency of using fixed assets, considering their quality and the volume of investments in renovation, is the Ishimbay district. In this municipal formation, the highest capital productivity is recorded, with a low share of depreciated FCA and high volumes of investments in the renovation of fixed assets. At the same time, the district has significantly improved its position, moving from 16th to 6th place in terms of capital productivity, from 38th to 12th place in terms of the quality of FCA, and from 48th to 9th place in terms of investments in the renewal of FCA. However, the production processes in the Ishimbay district in many respects repeat the situation in the Republic of Bashkortostan as a whole, where the growth of investments in capital assets outstrips the growth rates of production volumes, which leads to a decrease in the efficiency of using FCA. Thus, the 78% increase in the volume of investments in the district in 2018 compared to 2011 allowed the cost of FCA to increase by 56% and brought the share of non-depreciated funds to 62.1%; however, capital productivity in 2018 amounted to only 86% of the 2011 level. A similar trend is observed in another industrial center of the Republic of Bashkortostan, the Sterlitamak district: despite a high share of non-depreciated FCA (62.8%, 9th place in Bashkortostan) and high rates of reproductive processes (153.3 rubles per 1 ruble of fixed assets), capital productivity in 2018 decreased by 44% compared to 2011. Against the background of the rest of the municipal formations of the Republic of Bashkortostan, the city of Ufa stands out: with a high supply of FCA, it demonstrates a consistently high level of capital productivity, having moved in this indicator from 49th place in 2011 to 13th place in 2018.
In the city of Oktyabrsky, with a sufficiently high provision of fixed assets (11th place in the Republic of Bashkortostan in 2018, one position higher than in 2011), there is a rather low share of non-worn-out fixed assets: 52.9% (40th place in the RB). Compared with 2011, this indicator decreased from 68.4% (a decrease of 22.7%). At the same time, the return on assets amounted to 1875 thousand rubles. In the Ufa district in 2018, there was still a high level of provision of fixed assets (3rd place in the Republic of Bashkortostan) and a low level of capital productivity (60th place). At the same time, the value of the indicator characterizing the quality of the FCA deteriorated significantly: in terms of the indicator "share of non-worn-out fixed assets", the district dropped from 6th position in 2011 to 56th position in 2018 (−27.0%). In the Uchalinsky district, the situation is as follows: while maintaining its position in the Republic of Bashkortostan in terms of the provision of fixed assets in 2018 (5th place), capital productivity increased slightly (56th place against 59th), which is partly due to the improvement in the quality of FCA. At the same time, during the analyzed period there was a negative trend of reducing the costs of reproduction of production facilities, by 30.5% compared to 2011. Conclusions The study conducted on the basis of the proposed author's approach, through a comparative analysis of the volumes, quality, efficiency of use, and costs of restoring fixed production assets, made it possible to identify the following patterns in the production potential of the municipalities of the Republic of Bashkortostan in 2011-2018: − low return on assets, with a negative trend, on average in the Republic of Bashkortostan and in the municipal and urban districts that are large industrial centers; − concentration of FCA in municipal and urban districts with low efficiency of their use (Salavat, Sterlitamak, Neftekamsk, the Uchalinsky and Blagoveshchensk municipal districts, etc.); − high depreciation of basic production assets at the leading industrial enterprises of the republic; − a decrease in the volume of investments in fixed assets, on average in the Republic of Bashkortostan, by 27% over the period under review. Currently, a serious issue is the depreciation of basic production assets and their technological backwardness; the assets need not so much repair and reconstruction as replacement and modernization [6]. In the municipal districts with agricultural specialization, a rather low level of mechanization of production processes is observed. Without technological re-equipment, it will not be possible to ensure high yields and the production volumes required for the transition to sustainable economic development. The heterogeneity of the republic's municipal formations in terms of the level of production potential should also be noted; among the reasons are the specialization of the economy formed in Soviet times, the production base of large enterprises, etc. Increasing the efficiency of using production potential largely depends on the pace of implementation of innovative technologies: the creation of modern high-tech industries and the use of digital technologies, including in agriculture, which is one of the key sectors of the republic's economy. At the same time, the speed of investment processes is insufficient to ensure the technological renewal of the production base of the Republic of Bashkortostan.
Domestic Work and the Gig Economy in South Africa: Old wine in new bottles? Based on innovative, mixed-methods research, this article examines the entry of on-demand platform models into the domestic work sector in South Africa. This sector has long been characterised by high levels of informality, precarity, and exploitation, though recent regulatory advances have provided labour and social protections to some domestic workers. We locate the rise of the on-demand economy within the longer-term trajectory of domestic work in South Africa, identifying the 'traditional' sector as a key site of undervalued labour. On-demand domestic work platforms create much-needed economic opportunities in a context of pervasive un(der)-employment, opportunities that come with some incremental improvements over traditional working arrangements. Yet we contend that platform models maintain the patterns of everyday abuse found elsewhere in the domestic work sector. These models are premised on an ability to navigate regulatory contexts to provide clients with readily available, flexible labour without longer-term commitment, therefore sidestepping employer obligations to provide labour rights and protections. As a result, on-demand companies reinforce the undervalued and largely unprotected labour of marginalised women domestic workers. Introduction The gig economy, in which Uber-like digital platforms unite workers and purchasers of their services, is expanding globally. The model requires workers to perform task-based 'gigs', mediated through digital platforms, without the security or benefits usually associated with formal employment. 1 Though exponential growth is forecast in traditionally female-dominated sectors - notably on-demand household services including cooking, cleaning and care work 2 - relatively little research to date has focused on gendered experiences of gig work outside of North America and Europe. 3 This article discusses on-demand domestic work in South Africa. It explores platform models' effects on working conditions, their impact on the three key constituents of the gig economy (workers, platform companies, and clients), and the implications of their rise for the valuation of domestic work. Domestic work is persistently undervalued in South Africa (as elsewhere), where it is overwhelmingly the preserve of poor black African women. However, the domestic work sector is relatively large, occupying 6 per cent of the country's workforce, 4 and advocacy by unions and allies has led to incremental improvements to the regulatory framework governing the sector. Though these regulations are neither comprehensive nor generous - the relatively low entitlements they stipulate reinforce the marginal status of domestic workers - they have given advocates a foundation from which to argue that working conditions could be further improved through additional formalisation. 1 See V De Stefano, 'The Rise of the "Just-In-Time Workforce": On-demand work, crowd work and labour protection in the "gig-economy"', Comparative Labor Law and Policy Journal, vol. 37, issue 3, 2016, pp. 461-471; A Hunt and E Samman, Gender and the Gig Economy: Critical steps for evidence-based policy, Working Paper 546, Overseas Development Institute, London, January 2019, https://www.odi.org/sites/odi.org.uk/files/resource-documents/12586.pdf.
The ostensibly different operating model underpinning the gig economy has the potential to undermine this effort, and thus it is important to understand the impact of its entry into the domestic work sector. We begin by examining the characteristics of traditional and 'on-demand' domestic workers, and then explore the undervaluation of domestic work within South Africa. We argue that while on-demand platforms offer some improvements to workers over traditional employment arrangements, their goal of facilitating flexible labour leads to the continued normalisation of the labour exploitation of domestic workers. We conclude that both models undervalue domestic labour and perpetuate breaches in workers' labour rights, leaving workers in a highly precarious position. Characteristics of Domestic Workers Traditional and on-demand domestic workers share many common characteristics. This is unsurprising, given that many on-demand workers have previously worked under traditional domestic work arrangements or have continued with traditional work alongside platform-mediated gigs. The domestic workforce is overwhelmingly comprised of poor black African women. 5 Indeed, 98 per cent of our survey respondents were female and 97 per cent were black African. 6 Migrant workers from South Africa's rural areas or from adjoining countries furthermore form a significant share of the paid domestic workforce, especially in its less formalised segments. 7 Finally, we should note that domestic workers are relatively young, although we found that platform-based workers are on average slightly younger. In our sample, on-demand workers had a median age of 35, while traditional domestic workers had a median age of 41. 8 The Undervaluation of Domestic Work The historical undervaluation of domestic work is evident in both the traditional and on-demand models. At its core, this undervaluation stems from the gendered way in which the dominant economic concepts of 'productive' and 'unproductive' work are differentiated. Unpaid domestic work - mostly carried out by women - is categorised as an element of the 'household and care economy', and as a result domestic work is not seen as having intrinsic economic value. 2 Projections by PricewaterhouseCoopers, for example, forecast that 'on-demand household services will be the fastest growing sector' of the gig economy in the European Union (EU), 'with revenues estimated to expand at roughly 50% yearly through 2025'. (See J Hawksworth and R Vaughan, 'The Sharing Economy - Sizing the Revenue Opportunity', PricewaterhouseCoopers, 2014, http://www.pwc.co.uk/issues/megatrends/collisions/sharingeconomy/the-sharing-economysizing-therevenueopportunity.html; R Vaughan and R Daverio, Assessing the Size and Presence of the Collaborative Economy in Europe, Publications Office of the European Union, 2016, cited in Hunt and Samman, p. 10.) 3 On-demand services are provided locally, with the purchaser and provider in geographic proximity (in contrast to crowdwork, which takes place online). 4 'Quarterly Labour Force Survey: Quarter 3', Statistics South Africa, Pretoria, October 2018, retrieved 13 July 2020, http://www.statssa.gov.za/publications/P0211/P02113rdQuarter2018.pdf. 5 L Orr and T van Meelis, 'Women and Gender Relations in the South African Labour Market: A 20-year review', Labour Research Service, Cape Town, 2014, pp. 2-27. 6 See footnote 29 below for details of the survey methodology, including the sample size.
This undervaluation persists when domestic work is commodified: 'The gender stereotyping of unpaid care work, and the association of care with women's "natural" inclinations and "innate" abilities, rather than with skills acquired through formal education or training, lies behind the high level of feminization of care employment'. 9 Consequently, 'the fact that women's unpaid domestic work has been undervalued has had a negative impact on the salary and working conditions of remunerated domestic workers'. 10 In other words, paid domestic work, which is disproportionately carried out by women, is perceived as an extension of the unpaid work within the household. This has contributed to the frequent exclusion of domestic work from formal labour relations frameworks, and therefore to its perception as 'undeserving' of good working conditions, including decent remuneration. The undervaluation of domestic work relates to the characteristics of domestic workers, who are typically marginalised and subject to intersecting inequalities alongside continued systematic discrimination. These women experience sites of power disparity that go beyond gender to include race, migratory status, and social class. 11 In South Africa, 'the low social status and undervalued nature of domestic work has roots in the historical use of specific racial and cultural groups as servants and slaves', 12 exacerbated by the racialised nature of relations between black domestic workers and their white 'madams', 13 a pattern that has lingered despite the end of Apartheid. 14 With growing black affluence, class has featured more prominently in domestic worker-employer relations. Nonetheless, '[t]he result of the complex interplay between gender, race and class is, in many cases, a perception amongst employers that the domestic worker is a lesser creature'. 15 One outcome is the persistence of paternalist relationships between domestic workers and their employers. 16 Another is that the mobilisation of women, which is 'generally a necessary condition for changes in care-related policies', becomes less likely. 17 The location of domestic work also contributes to its undervaluation.

9 L Addati, U Cattaneo, V Esquivel, and I Valarino, 'Care Work and Care Jobs for the Future of Decent Work', International Labour Organization (ILO), Geneva, 2018, p. 8, http://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/---publ/documents/publication/wcms_633135.pdf.
10 G Labadie-Jackson, 'Reflections on Domestic Work and the Feminization of Migration', Campbell Law Review, vol. 31, issue 1, 2008, pp. 67-90, p. 82.
11 On intersectionality in the context of domestic work in South Africa, see D Gaitskell, J Kimble, M Maconachie, and E Unterhalter, 'Class, Race and Gender: Domestic workers in South Africa', Review of African Political Economy, vol. 10, issue 27/28, 1983, pp. 86-108. On migratory status, see L Griffin, 'Unravelling Rights: "Illegal" migrant domestic workers in South Africa', South African Review of Sociology, vol. 42, issue 2, 2011, pp. 83-101, https://doi.org/10.1080/21528586.2011
12 D du Toit and E Huysamen, 'Implementing Domestic Workers' Labour Rights in a Framework of Transformative Constitutionalism', in D du Toit (ed.), Exploited, Undervalued - and Essential: Domestic workers and the realisation of their rights, Pretoria University Law Press, Pretoria, 2013, p. 79.
Domestic work is largely conducted by isolated workers in the private sphere, which makes worker organising for better conditions with employers or for stronger government regulation more difficult. 18 It should be clear from this analysis that there are many obstacles to raising the value of domestic work in South Africa. They go well beyond the gig economy. However, we argue that existing intersecting inequalities, discrimination, and power differentials tend to be reinforced in the on-demand economy, deepening the existing analysis of domestic work and care platforms in the United States 19 as well as India, Kenya, Mexico, and South Africa. 20 In the remainder of this article, we demonstrate that the current operating model of platforms in South Africa is likely to perpetuate the labour exploitation of domestic workers. The next section explores working conditions for traditional domestic workers. This is followed by our analysis of the emergence of the on-demand economy. This section includes our methodology, outlines our empirical findings on labour conditions within the on-demand sector, and analyses the 'winners and losers' under this new model. We conclude with reflections on the policy implications of this research.

14 However, see also S A Ally, From Servants to Workers: South African domestic workers and the democratic state, University of KwaZulu-Natal Press, Scottsville, 2010. Ally argues that the post-Apartheid state's regulation of domestic work 'depersonalised' employer-employee relations, thereby threatening domestic workers' use of personal relations to negotiate their working conditions, which is itself a unique characteristic of the nature of domestic work.
19 N van Doorn, 'Platform Labor: On the gendered and racialized exploitation of low-income service work in the "on-demand" economy', Information, Communication & Society, vol. 20, issue 6, 2017, pp. 898-914.

Labour Conditions in the 'Traditional' Sector

Working conditions for domestic workers in South Africa have been historically poor, characterised by informality and exploitation. There have, however, been recent attempts to improve the situation. Unions such as the South African Domestic Service and Allied Workers Union (SADSAWU) have been leading sustained campaigns for decent wages and adequate workers' protection. Government attempts to establish a regulatory framework include the introduction of 'Sectoral Determination 7' in 2002, which mandated a minimum wage and basic working conditions such as formal employment contracts and the compulsory registration of workers with the Department of Labour, a change that enables them to benefit from the Unemployment Insurance Fund (UIF). In 2013, South Africa ratified ILO Convention 189 on Domestic Work, setting a new benchmark for improved conditions in the sector based on the key pillars of 'decent work'. These include recognition of domestic work as 'real work', formalisation through contracts, adequate wages, social protection, health and safety in the workplace, and rights to organising and social dialogue. 21 In 2018, the Department of Labour proposed extending workers' compensation to domestic workers, and in May 2019, the Pretoria High Court ruled that their exclusion from the Compensation for Occupational Injuries and Diseases Act of 1993 (COIDA) was unconstitutional. As of early March 2020, the Constitutional Court began proceedings over whether to instruct Parliament to amend COIDA to include domestic workers. 22 Nonetheless, significant challenges remain.
While the National Minimum Wage Act of 2018 specified a minimum hourly wage of ZAR 20 (approx. USD 1.40 at the time), as of January 2019, the minimum wage for domestic workers was set at only 75 per cent of the national minimum. 23 SADSAWU and other labour rights organisations continue to highlight the insufficiency of this wage to meet the cost of living, as well as its symbolism for the undervaluation of domestic work vis-à-vis other forms of work to which a higher minimum wage applies. Implementation of regulation also remains patchy. An unknown (but presumably sizeable) number of domestic workers continue to work informally, and several categories of domestic workers remain excluded from social protection provisions. 24 For example, one recent estimate suggests that approximately one-third of the domestic workers who work the requisite 24 hours or more per month remain unregistered with the UIF. 25 Those excluded include foreign individuals working on contracts, as well as individuals employed for less than 24 hours a month by a single employer, a key barrier given that many domestic workers work part-time for multiple employers. 26 Employer non-compliance and domestic workers' limited awareness of their rights further impede implementation.

22 'Why the Concourt Case for Domestic Workers is So Important - for Employers Too', Eyewitness News, 10 March 2020, https://ewn.co.za/2020/03/10/why-theconcourt-case-for-domestic-workers-is-so-important.
23 The Sectoral Determination of Minimum Wages for Domestic Workers (December 2018) adds detail based on location and weekly hours worked. As of 20 March 2020, the minimum wage was raised to ZAR 20.76 for most workers and ZAR 15.57 for domestic workers. See 'This is South Africa's New Minimum Wage', Business Tech, 18 February 2020, https://businesstech.co.za/news/finance/374890/this-is-southafricas-new-minimum-wage.
24 du Toit.

The On-Demand Economy: Labour conditions, winners and losers

The entry of digital platforms into the domestic work sector in South Africa builds upon an established model while also adding new features. Compared to other options open to domestic workers (notably but not exclusively traditional domestic work arrangements), digital platforms offer some positive features that workers value and which improve their working conditions. However, workers also identified several ways in which the on-demand model perpetuates their precarious working conditions. The data that informs this article was collected as part of a broader two-year research project exploring gender and the gig economy in Kenya and South Africa. 29 In South Africa, novel methods of data collection included a nine-round, automated voice response (AVR) survey with workers active on a domestic work platform, and the analysis of data from this same company. 30 It should be underlined that while the platform provided access to its data and contact information for registered workers, the study was fully independent: the survey was conducted through an independent company that secured consent from workers to survey them and anonymised the data that was collected.

29 Nearly 650 workers (around one-third of the total) who were on the platform as of August 2018 responded to an invitation to complete the first round of a survey covering their background and motivations for engaging with the platform.
We could not comprehensively investigate self-selection into the survey or non-response because, for privacy reasons, the platform deliberately collected minimal personal details regarding registered workers. Subsequent response rates varied between 25% and 42% for the first five rounds of the survey, after which we had to reduce the sample size due to budgetary restrictions and a rise in the price of mobile phone airtime. See Hunt et al. for more details.

The study also involved qualitative interviews with workers who were active or had previously been active on gig platforms (16 direct interviews and one focus group discussion comprised of 10 participants). We also conducted three key informant interviews: with a team of academics, one domestic worker union representative, and one platform representative. Several domestic work-focused platforms exist in South Africa. Although platforms evolve and change regularly, they typically offer a smartphone-operated app that allows clients to access the profiles of workers whose availability and profile match their preferences for domestic service provision. These same apps also offer ways for workers to sign up, manage gigs, and receive payment. At the time of data collection, the platform studied for this research enabled clients to make bookings of three hours or more and gave them a way to tip workers. On the worker side, it offered an hourly rate based on their tenure with the platform and a premium for taking on gigs cancelled by others. The platform's method of recruitment included an application and selection process, migration status and criminal record checks, and orientation sessions on using the platform. It covered the cost of cleaning supplies, while workers paid for transport and the cost of their airtime (the platform has since developed a data-free app, eliminating airtime costs for workers). 31

31 Detailed information is provided in Hunt et al.

Labour Conditions in On-Demand Domestic Work

Our exploration of the conditions of gig work focused on: earnings and income stability; flexibility in the location and timing of work; safety and security; social protection; opportunities for learning and the professionalisation of service provision; and possibilities for collective organisation and bargaining. We briefly outline our findings on each in turn.

Earnings and Income Stability

Our analysis demonstrates that, as of December 2018, workers on the platform with five days of availability were earning ZAR 900 (USD 65) on average per week. This was around 45-50 per cent higher than the minimum wage for domestic workers (working at least 27 hours per week) of ZAR 616 (USD 45), but it still falls short of the amount needed for a family of four to exceed the poverty line (estimated at between ZAR 1,031 and ZAR 1,319 per week per member). 32 Moreover, the overhead financial costs of gig work, e.g. airtime and transport costs between gigs, depress platform-based earnings. Most survey respondents (84 per cent) reported being their household's primary earner, while nearly all (95 per cent) had financial dependents. Many also signalled that their household incomes were insufficient to meet their basic needs and financial responsibilities. Utilisation rates for 'full-time' workers (those available for work five or more days weekly) averaged around 60 per cent over a one-year period. 33
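A quick check of the arithmetic behind these earnings comparisons, using only the figures quoted above (the variable names are ours, and this is a worked example rather than part of the study's analysis):

```python
# Worked check of the earnings figures quoted above (all amounts in ZAR).
weekly_platform = 900.0                 # average weekly earnings, five days' availability
weekly_minimum = 616.0                  # minimum wage at >= 27 hours/week
poverty_per_member = (1031.0, 1319.0)   # estimated weekly poverty line per household member

premium = weekly_platform / weekly_minimum - 1.0
print(f"Premium over the domestic worker minimum wage: {premium:.0%}")  # ~46%, within the quoted 45-50%

low, high = poverty_per_member
print(f"Shortfall against the per-member poverty line: ZAR {low - weekly_platform:.0f} to ZAR {high - weekly_platform:.0f}")
```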
In addition, the irregularity in receiving bookings meant that some gig workers experienced significant changes in their incomes from week to week, as demonstrated by an average variation from mean earnings of close to 50 per cent weekly. Some workers fared better than others on the platform. The top 10 per cent of full-time workers were taking on around one quarter of the available hours of work carried out by full-time workers, with this 'success' linked to ratings, length of tenure on the platform, and being relatively more available to take up gigs. Nevertheless, over half of survey respondents (56 per cent) reported being satisfied or very satisfied with their pay. A significant share also reported that their hourly earnings were higher than they would have been in other types of work: 37 per cent reported that working through the platform was more lucrative than other jobs on an hourly basis, and 40 per cent indicated this was 'sometimes' the case. Once registered, gig workers tended to engage in other forms of paid work alongside platform work. Around half (52 per cent) of survey respondents reported having an additional job or business, or that they also worked for another platform. However, the platform typically provided the bulk of workers' income: 73 per cent identified the platform as the main source of their earnings in the previous month. Participants in face-to-face interviews mentioned having recently undertaken other types of casual or informal work, with several reporting street vending, working in shops, and commercial cleaning work. However, paid domestic work was the most frequently cited work engaged in before joining the platform, whether live-in or day labour within private homes, sometimes obtained via an agency, and many continued to provide domestic labour through traditional means alongside gig work. Taken together, these findings suggest that despite gig earnings being inadequate, they are still better than other options, notably domestic work in traditional households.

33 We computed utilisation rates, discounting voluntary 'days off', between November 2017 and December 2018.

Flexibility

Flexibility in line with workers' preferences is a core offering that platform companies advertise to their workforce. It is often portrayed as particularly advantageous to women due to the potential it offers to balance paid work with unpaid care and domestic work. 34 We found some evidence of workers being able to work on days that they preferred: 92 per cent of survey respondents reported having worked on convenient days during the previous week and 88 per cent reported having worked at convenient times. Yet our interviews suggested mixed experiences among workers. Several interviewees agreed that platform work was more flexible than other types of work, including traditional domestic work. Alongside low pay in previous roles and/or persistent unemployment, this flexibility was cited as a reason to join the platform. However, this ostensible flexibility must be interpreted alongside other features of the platform model. First, the model allows platforms and clients to contract workers only when they need them. This means that the platform can respond to fluctuating demand at minimal cost, and that client demand for bookings de facto takes precedence over workers' timing preferences. Second, the ability of clients both to book and cancel cleaners on an ad hoc basis, a key aspect attracting clients to the platform model, introduces considerable uncertainty for workers.
Third, fixing gig booking lengths in advance increases the likelihood that clients will insist on more work than can reasonably be done in the agreed time, putting pressure on workers to acquiesce or risk being rated negatively and/or losing the client entirely. The location of gigs was also a challenge. The persistent legacy of racial and economic segregation in South Africa means that many workers live in townships or other low-income areas. These are geographically far away from the more affluent neighbourhoods of their clients, and travelling between the two on public transport is rarely easy. While workers can specify on the platform where they wish to work, this aspect of flexibility was often shaped by logistical and/or financial concerns.

Safety and Security

As in the traditional sector, violence against on-demand domestic workers is a concern. On-demand domestic work also comes with safety risks particular to providing services to a range of different and unknown clients in their homes. 35 Some workers reported instances of rude, aggressive, or abusive treatment while working behind closed doors. The physical urban environment in South Africa, characterised by long distances, poor transport links, and extremely high levels of crime and insecurity, presents further risks. Early gig start times were raised multiple times as a safety issue, and workers reported several instances of armed and aggressive robbery while travelling to and from gigs.

Labour and Social Protection

A chief critique of gig platforms is that the classification of workers as 'independent contractors' (which denies them the status of employees) restricts their access to labour and contributory social protections, while removing the need for platforms to make contributions on their behalf. 36 The model does not guarantee entitlements mandated within South Africa's domestic worker employee regulation. Moreover, platform company representatives frequently express aversion to becoming recognised employers (as opposed to technology companies, as discussed further below), although some have extended basic protections to workers through limited private schemes. The relatively progressive platform involved in this research, for example, had instituted various measures aimed at improving working conditions, including making accidental death and disability coverage available to workers via a private insurance company. Workers' status on the platform meant that routine life events risk further exacerbating economic precariousness. Workers often had limited or no income during maternity periods in which they could not work, which was especially pertinent since a majority were single mothers. Furthermore, workers' coverage by public social protection was low: the platform's polling of its workforce in 2019 suggested that just 5 per cent of on-demand domestic workers reported being registered for the UIF (which would give them access to public maternity benefits), while 32 per cent did not know whether or not they were registered. 37

35 Hunt and Machingura.

Learning and the Professionalisation of Service Provision

Workers expressed satisfaction with the professional development the platform afforded; 91 per cent of survey respondents believed that the work they did through the platform gave them opportunities for 'learning on the job'. The model also appeared to enable a significant share of workers (26 per cent) to pursue studies alongside work.
The platform representative we interviewed spoke of plans to provide training in soft skills such as scheduling and customer interaction. Despite this, attempts to professionalise on-demand domestic work, including through increasing and certifying worker skills, have not yet translated into widespread increased valuation of workers or the labour they provide. Some evidence suggests that investing in skills development, certification, and other forms of domestic work 'professionalisation' is important for increasing its societal and economic valuation. 38 Indeed, the platform we collaborated with had sought to challenge client perceptions of domestic work as a low-value commodity by presenting it as a professional service meriting 'above market' rates. 39 But although the company had started out with higher prices for clients, clients did not make bookings until prices were lowered. 'Razor thin margins and no willingness to pay' among clients made raising earnings an extremely challenging proposition for the start-up company. 40 Moreover, several interviewees spoke of a continued lack of respect and poor treatment from clients, suggesting on-demand models have not caused clients to value domestic workers more.

Collective Organisation and Bargaining

Formal gig-worker organisation is nascent in South Africa, with few signs of successful collective action in the on-demand domestic work sector. 41 Indeed, the platform model excludes workers from fundamental labour rights such as freedom of association, collective bargaining, or protection against discrimination or unfair dismissal. None of our survey respondents reported membership in any formal group that would advocate for their rights: 32 per cent said they did not know how to join such a group and 26 per cent felt that such organisations were for workers in the 'formal economy'. While SADSAWU reported receiving some complaints from platform workers, it had not yet had the capacity to focus on them. It also noted that workers would need to be members of the union to receive structured assistance. 42 That said, many workers reported being in informal communication with one another: 74 per cent reported interacting with others on a regular basis, most commonly through WhatsApp. While certainly a source of support to these workers in lieu of other options, the informal nature of these private groups, and the lack of any formal organising or bargaining mechanism, prevents them from transforming into meaningful collective action.

40 Ibid.
41 With notable exceptions, such as the legal case against Uber in South Africa (discussed in the Companies section below), which the company later successfully appealed.

Winners and Losers of On-Demand Domestic Work

Building on the analysis of working conditions, this section explores beneficiaries and losers from the rise of on-demand domestic work. We consider three core constituencies: workers, companies, and clients. The core challenge for workers in South Africa is that of employment 'quality vs. quantity'. Although platforms play a growing role in generating paid work, some clearly provide better conditions than others, as evident in the results of the University of Oxford's 'Fairwork' index, which ranks platform companies according to principles covering fairness in pay, health and safety provisions, contracting, management, and representation. 43
Nonetheless, even where platform representatives report a wish to provide quality economic opportunities, a context of high unemployment, informality, and a weak regulatory environment makes it possible for decent work standards to remain unmet. Clients are likely to benefit from securing flexible on-demand domestic work with few employer obligations.

Workers

The chief motivation for domestic workers to engage in gig work in South Africa, despite the multiple challenges it presents, appears to be economic necessity. It is important to recall the broader structural constraints that limit the availability and quality of work available to marginalised and disadvantaged women in South Africa, including an economy characterised by widespread un(der)employment and informality, persistent discrimination, and a challenging physical urban environment. Many interviewees highlighted a lack of other options and reported that platform work offered them some tangible benefits over both unemployment and the other forms of work realistically available to them. These included higher hourly earnings, some choice over work hours, and having an intermediary between them and clients. Indeed, 91 per cent of survey respondents reported that gig work gave them greater freedom and control in their work. From this perspective, any constraints to platform operations through further regulation are likely to restrict workers' economic opportunities, under a model that many perceive as having relative advantages. This backdrop is hardly promising for improvements in working conditions, even where platforms seek to charge higher rates to clients than in the traditional sector and pass on (some of) this surplus to workers. The traditional domestic sector has been a key source of work for many marginalised women in South Africa, and platform companies are fully cognisant that they are operating within a context of poor labour conditions. Indeed, platforms are reliant on having a large pool of workers willing to provide cheap and readily available labour. This means that their offering can come with only minimal security, rights, and protections and still, in some ways, be better than what is found in the traditional sector. In other words, it is a relatively better option. But by neither fully meeting workers' needs nor advancing a quality work framework, it can also be argued that they are helping to maintain the traditionally inadequate working conditions that have long characterised domestic work. Indeed, they can do this because weak regulatory institutions (and enforcement), widespread unemployment (currently averaging 30 per cent for women), 44 and deeply entrenched structural challenges give workers little choice but to take whatever paid work comes their way. A lack of protections typifies South Africa's informal economy (and other low- and middle-income contexts). But what distinguishes the platform economy is that these gaps are built in by design. Domestic work, as we have seen, provides an income to the most insecure workers, who often lack other forms of social protection, such as living in a household with a member who has social insurance or who receives a government grant. Only 27 per cent of our survey respondents lived in households receiving a South African Social Security Agency (SASSA) grant (while 12 per cent were unsure), compared with 70 per cent of households nationally.

42 Key informant interview, SADSAWU representative.
It follows that these workers are most in need of the rights and protections that employee status would confer. The platform has provided limited protections to workers, including privately provided microinsurance for accident and disability coverage, instead of the contributions to public social protection which would normally be provided by employers. However, public schemes are more likely to confer protection upon workers, while 'more individualized forms of protection, such as private insurance or individual accounts, do not comply with most social security principles, and therefore are outside the core of social protection systems'. 45 Indeed, the privatisation wave in the 1980s and 1990s demonstrated the underperformance of such schemes and raised serious doubts about an increased role for private provision. 46 Accordingly, public social protection systems financed through an appropriate blend of taxes and contributions are more likely to guarantee adequate social protection, ensure fiscal and economic sustainability, and give due regard to social justice and equity. Such an approach has the potential to promote a stronger social contract by allowing for risk pooling and redistribution among different groups within the population. 47 Behrendt et al. conclude that 'proposals that weaken social insurance in favour of private insurance and individual savings arrangements, with their limited potential for risk pooling and redistribution, will likely increase poverty, especially for vulnerable low-income earners and those with non-linear work careers, and exacerbate inequality, including gender gaps, and thus can only be voluntary mechanisms to complement stable, equitable and mandatory social insurance benefits'. 48 The delinking of platform companies from social insurance is not inevitable. GoJek, the largest gig platform in Indonesia, is notable for having developed the pioneering SWADAYA programme in 2018 in partnership with the country's public social security system, which adds a social insurance contribution to the price of its services. 49 However, because it voluntarily opts to provide this scheme rather than being mandated to do so by law, it has the option of changing or revoking the programme at any time. In short, even if workers perceive short-term benefits from engaging in platform work, the concern is that its operating model could undermine legislative gains achieved within the traditional sector in the longer term. In turn, this could worsen working conditions and make workers dependent on company goodwill rather than concrete entitlements to labour rights and government social protection. Domestic work is inherently insecure work in which marginalised women are overrepresented, yet their lack of power and socio-economic marginalisation means they are too often excluded from such protections, especially since they often do not qualify as or are not recognised as employees.

44 'Unemployment Drops in the Fourth Quarter of 2018', Statistics South Africa, 13 February 2019, retrieved 13 July 2020, http://www.statssa.gov.za/?p=11897.
45 C Behrendt, Q A Nguyen, and U Rani, 'Social Protection Systems and the Future of Work: Ensuring social security for digital platform workers', International Social Security Review, vol. 72, issue 3, 2019, pp. 17-41, https://doi.org/10.1111
46 Ortiz, cited in Behrendt et al.
Indeed, this motivated a long-fought effort by domestic worker unions and other allies in South Africa, leading to one of the strongest regulatory and social protection frameworks for traditional domestic work globally, which the on-demand economy risks undermining.

Companies

The legal framework underpinning gig work is a recurring challenge. Should gig workers be classified as employees and platform companies as their employers? This issue's importance is reflected in litigation seeking the application of regulation and/or confirmation of employee status (with its associated protections and benefits), which is being pursued by workers and labour advocates in many countries. Some analysts argue that on-demand models herald a new form of working which renders current regulatory approaches ambiguous or even obsolete, and that a new classification is needed. 50 Others argue instead that such a reappraisal would merely undermine existing standards by evading the application of current sectoral regulation. 51 Legislative debates over gig workers' employment status in South Africa have been confined to the ridesharing sector, most notably a case for unfair dismissal brought against Uber by deactivated drivers in 2017. Despite Uber's defence that drivers were not employees, and therefore could not be dismissed, the Commission for Conciliation, Mediation and Arbitration (CCMA) ruled in the drivers' favour, although the decision was later overturned on appeal. So far, this burgeoning advocacy has not led to the wholesale recategorisation of gig workers as employees. This means that platform companies 'win' from the growing gig economy chiefly by positioning themselves as brokers between clients and workers, rather than as employers. They capture value from workers' labour by charging commissions on gigs, while at the same time circumventing the responsibility to uphold labour rights and contribute to social insurance on workers' behalf. Per Aloisi and De Stefano, 'the lack of compliance with labour- […]'.

Platform companies, in turn, argue that innovation is needed to provide employment (particularly in high-unemployment contexts like South Africa); that they provide their own support to workers where viable (e.g. private insurance); and that their operating model bolsters work quality in settings where poor-quality work is endemic. They contend that any attempt to reclassify users of their platform as employees would severely hamper their profit-making ability, due to the attendant obligations in terms of worker taxation and employee contributions, and consequently jeopardise their very existence and therefore the economic opportunities they facilitate. Furthermore, by arguing that they need a favourable operating environment to 'create' jobs, platform companies may reduce the South African government's political will to carry out oversight. The government may well gamble that it is more politically expedient to support the creation of 'digital jobs' amid high unemployment, as Kenya's government has done, 53 than to increase the regulation of labour conditions and taxation. Indeed, a 'social partners' framework agreement for addressing South Africa's unemployment crisis through 'broad-based improvement in the business environment and conditions for entrepreneurial development' and strong encouragement of 'adopters of new technology to use innovation as a means to save and grow jobs' was agreed during the national Presidential Jobs Summit held in October 2018, with scant reference to job quality. 54
In short, there is a strong case that the profit-making model of on-demand companies in South Africa currently depends on the historic inability of domestic workers to establish a de facto employment relationship (and the better conditions that accompany it), as well as on poor enforcement of existing regulations governing traditional domestic work. If these challenges were tackled in a meaningful way, companies would likely be obliged to emulate traditional employers in paying employee taxes and UIF contributions, which could in turn lead them to reduce the opportunities available (e.g. by ensuring no worker works more than 24 hours per month, which would render the company exempt from having to pay UIF contributions).

Clients

At the time of writing, the platform charged its clients a variable rate that depended on the number of hours booked, ranging from ZAR 48 (USD 3.48) per hour for a four-hour booking to ZAR 30.5 (USD 2.21) per hour for a ten-hour booking, which represents the maximum length. This is clearly far higher than the government-mandated minimum wage for domestic workers (ZAR 16.03 [USD 1.16] hourly for a worker employed for fewer than 27 hours weekly, per the 2018 Sectoral Determination). However, the higher hourly cost to clients of hiring a platform worker is offset by lower transaction costs, e.g. those associated with selecting, screening, and supervising a worker found independently. This is a key attribute of the on-demand model that was highlighted by a platform representative we interviewed. They explained that, by allowing the platform to carry out these processes, clients were ensured a 'professionalised' service in return for paying higher prices. In addition, clients avoided the economic commitment of guaranteeing employment for a set number of hours' work and the bureaucratic processes associated with being an employer as stipulated by South African labour law. Therefore, it could be argued that, from a client perspective, an important advantage of the platform model is that it de facto provides a service that evades compliance with labour or social security regulations. Such a trend significantly threatens the hard-won gains of the domestic worker movement and risks eroding the better-quality formal jobs where they exist, should the platform economy secure a sizeable market share. This is likely to impact negatively upon the cohort of domestic workers who remain relatively marginalised but have managed to secure access to higher standards and securities in the traditional sector.

Conclusion

At present, neither the traditional nor the on-demand model can be said to offer decent domestic work. In both spheres, higher standards and their enforcement are needed to redress historical power inequalities and ongoing breaches of the labour rights of South Africa's domestic workforce. The trajectory of the gig economy to date suggests that platform companies, with an inherent profit motive, are unlikely to lead the charge towards a wide-scale revaluation of domestic work; nor are household purchasers of workers' labour. Broader societal reforms are therefore needed to shift the social norms underpinning the discrimination and structural inequality characteristic of the domestic work sector. Government action is also needed, so that traditional domestic workers benefit from the same labour protections as workers in other, more highly valued sectors, and to ensure that existing regulation is enforced in the platform economy.
Indeed, without compliance with labour rights and protections, on-demand workers are unlikely to benefit fully from 'collective bargaining, protection from unfair dismissal and all the legal protection that goes with formal employment, that goes in inverted commas, if and when they become employees', per a Social Law Project representative. 55 As it stands, incremental improvements notwithstanding, we find that on-demand models can be seen as largely 'more of the same'. They capitalise on the undervalued labour of marginalised women workers and uphold the power held by the purchasers of their labour that characterises the traditional sector. The platform economy therefore represents a continuation of the normalisation of the labour exploitation of domestic workers. It is critical to extend labour and social protections to all domestic workers in a sustainable and comprehensive way, for which an increased societal valuation of domestic work, and of domestic workers, is a prerequisite. Policy-makers and platform companies have a central role to play in ensuring these rights: providing regular, fair, and adequate earnings; facilitating access to public social protection; ensuring workers' health and safety; and supporting collective action so that policy and practice reflect workers' own priorities.

Abigail Hunt is a Research Fellow at the Overseas Development Institute, where she leads research focused on gender and the world of work. She is particularly interested in marginalised women workers' experiences of new and emerging labour market trends, unpaid care and domestic work, and social protection. Email:<EMAIL_ADDRESS>

Dr Emma Samman is a Research Associate with the Overseas Development Institute and an independent consultant. Her research centres on the analysis of poverty and inequality, particularly gender inequality, the human development approach, the future of work, and the use of subjective measures of wellbeing to inform research and policy. Email: e.samman@odi.org.uk
9,685.2
2020-09-28T00:00:00.000
[ "Economics" ]
A Comprehensive Investigation of Gamma-Ray Burst Afterglows Detected by TESS

Gamma-ray bursts produce afterglows that can be observed across the electromagnetic spectrum and can provide insight into the nature of their progenitors. While most telescopes that observe afterglows are designed to react rapidly to trigger information, the Transiting Exoplanet Survey Satellite (TESS) continuously monitors sections of the sky at cadences between 30 minutes and 200 s. This gives TESS the capability of serendipitously observing the optical afterglows of GRBs. We conduct the first extensive search for afterglows of known GRBs in archival TESS data reduced with the TESSreduce package, and detect 11 candidate signals that are temporally coincident with reported burst times. We classify three of these as high-likelihood GRB afterglows previously unknown to have been detected by TESS, one of which has no other afterglow detection reported on the Gamma-ray Coordinates Network. We classify five candidates as tentative and the remainder as unlikely. Using the afterglowpy package, we model each of the candidate light curves with a Gaussian and a top-hat model to estimate burst parameters; we find that a mean time delay of 740 ± 690 s between the explosion and the afterglow onset is required to perform these fits. The high cadence and large field of view make TESS a powerful instrument for localising GRBs, with the potential to observe afterglows in cases when no other backup photometry is possible, and at timescales previously unreachable by optical telescopes.

INTRODUCTION

Gamma-ray bursts (GRBs) are the most powerful explosions we observe in the universe, surpassing all other catastrophic events in terms of electromagnetic luminosity (Gill & Granot 2022). Peaking at γ-ray wavelengths, these bursts release rapid jets of radiation that can carry more energy in the span of a few seconds than the Sun will in its entire 10-billion-year lifetime (Fishman & Hartmann 1997). GRBs emit most of their radiation in narrow, conically shaped jets (Sari et al. 1999) which can be described using a number of geometric parameters, most notably θ_obs, θ_c, and θ_w, as shown in Fig. 1. These represent the angles measured from the burst axis to the observer, jet core, and outermost jet boundary respectively. The majority of the energy released by GRBs is concentrated within the region defined by the jet core, θ_c, whereas beyond the boundary, θ_w, the energy emission is virtually negligible. Consequently, in general θ_obs < θ_w.

Beyond their geometric properties, GRBs are also described by their durations and energies. The duration of a GRB is quantified using the parameter T_90, which is defined as the time interval during which the cumulative number of detected photons increases from 5% to 95% of the total recorded counts (Kouveliotou et al. 1993). Based on this definition, GRBs are classed into two subcategories: short (T_90 ≤ 2 s) and long (T_90 > 2 s). The former are attributed to merger events between extremely compact celestial bodies (Eichler et al. 1989), such as neutron stars or black holes. There has been significant evidence over the past decade to support this hypothesis, most notably the detection of a short GRB accompanying the gravitational wave event GW170817 (Abbott et al. 2017).

Long GRBs, constituting approximately 70% of all GRB detections (Kouveliotou et al.
1993), are thought to arise from an entirely different process than short GRBs. Because these events have a strong correlation with galaxies exhibiting high rates of star formation, they are strongly associated with hypernova explosions, resulting from the core collapse of massive stars (Woosley 1993). The duration of these bursts can range up to several hundred seconds, with an average T_90 of ∼20 s. Unsurprisingly, their greater time spans and abundance make them easier to detect and study.

The prompt GRB emission is followed by an afterglow at longer wavelengths that persists on timescales ranging from minutes to, in rare cases, months (Mészáros & Rees 1997). These afterglows are powered by synchrotron emission as ultra-relativistic particles interact with and shock the circumburst medium (CBM) (Gao & Mészáros 2015). Because they depend on the CBM and the inhomogeneous magnetic fields of the source, detailed observations of afterglow light curves can be used to deduce properties of the progenitor and the surrounding region (Wang et al. 2018). Open-source packages such as afterglowpy can be used to model these properties (Ryan et al. 2020).

The rapid nature of GRB emission makes observing them challenging. At present, GRBs are primarily detected with the Fermi Gamma-ray Space Telescope and the Neil Gehrels Swift Observatory, which are fitted with the Gamma-ray Burst Monitor (Fermi-GBM; Meegan et al. 2009) and the Burst Alert Telescope (Swift-BAT; Barthelmy et al. 2005) respectively. Once a GRB is observed by either instrument, a 'trigger' is issued to the Gamma-ray Coordinates Network (GCN) for prompt follow-up by optical telescopes. In cases where a burst is too rapid or faint for precise localisation, the GCN is unable to mobilise adequately, which can prevent follow-up photometry from being performed (Berlato et al. 2019).

Telescopes with high-cadence observations over large areas can provide unique data for studying GRBs through serendipitous observations. The Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015) is one such telescope, with a 2160 deg^2 field of view (FoV) and near-continuous observation over each ∼27-day Sector. With cadences decreasing from 30 minutes (2018-2020), to 10 minutes (2020-2022), to 200 seconds (2022-present), TESS is becoming an increasingly ideal instrument for serendipitous detection of GRB afterglows, particularly with respect to obtaining observations during the rising phase. The capability of TESS to capture these afterglows was shown in the analysis of GRB191016A (Smith et al. 2021). While well-localised GRBs such as GRB191016A could be readily identified and studied, the large data volume and challenging systematics have limited the extent to which TESS observations could be utilised for GRB analysis.

In this paper we build on the TESSreduce pipeline (Ridden-Harper et al. 2021) to conduct the first systematic search for afterglows originating from known GRBs that were serendipitously observed by TESS. We also model the resulting light curves using afterglowpy. Section 2 describes the TESS data and the source catalogues we utilise in this project. Section 3 describes the pipeline we implement to search for afterglows, and presents the results. Section 4 overviews the likelihood that the candidates are GRB afterglows. We also discuss how these results support the use of TESS for detecting and characterising rapid transients.

Gamma-ray Coordinates Network

We use the GRBweb event list 1 compiled from sources in the GCN (von Kienlin et al. 2020; Lien et al.
2016; Ajello et al. 2019; Hurley et al. 2013; Barthelmy et al. 2000) as a catalogue of all known bursts over the lifetime of TESS operations. With the detection time, coordinates, and associated 1-dimensional position error (1σ), we check for TESS observations that overlap in time and with the 2σ error region. Due to TESS's wide FoV, we have the capacity to search for afterglows with large positional errors that were otherwise not observed.

From the total list of 1444 GRBs that have occurred since the launch of TESS (as of February 2023), we find that 69 have coincident TESS observations. Of these 69 bursts, only GRB191016A was known to have been detected by TESS prior to this study. One other GRB, occurring in March 2023, has subsequently been detected and analysed (Fausnaugh et al. 2023). Among our sample, 56 are long GRBs with T_90 > 2 s, and 4 are short GRBs with T_90 < 2 s (GRB180727A, GRB190507C, GRB221004A, GRB221120A); the remaining 9 GRBs have no solid constraint on T_90.

TESS

TESS observes 24° × 96° sections of the sky continuously for ∼27 days in a highly elongated orbit around Earth. During its lifetime TESS has recorded Full-Frame Images (FFIs) at three cadences: 30 minutes (25th July 2018 - 5th July 2020), 10 minutes (5th July 2020 - 31st August 2022), and 200 seconds (31st August 2022 - present). It has a very broad bandpass ranging from the optical to the near-IR (∼600-1050 nm), and a low angular resolution of 21 arcseconds per pixel. Each ∼27-day observation period constitutes a TESS 'sector'. The time series of calibrated TESS full-frame images (FFIs) for each sector are made available through the MAST archive (doi:10.17909/0cp4-2j79). The images are released as 2078×2136 pixel files, though only a 2048×2048 pixel region contains captured data. We process all TESS data with TESSreduce to remove the scattered light background that affects cameras near the ecliptic, as described in section 3 of Ridden-Harper et al. (2021).

Pipeline

For rapid transients like GRB afterglows, the high-cadence TESS data can provide multiple data points during the event light curve. In this search we examine all available TESS data that overlap with the reported GRB 2σ positional error at the time of the burst to identify any potential counterparts. Using the following criteria we search all available TESS data for GRB afterglows. For a pixel to be considered as containing a candidate GRB afterglow it must meet the following conditions:

1. have a local maximum within 1 hour of the burst trigger, as we expect the brightness to decay exponentially following the rapid rise;
2. exceed a brightness threshold of median + 4σ, set by the median and standard deviation of the pixel's light curve; and
3. remain above the brightness threshold for 2 consecutive exposures.

The localisation of the GRBs varies from arcminutes to degrees. In all cases we determine which TESS cameras and CCDs provide coverage of the GRBs and download the corresponding time series data of FFIs. We create data cubes, or Target Pixel Files (TPFs), from the FFIs using the astrocut package for python (Brasseur et al. 2019). After constructing the full-frame TPFs, we cut them according to their error regions using astrocut. Due to computational limitations of TESSreduce, TPFs larger than 1.6 × 10^6 pixels must be segmented into smaller TPFs. These segments can have a maximum size of 1300 × 1300 pixels and share a buffer zone of 20 pixels. An example of a segmented region can be seen in the left panel of Fig. 2.
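These selection criteria map directly onto a simple per-pixel test. The following is a minimal numpy sketch of that test, not the TESSreduce implementation itself: the function name and array layout are illustrative, and the global light-curve maximum is used as a stand-in for the local maximum in criterion 1.

```python
import numpy as np

def find_afterglow_candidates(flux, times, trigger_time,
                              window_hr=1.0, n_sigma=4.0, n_consecutive=2):
    """Flag pixels whose light curves pass the three detection criteria.

    flux:  (n_times, ny, nx) background-subtracted pixel fluxes
    times: (n_times,) frame times in days, on the same zero point as trigger_time
    Returns a boolean (ny, nx) mask of candidate pixels.
    """
    # Criterion 2: brightness threshold of median + 4 sigma, per pixel
    med = np.nanmedian(flux, axis=0)
    std = np.nanstd(flux, axis=0)
    bright = flux > (med + n_sigma * std)

    # Criterion 3: above the threshold for at least two consecutive exposures
    consecutive = np.zeros(flux.shape[1:], dtype=bool)
    for i in range(flux.shape[0] - n_consecutive + 1):
        consecutive |= np.all(bright[i:i + n_consecutive], axis=0)

    # Criterion 1: light-curve maximum falls within 1 hour of the burst trigger
    # (NaNs suppressed so argmax is well defined)
    peak_idx = np.argmax(np.nan_to_num(flux, nan=-np.inf), axis=0)
    peak_in_window = np.abs(times[peak_idx] - trigger_time) < window_hr / 24.0

    return peak_in_window & consecutive
```

In the full pipeline, neighbouring triggered pixels would then be grouped into single events, as described next.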
In all cases we perform photometric reduction on the cutouts using TESSreduce. Due to the size of the files, the more rigorous steps normally applied by TESSreduce are omitted, such as the secondary background correlation correction method and flux calibration. These processes are not required for the search procedure we implement, but are used when extracting the final light curve once a candidate is identified.

Using the criteria outlined above for event detection we can search for transients in the reduced data. Pixels that meet the detection criteria are flagged as candidates. Neighbouring pixels that trigger at the same time are gathered and considered the same event. Comparing the light curves from each pixel in an event provides a useful diagnostic for determining the validity of the event. All events in a GRB field are manually vetted for legitimacy. An example field for GRB180807A with identified candidates is shown in Fig. 2. To pass the visual vetting stage, candidates must have a light curve shape consistent with a GRB afterglow: a rapid rise followed by a power-law decay. Objects that pass visual vetting are reduced with the full TESSreduce pipeline, which reduces a 90 × 90 pixel image cutout obtained with TESScut. We then localise the source with the photutils centroid_sources function on the frame with peak candidate flux (Bradley et al. 2022).

As a final vetting stage we check deep imaging of the regions with the Pan-STARRS 2 and SkyMapper DR2 3 image cutout services for the northern and southern targets respectively. Candidates whose coordinates appear near to visible sources are included in our findings, though we note their limited reliability. In general, we find at most one candidate per GRB that satisfies our selection criteria for follow-up vetting.

TESS Candidate GRB Afterglows

Table 1 presents a list of candidate afterglow signals that were temporally coincident with known GRBs reported on the GCN, alongside the known afterglow of GRB191016A. The light curves of these candidates are displayed in Fig. 3, and are classed into three groups based on our confidence in their legitimacy. Our reasoning for each candidate is discussed in Section 4.1. The light curves are extracted with the complete TESSreduce reduction process, where we use a custom aperture for each GRB that maximises the collected flux while avoiding contamination from nearby sources due to the large pixel size.

Modelling with afterglowpy

Also displayed in Fig. 3 are models generated with the afterglowpy package for python. This package computes GRB afterglow light curves based on a range of physical parameters describing the synchrotron emission generated by the forward shock of a relativistic blast wave. These parameters include the isotropic-equivalent energy (E_iso in erg), the redshift in the cosmic microwave background frame (z_cmb), the jet angles for the observer, core, and wing (θ_obs, θ_c, θ_w, all in radians), and four variables describing the density and energy distribution of the simulated circumburst medium (see Ryan et al. (2020) for details). All models assume a spatially flat ΛCDM cosmology with H_0 = 70.0 km s^-1 Mpc^-1 and Ω_m0 = 0.3. Finally, we include an additional parameter t_0, which is the time in seconds from the GRB trigger to the afterglow rise.

With only the TESS data, this complex model suffers from high levels of degeneracy between parameters. Both the emcee (Foreman-Mackey et al. 2013) and nestle (Skilling 2004; Shaw et al. 2007; Feroz et al. 2009; Mukherjee et al.
2006) packages failed to converge for the full parameter fit with a Gaussian jet geometry. Likewise, a simplified five-parameter model where n_0 = 1, p = 2.2, ε_E = 0.1, and ε_B = 0.01 with a top-hat jet geometry failed to converge.

As an alternative to emcee and nestle we use the scipy curve_fit (CF) algorithm for the initial fit, followed by a non-linear least-squares optimisation. In this process, we first fit the parameters through CF and then use the output as the initial parameters for the least-squares optimisation. Through trial fitting, we found that the simplified five-parameter model with a top-hat jet structure provided results with negligible differences from the full parameter fit with a Gaussian jet structure, which required a higher computational overhead, so we use the simplified model for all fits. The afterglowpy light curves for these fits are shown in Fig. 3 and Fig. 4, with the corresponding parameters shown in Table 2. From this model fitting it is clear that additional observations at different wavelengths are required to break the degeneracies we encounter here. Interestingly, in all cases we find a non-zero value for t_0, indicating that there is an offset between when the burst occurred and when the afterglow begins according to afterglowpy.

Contamination fraction

Contaminants are always present in transient searches, arising from a myriad of sources, including instrumental artefacts, asteroids, and flare stars. We can clearly identify instrumental artefacts by checking for bad subtractions and dubious pixel clustering; asteroids can also be clearly identified by their on-sky motion. However, flare stars are more challenging to rule out, as they can evolve on timescales similar to GRB afterglows and in some cases have a similar light curve.

In this search, we limited ourselves to identifying signals that occur within 1 hour of the GRB trigger and within the 2σ error region. To estimate the contamination fraction of each candidate GRB field, we carry out 30 trial searches using evenly spaced times which are at least 2 hours from the GRB trigger. In each trial we record all transients that pass our selection criteria, and we calculate the contamination fraction as the total number of transients detected divided by the number of test searches. The contamination fraction for each GRB field is summarised in Table 3.

For greater context on the relationship between contamination rate and galactic latitude, we repeat the test for each non-detected field outlined in Table 5 (see appendix). Of the 57 fields, 21 contained at least one contaminant, with a total of 55 events. As shown by Fig. 5, we find that the contamination fraction correlates with galactic latitude; this is expected due to the higher concentration of flare stars in the galactic plane. It should be noted that the apparent dip in flare detections around zero degrees latitude is not due to physical influences; denser regions of the sky have brighter limiting magnitudes in the reduction process, and thus flares must be brighter to overcome the background.
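The trial-search procedure amounts to re-running the detection step at off-trigger times. A minimal sketch is given below; it reuses the find_afterglow_candidates function from the earlier sketch and, as a simplification of our own, counts any triggered trial as a single event rather than clustering triggered pixels into distinct transients.

```python
import numpy as np

def contamination_fraction(flux, times, trigger_time, n_trials=30, exclusion_hr=2.0):
    """Estimate the chance-coincidence rate for one GRB field.

    Re-runs the selection at trial trigger times spread evenly across the
    sector, excluding +/- exclusion_hr around the real trigger, and divides
    the number of detections by the number of trial searches.
    """
    trial_times = np.linspace(times.min(), times.max(), n_trials)
    trial_times = trial_times[np.abs(trial_times - trigger_time) > exclusion_hr / 24.0]

    n_events = 0
    for t in trial_times:
        mask = find_afterglow_candidates(flux, times, t)  # same criteria as the real search
        # Simplification: any triggered trial counts as one event here,
        # rather than grouping neighbouring pixels into separate transients.
        n_events += int(mask.any())

    return n_events / len(trial_times)
```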
Contamination fraction

Contaminants are always present in transient searches, arising from a myriad of sources, including instrumental artefacts, asteroids, and flare stars. We can clearly identify instrumental artefacts by checking for bad subtractions and dubious pixel clustering; asteroids can also be clearly identified by their on-sky motion. However, flare stars are more challenging to rule out, as they can evolve on timescales similar to GRB afterglows and in some cases have a similar light curve.

In this search, we limited ourselves to identifying signals that occur within 1 hour of the GRB trigger and within the 2σ error region. To estimate the contamination fraction of each candidate GRB field, we carry out 30 trial searches using evenly spaced times that are at least 2 hours from the GRB trigger. In each trial we record all transients that pass our selection criteria and calculate the contamination fraction as the total number of transients detected divided by the number of trial searches. The contamination fraction for each GRB field is summarised in Table 3.

For greater context on the relationship between contamination rate and galactic latitude, we repeat the test for each non-detected field listed in Table 5 (see appendix). Of the 57 fields, 21 contained at least one contaminant, with a total of 55 events. As shown in Fig. 5, we find that the contamination fraction correlates with galactic latitude; this is expected due to the higher concentration of flare stars in the galactic plane. It should be noted that the apparent dip in flare detections around zero degrees latitude is not due to physical influences: denser regions of the sky have brighter limiting magnitudes in the reduction process, and thus flares must be brighter to overcome the background.

Non-Detections

For GRB fields where we find no candidate, we estimate the limiting magnitude and coverage area of the 2σ error region. These limits are presented in Table 5 (see appendix). As our condition for detection requires at least two consecutive frames 4σ brighter than the background, our limiting magnitude depends on the light curve shape and decay time. This means that the true limiting magnitude for a GRB afterglow depends on the observation cadence. To estimate the true limiting magnitude, we inject a template GRB afterglow light curve from our best-fit model for GRB220310A, which is our best-constrained GRB afterglow. We scale the flux until it is detected according to the TESSreduce zero point. For large error regions with multiple cutouts, we find that the zero point is variable across the field, and thus take the median value for this estimated limit. This issue does not apply to small cutouts, which have a consistent zero point.

We combine all the pixels' limiting magnitudes into a single estimate through a weighted sum. We weight the contribution of each pixel according to a normalised 2D Gaussian centred on the reported burst position, using the 1σ position errors as the standard deviation. This method incorporates the large positional uncertainty into our estimated magnitude limits. Due to field crowding and reduction quality, we find that the limit can vary significantly within a magnitude range of 13.5 to 18.4, where pixels containing stars have brighter limits than sky pixels.
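A small numpy sketch of this Gaussian-weighted combination of per-pixel limits; the array names and function signature are assumptions for illustration.

```python
import numpy as np

def weighted_limit(lim_mag, xx, yy, x0, y0, sx, sy):
    """Combine per-pixel limiting magnitudes into one estimate.
    lim_mag : per-pixel limiting magnitudes
    xx, yy  : pixel coordinate grids
    x0, y0  : reported burst position
    sx, sy  : 1-sigma positional errors used as the Gaussian widths"""
    w = np.exp(-0.5 * (((xx - x0) / sx) ** 2 + ((yy - y0) / sy) ** 2))
    w /= w.sum()                      # normalised 2D Gaussian weights
    return np.sum(w * lim_mag)        # weighted sum of the pixel limits
```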
Our classifications can be further supported by other circulars on the GCN. Both GRB200412B and GRB220310A have had other circulars reporting detections of optical afterglows that agree with the locations we find (Lipunov et al. 2020; Kumar et al. 2020; Belkin et al. 2020; Stecklum et al. 2020; Xin et al. 2020; Ogawa et al. 2020; Negoro et al. 2022; Svinkin et al. 2022; Lipunov et al. 2022; Kumar et al. 2022; Fouad et al. 2022; Hosokawa et al. 2022), whereas GRB180807A has no other circulars reported beyond the Fermi-GBM catalogue. GRB190723A, GRB200111A, and GRB210317A were each observed by multiple γ-ray telescopes, with general consistency between their location estimates and those reported by Fermi-GBM (Lipunov et al. 2019; Fermi GBM Team 2020; Xiao et al. 2020; Kozlova et al. 2020; Gaikwad et al. 2020; Pal'shin et al. 2020; Fermi GBM Team 2021; Yi et al. 2021; Lipunov et al. 2021). One telescope observed an X-ray afterglow for GRB220514B (Kawakubo et al. 2022), though no further positional information is reported to aid in localisation. The remaining candidates were only observed by the respective telescopes that reported their bursts, and thus no comparison can be made beyond the coordinates utilised in this search.

Upon reference to imaging catalogues, two candidates appear to be located near known sources. According to SIMBAD (Wenger et al. 2000), the presented coordinates for GRB190117B's candidate lie within 10 arcseconds of a potential flare star progenitor, Gaia-DR3-2926429675703704448. Similarly, our GRB200111A candidate lies approximately 14 arcseconds away from the pulsating variable star ATO J099.2975+37.0822; this star's underlying variability appears to manifest in the long-term light curve of the pixels included within our candidate. The remaining candidates with partial or failed grades in the Empty Visual Field category of our assessment metric appear close to sources that are not named in any catalogue. We also investigate ATLAS photometry for candidate coordinates where possible, and find that no outbursts have been captured for the candidates of GRB190117B, GRB190308A, GRB200111A, and GRB220514B.

From this analysis, we conclude that the candidate afterglows for GRB180807A, GRB200412B, and GRB220310A originate from their respective GRBs. The candidate afterglows of GRB181208A, GRB200111A, GRB210114A, GRB210317A, and GRB220514B each exhibit at least one feature that inspires reasonable doubt in their legitimacy, and thus are tentative candidates. The candidate afterglows for GRB190117B, GRB190308A, and GRB190723A each exhibit a number of compelling features against their legitimacy, and thus are unlikely candidates.

TESS and GRB afterglows

Our directed GRB afterglow survey demonstrates the capacity of TESS to serendipitously detect rapid extragalactic transients. Smith et al. (2021) predicted an observation rate of approximately 1 GRB per year, based on the general occurrence rate of GRBs alongside the sky coverage and seeing ability of TESS. Our study finds a rate consistent with this, though upper estimates reach ∼2 GRB afterglows per year when including all candidates presented in this paper. Additionally, we expect that, moving forward, the rate of TESS's detection of GRB afterglows will rise further due to its increased observation cadence, allowing for the resolution of shorter signals.

The real value of these observations, however, is their uniqueness. As highlighted above, only GRB200412B and GRB220310A have had their optical counterparts detected by other telescopes. As we are confident in our candidate for GRB180807A, we believe TESS may have been the only telescope to observe its afterglow. The lack of other detections is likely due to the poor localisation reported by the detecting telescopes, displaying the value of TESS's wide FoV. Such observations reveal the valuable role TESS can play in the analysis of rapid transients with poor localisation.

Modelling Optical GRB Afterglows

The high-cadence observations conducted by the TESS mission give a unique sample for modelling GRB afterglows. However, relying solely on high-cadence observations taken with only the single TESS-Red broadband filter is insufficient to break the modelling degeneracies in afterglowpy. Despite the robustness of the five-parameter modelling procedure in producing well-constrained parameter estimates, the inclusion of bootstrap sampling within these parameter spaces can result in light curves exhibiting significant residuals compared to the fitted model line. In order to obtain parameter-bound estimates during the modelling process, curves exhibiting a least-squares regression value higher than a predetermined threshold (which depends on the least-squares regression of the best fit) were excluded as potential candidates.
In particular, the key parameters z_cmb and E_0 exhibit a high level of degeneracy, necessitating observations at different wavelengths to disentangle them. In the event that the redshift had tighter constraints, for example if the host galaxy were resolved, a more optimised set of parameters could be modelled. In an ideal scenario, simultaneous observations from a broadband blue variant of TESS alongside the existing TESS system would provide an extremely powerful data set for understanding GRBs and other transient-like phenomena.

Because of the non-linear nature of the model and the degeneracies present, these parameters are simply those that were found to have the lowest residual and parameter uncertainty; therefore they are not to be taken as idealised constraints on the whole system. As some of the optimised parameters yield unrealistically small errors, we implement a floor uncertainty of 0.01 for z_cmb, 0.1 s for t_0, as well as 0.001 radians for both θ_c and θ_obs, due to these modelling uncertainties.

Crucially, we find that we needed to include an additional parameter, t_0, which shifts the explosion time from the burst time. In all but one instance t_0 > 0, suggesting there may be a physical process not accounted for in the afterglowpy module that delays the red to near-IR emission by 740 ± 690 s. This offset is still present when considering only the high-quality afterglows of GRB180807A, GRB191016A, GRB200412B, and GRB220310A, giving a delay of 560 ± 320 s. However, no firm conclusion can be drawn from this statistically small and broad sample with different sampling rates. While we restrict our general modelling to a 'top hat' geometry, the t_0 parameter was also required for the Gaussian jet geometry. Further jet geometries, with more parameters, could be explored in future work as TESS observes more GRB afterglows at higher cadence.

The exception to the offset time being greater than zero, GRB210317A, was the most marginal flux detection that passed the criteria. It should also be noted that the GRB afterglows with data points along the rise had more constrained light curve parameters; therefore, it should be expected that as TESS observes more GRB afterglows at higher cadence, a clearer picture of this delayed onset will emerge.
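A small illustration of the two bookkeeping steps just described: applying the quoted floor uncertainties, and computing the mean and scatter of the fitted onset delays. The t0 array holds hypothetical placeholder values, not the per-burst fits from Table 2.

```python
import numpy as np

# Floor uncertainties quoted in the text.
FLOORS = {"z_cmb": 0.01, "t0": 0.1, "theta_c": 0.001, "theta_obs": 0.001}

def apply_floors(errors):
    """Clamp each fitted parameter uncertainty from below by its floor."""
    return {k: max(v, FLOORS.get(k, 0.0)) for k, v in errors.items()}

# Hypothetical placeholder delays [s]; the real per-burst values are in Table 2.
t0_fits = np.array([300.0, 900.0, 1400.0, 560.0])
print(f"onset delay = {t0_fits.mean():.0f} +/- {t0_fits.std():.0f} s")
```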
CONCLUSIONS

We present the findings of the first systematic search for optical GRB afterglows in archival TESS data. Our pipeline utilises TESS's wide FoV to survey the fields of poorly-localised afterglows; upon analysis of 69 potential fields, we present 11 candidate signals alongside one previously documented afterglow. Of these 11, we have high confidence in the legitimacy of 3 and some level of uncertainty in the remainder. We attempt to model these light curves using the afterglowpy package for python, fitted through a range of sampling and least-squares regression techniques. We find that high-cadence TESS broadband observations are not sufficient to break the degeneracy of key parameters, leading to poorly constrained fits and parameter estimates. However, we also find that there is a measurable delay between the initial GRB burst and the afterglow onset that is not accounted for in afterglowpy: 740 ± 690 s when considering all 12 events, and 560 ± 320 s when only considering the four high-likelihood afterglows of GRB180807A, GRB191016A, GRB200412B, and GRB220310A.

Only two of the eleven candidates have had other afterglow detections reported on the GCN, likely due to the localisation uncertainty released by the respective alert telescopes. TESS was therefore likely the only optical telescope to observe the remaining events, including the probable afterglow of GRB180807A. This demonstrates that TESS can fill a valuable role in the transient community by detecting poorly localised rapid transients, a direct result of its unique combination of near-continuous observation and a large FoV. Furthermore, with TESS now operating at a cadence of 200 seconds, it will only improve at sampling the rise of optical afterglows, and thus better constrain the apparent time delay between burst and afterglow onset. As the TESS mission continues, the sample of GRB afterglows will increase and enable more detailed studies of GRBs and their afterglows.

Figure 1. Simplified schematic of a GRB, displaying three key geometric parameters: θ_w is the outer wing truncation angle, θ_obs is the angle from the jet axis to the observer, and θ_c is the angle of the central jet core.

Figure 2. Detection pipeline as performed on GRB180807A. (Left) Events detected by the pipeline on camera 2, chip 3, coincident with the time of GRB180807A. The green curve displays the boundary of the 2σ error distance from the estimated location of the GRB as reported by the GCN, which is represented by the green plus in the top left. The coloured boxes display the regions of the cutouts we generate and operate upon individually. Each black point represents an 'event' detected by the pipeline; these are the signals of asteroids (e.g. top right; each light curve corresponds to a single pixel) or random noise (e.g. middle right). The red star displays the location of the candidate GRB signal (bottom right). Note that the dashed line in each of the light curves on the right represents the reported detection time of GRB180807A.

The majority of these detections are easily identifiable as asteroids from their on-sky motion (Fig. 2, top right), or as peculiarities in the data reduction from their light curve shape and number of triggered pixels (Fig. 2, middle right).

Figure 3. Light curves and overlaid models of optical afterglows observed by TESS. GRB191016A (top right) is a confirmed detection; the remaining are high-likelihood candidates discovered by our pipeline. Each red line represents the best-fit top hat model generated with afterglowpy, with the orange shaded regions representing the 1σ limits; see Section 3.3 for details. The time axis is presented with respect to the detection time (dashed line) of each GRB as reported by the GCN.

Figure 4. Continued candidate light curves for detections we consider less likely to originate from their respective GRBs. Light curves with ∼ preceding their names are tentative candidates, whereas those with × preceding their names are our least likely candidates; see Section 4.1 for details.

Figure 5. Distribution of detected stellar flares over galactic latitude in random trials across 69 field centres. Note that displayed in blue are the latitudes of the centres of fields; as shown in the far right bin, flares can be detected far from these centres.

Figure 6. Detection images for all GRB afterglow candidates analysed in this study. For each GRB we show the difference images of the frame 10 cadences prior to the burst trigger (left) and the frame with the peak flux (right).
Table 1. GRB optical afterglow candidates detected by TESS.

Table 3. Number of astrophysical contaminants detected in the fields of each candidate through 30 trials occurring at random times at least 2 hours from the trigger. Each of these flares would pass our detection criteria discussed in Section 3.1.

Table 4. Candidate GRB afterglow confidence classification. Confidence is measured on performance in four categories: low contamination fraction (see Section 3.4), solitary outburst occurrence across the TESS sector, galactic latitude falling outside the Milky Way plane, and an empty field in comparison with Pan-STARRS and SkyMapper imaging.

Table 5. Average limiting magnitudes and coverage for coincident GRBs with non-detections in TESS. Entries noted as such occurred during a TESS downlink.
Fermions on the kink revisited

We study fermion modes localized on the kink in the 1+1 dimensional φ^4 model coupled to Dirac fermions with backreaction. Using numerical methods we construct self-consistent solutions of the corresponding system of coupled integro-differential equations and study the dependence of the scalar field of the kink and of the normalizable fermion bound states on the values of the parameters of the model. We show that the backreaction of the localized fermions significantly modifies the solutions; in particular, it results in spatial oscillations of the profile of the kink and violations of the reflection symmetry of the configuration.

Fermions bound by kinks were considered in many papers [24-28]. However, nearly all of these studies neglected the backreaction of the fermions on the soliton; moreover, only zero modes were considered in most cases. There have been some attempts to take into account the backreaction of the fermion on the kink [30,31], although a self-consistent solution is still missing. One of the main reasons for that is the enormous computational complexity of the problem: there is no analytical solution of the corresponding system of coupled integro-differential equations. A main objective of this paper is to reconsider this system consistently. Recently, we developed a new numerical scheme which was successfully applied to examine the effects of backreaction of localized fermionic modes on planar skyrmions [32,33]. We found that there is a tower of fermionic modes of two different types, localized by the soliton, with one level-crossing mode. Furthermore, in [33] we discussed a novel mechanism of exchange interaction between the skyrmions and constructed stable multi-soliton configurations bound by the attractive interaction mediated by the chargeless fermionic modes.

In the present paper we revisit the fermion-kink bound system with backreaction. Apart from the well-known zero mode, which does not affect the kink for any value of the Yukawa coupling, we find various localized fermion modes with finite energy. The number of these bound modes increases as the Yukawa coupling becomes stronger; they are linked to the states of the positive and negative continuum. We find that, as we increase the coupling, the effects of the backreaction of the fermions on the kink become more and more significant. Furthermore, the localized fermions may give rise to an additional exchange interaction between the solitons.

This paper is organised as follows. In Section II we present the φ^4 model coupled to Dirac fermions via the usual Yukawa coupling. Numerical results are presented in Section III, where we describe the solutions of the model and discuss the spectral flow of the localized fermionic states with backreaction on the kink. Conclusions and remarks are formulated in the last Section.

II. THE MODEL

We consider a coupled fermion-scalar system in 1+1 dimensions defined by the Lagrangian (1), where U(φ) is the potential of the self-interacting scalar field, ψ is a two-component spinor, and m, g are the bare mass of the fermions and the dimensionful Yukawa coupling constant, respectively. The matrices γ^μ are γ^0 = σ_1, γ^1 = iσ_3, where σ_i are the Pauli matrices, and ψ̄ = ψ†γ^0. The φ^4 model corresponds to the quartic potential U(φ) = (1/2)(1 − φ^2)^2 with two vacua φ_0 ∈ {−1, 1}.
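The Lagrangian (1) itself is not reproduced in the extracted text; given the fields and couplings named above, the standard Yukawa-coupled form it presumably takes is

```latex
\mathcal{L} = \frac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - U(\phi)
            + \bar{\psi}\left(i\gamma^\mu\partial_\mu - m - g\,\phi\right)\psi ,
\qquad
U(\phi) = \frac{1}{2}\left(1-\phi^2\right)^2 .
```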
The field equations of the system are given by Eqs. (2). Using the usual parametrization of a two-component spinor, we obtain the coupled system of static equations (3). This system is supplemented by the normalization condition ∫_{−∞}^{∞} dx (u^2 + v^2) = 1; thus the configuration as a whole can be characterized by two quantities, the fermionic density distribution ρ_f = u^2 + v^2 and the topological density, i.e. the profile of the scalar field of the kink φ(x). Note that the first equation in the system of dynamical equations (3) enjoys the reflection symmetries (4), while the equations for the spinor components coupled to the scalar field are invariant with respect to the transformations (5).

Consideration of the fermionic modes usually relies on the simplifying assumption that the scalar field background is fixed [24-28]. In the decoupled limit g = 0, the φ^4 model supports a spatially localized static topological soliton, the kink (6), φ_K(x) = tanh(x − x_0), interpolating between the vacua φ_0 = −1 and φ_0 = 1. Here x_0 is the position of the center of the kink. The antikink solution can be found by the inversion x → −x. Clearly, the kink field is parity-odd; it agrees with the symmetry condition (4). Then the reflection symmetry of the Dirac equation (5) means that the positive energy fermionic states localized on the kink are also the negative energy states localized on the antikink, and vice versa [26]. Further, due to this symmetry there is only one zero mode of the Dirac equation, the mode (7), which does not depend on the value of the Yukawa coupling g [24-26]; here N_0 is a normalization factor. In the special case of the N = 1 supersymmetric generalization of the model (1) [34], this mode is generated via the supersymmetry transformation of the boson field of the static kink.

It was noticed that a further increase of the Yukawa coupling g gives rise to other fermion modes with non-zero energy which are localized on the kink [25,26]. Indeed, the system of two first-order differential equations in (3) can be transformed into two decoupled second-order equations (8) for the spinor components. They are Schrödinger-type equations; for fermions in the external static background field of the kink (6), the corresponding potential is given by (9). In the limit of zero bare mass of the fermions, m = 0, the potential (9) reduces to the usual Pöschl-Teller potential, so the equations (8) can be solved analytically [25,26]. Further, it was pointed out that as the Yukawa coupling g increases, the potential well becomes deeper and new levels appear in the spectrum of the bound states. For example, there is a bound state solution for the massless fermions which appears as the coupling increases above g_cr = 1, with eigenvalues ε_1 = ±√(2g − 1) [26,30,35]. Other solutions can also be written in closed form, see [26]. However, as the coupling becomes stronger, the backreaction of the bound modes can significantly affect the scalar field, so the analytical solution for the fermion modes bound by the kink is not self-consistent for large values of g. Indeed, as we will see below, at strong coupling the exact self-consistent numerical solutions of the coupled system of equations (2) become very different from the analytical results for fermions in the external field of the static kink. Our goal here is to investigate this effect in a systematic way.
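As a decoupled-limit sanity check of this spectrum (frozen kink background, m = 0), one can diagonalise the Pöschl-Teller problem −u'' + [g^2 − g(g+1) sech^2 x] u = ε^2 u numerically and compare with the closed-form levels ε_n = √(2gn − n^2). The sketch below uses arbitrary grid parameters and is not the paper's self-consistent solver.

```python
import numpy as np

g, N, L = 3.0, 2000, 40.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
V = g**2 - g * (g + 1) / np.cosh(x) ** 2      # Poeschl-Teller well, depth grows with g

# Second-order finite-difference Hamiltonian with Dirichlet ends.
H = (np.diag(V + 2.0 / h**2)
     - np.diag(np.ones(N - 1), 1) / h**2
     - np.diag(np.ones(N - 1), -1) / h**2)
eps2 = np.linalg.eigvalsh(H)

sel = (eps2 > -1e-8) & (eps2 < g**2 - 1e-6)   # discrete levels below the continuum
numeric = np.sqrt(np.clip(eps2[sel], 0.0, None))

n = np.arange(0, int(g))                       # n = g sits at the continuum edge
exact = np.sqrt(2 * g * n - n**2)              # eps_0 = 0, eps_1 = sqrt(2g - 1), ...
print(numeric[: len(exact)])                   # numerical |eps_n|
print(exact)                                   # analytic  |eps_n|
```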
III. NUMERICAL RESULTS

We have numerically solved the full system of integro-differential equations (2), with the normalization condition on the spinor field, using an 8th-order finite-difference method. The system of equations is discretized on a uniform grid with a typical size of 5000 points. To simplify our calculations, we consider only the positive semi-infinite line, taking into account the symmetries of the configuration (4), (5). Further, we map the semi-infinite region onto the unit interval [0, 1] via a compactifying coordinate transformation, where c is an arbitrary constant used to adjust the contraction of the grid. The emerging system of nonlinear algebraic equations is solved using a modified Newton method. The underlying linear system is solved with the Intel MKL PARDISO sparse direct solver. The errors are on the order of 10^{-9}.
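The explicit form of the compactifying map is not preserved in the extracted text; a common choice with the stated properties is s = x/(x + c), which is sketched below as an assumption to show how c controls the contraction of the grid.

```python
import numpy as np

def to_unit(x, c=1.0):
    """Map the semi-infinite line x in [0, inf) onto s in [0, 1) (assumed form)."""
    return x / (x + c)

def from_unit(s, c=1.0):
    """Inverse map: place grid points back on the physical half-line."""
    return c * s / (1.0 - s)

s = np.linspace(0.0, 1.0, 11, endpoint=False)
for c in (0.5, 2.0):
    # Larger c pushes the physical grid points further from the kink core.
    print(c, np.round(from_unit(s, c), 3))
```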
To obtain numerical solutions of the system (2) we have to impose appropriate boundary conditions for the spinor field, both at the center of the kink and at the vacua. For fermions localized on the kink, we have to impose decay of the spinor components at spatial infinity. First, we consider the normalized fermions with zero bare mass m = 0. Taking into account the symmetry properties (5) and the linearized equations (3) for the spinor field at x = x_0 = 0, we can classify the corresponding solutions according to their parity. Thus, we consider two types of boundary conditions for the massless fermions at the center of the kink. We refer to the modes of the first type as A_k-modes and to the modes of the second type as B_k-modes; for example, the zero mode (7) is denoted as A_0. Note that for all modes, the number of nodes of the component u exceeds the number of nodes of the component v by one.

In the decoupling limit, the backreaction of the fermions on the kink is neglected; the pair of first-order equations for the spinor components in the system (2) then describes fermion states in the external scalar field of the kink φ_K(x) (6). In such a case the energy spectrum of the localized fermions is symmetric with respect to the inversion ε → −ε: apart from the zero mode A_0, each state with a positive eigenvalue ε has a counterpart with a reflected anti-symmetric u(v)-component and a negative eigenvalue −ε, see Fig. 1, left plot. The situation is very different in the full coupled system (2) with backreaction: the profile of the kink deforms as a fermion occupies an energy level. Further, the energy levels of the bound fermions move accordingly, and the symmetry between the localized fermion states with positive and negative eigenvalues ε is violated, see Fig. 1, right plot. Considering the spectral flow of the localized fermions, we observe that in the range of coupling values 0 < g < 1 there is only one localized zero mode, as seen in Fig. 1.

In Fig. 2 we display the components of a few modes of both types, localized on the kink with backreaction, and the corresponding distributions of the fermionic density ρ_f(x). In Fig. 3 we plot the fermionic density distributions ρ_f of the first four localized modes. The effect of the backreaction of the fermions coupled to the kink is illustrated in Fig. 4. As expected, the massless zero mode does not distort the kink for any value of the Yukawa coupling. However, the scalar field is strongly affected by the other bound modes with non-zero energy. For example, it is seen in Fig. 4, upper left plot, that the coupling of the kink to the mode B_1 leads to a distortion of the profile of the soliton which closely resembles the deformation of the kink due to excitation of its normalizable discrete vibrational mode, see e.g. [4,7,36]. Clearly, by analogy with excitations of this internal mode of the kink [37], dynamical coupling to the fermions may lead to production of kink-antikink pairs. Coupling of the kink to modes with some number of nodes is reflected in visible spatial oscillations of the static scalar field at the center of the kink, where the fermion modes are located; see Fig. 4.

The work here should be taken further by considering fermionic states localized on other solitons and the corresponding superconducting strings. Another interesting question, which we hope to address in the near future, is to investigate the exchange interaction between the solitons mediated by the fermions; a first step in this direction has been made in [33].
Design of a 40-nm CMOS integrated on-chip oscilloscope for 5-50 GHz spin wave characterization

Spin wave (SW) devices are receiving growing attention in research as a strong candidate for low power applications in the beyond-CMOS era. All SW applications would require efficient, low power, on-chip read-out circuitry. Thus, we provide a concept for an on-chip oscilloscope (OCO) allowing parallel detection of SWs at different frequencies. The readout system is designed in 40-nm CMOS technology and is capable of SW device characterization. First, the SWs are picked up by near-field loop antennas, placed below an yttrium iron garnet (YIG) film, and amplified by a low noise amplifier (LNA). Second, a mixer down-converts the radio frequency (RF) signal of 5-50 GHz to lower intermediate frequencies (IF) around 10-50 MHz. Finally, the IF signal can be digitized and analyzed regarding the frequency, amplitude and phase variation of the SWs. The power consumption and chip area of the whole OCO are estimated to be 166.4 mW and 1.31 mm^2, respectively.

I. INTRODUCTION

CMOS has dominated for decades as a technology for low power, low cost and high-volume applications. In the meantime, more and more emerging devices are getting attention in research as candidates for the beyond-CMOS era (Ref. 1). SW-based, or magnonic, devices are considered a low power alternative to CMOS computing. They can perform both Boolean and non-Boolean operations. A current example of a majority gate performing logic operations is demonstrated by Klinger et al. (Ref. 2). Similar to optical computing (Ref. 3), SWs can also perform additional operations using wave phenomena in a more direct way than is done with Boolean logic (Ref. 4). As shown by Csaba (Ref. 5) and Papp (Ref. 6), a non-Boolean computing concept for a Fourier transform calculation can be realized using phase shifting plates. SW devices operate in a wide GHz frequency range (Ref. 4), which makes detection and signal analysis challenging. There are several ways to detect and analyze SW signals, e.g. detection via the spin-pumping effect or Brillouin light scattering spectroscopy (Ref. 4). But low power and low cost SW characterization equipment covering a reasonable frequency range of several GHz is still missing. Hence, we consider a concept for SW on-chip characterization with respect to frequency, amplitude and phase variations, as published previously in Refs. 7 and 8. In this paper we present a modified SW characterization concept, compared to Ref. 7, with simulation results achieved with a state-of-the-art low power radio frequency (LP-RF) 40-nm CMOS technology, re-optimized in order to detect smaller SW signal amplitudes.
II. CONCEPT FOR ON-CHIP SPIN WAVE DETECTION

In order to convert SWs into electron current, we assume a 50 Ω near-field loop antenna placed below the dielectric material yttrium iron garnet (YIG), which has low SW propagation damping (Ref. 4) (see Fig. 1). Based on micromagnetic simulations, an electrical signal power of −80 to −90 dBm is expected in the loop antenna, as previously published in Ref. 8. Due to the limited bandwidth of a single on-chip antenna, an array of loop antennas can be used to cover the targeted 5-50 GHz frequency range of the SWs, i.e. different frequencies can be picked up by different antennas (Ref. 9). Besides, the smaller bandwidth of the circuit components provides better noise filtering. Therefore, we assume a slightly reduced signal amplitude of 5 µV in the antenna instead of the 15 µV previously published in Ref. 7. As is known, there is a trade-off between noise figure (NF), chip area and power consumption, which are balanced in the presented design. The modified OCO is divided into 9 frequency bands, listed in Tab. I. While the mixer covers the whole 5-50 GHz frequency range with a single design, the LNAs and the VCOs are optimized for smaller frequency ranges in order to achieve better noise performance. For the sake of simplicity, we currently use near-field antennas, but the OCO design works with other SW sensing elements that generate an RF voltage. Conversion of SWs into an electrical signal is perhaps the most challenging aspect of SW devices. One needs a compact, integrable way of measuring SW amplitudes/phases/frequencies, and to do so without extensive circuitry that would diminish the advantages of SW devices. The challenges associated with magneto-electric interfaces to SWs are described in Refs. 9 and 10.

As shown in Fig. 1, a periodic radio frequency (RF) signal picked up by the antenna is amplified by a differential low noise amplifier (LNA). The ultra-wideband mixer down-converts the signal to a lower IF of 10-50 MHz. For that purpose, a local oscillator (LO) signal, generated in a voltage controlled oscillator (VCO), is required. Subsequently, an operational amplifier (OpAmp) amplifies the periodic IF signal to higher voltage values. Finally, the amplified signal can be digitized in an analog-to-digital converter (ADC), which gives information about the SWs' frequency, amplitude change and phase variation. The topology of the OCO components depicted in Fig. 2 shows the design for frequency band 6 (33-39 GHz). The OCO designs for the other bands are very similar to the presented one and are skipped for simplicity. In order to amplify the −90 dBm signal of the loop antenna, we use a fully differential LNA with 2 stages. The output driver provides additional isolation between the LNA input and output. Besides, the driver is crucial for impedance matching between the LNA output and the mixer input. Starting with the mixer design in Ref. 11, we extended the circuitry with inductors to achieve a better NF and higher conversion gain over the whole frequency range of 5-50 GHz. To create the IF signal at lower frequencies, the frequency difference of the RF and LO signals should be in the MHz range. Hence, a tunable VCO is necessary for the readout circuitry. Fine tuning of the VCO output frequency is controlled by the voltage V_CTRL. For coarse frequency tuning we use appropriately sized switchable capacitors. The final amplification step is realized with the 2-stage OpAmp, bringing the IF signal into the 100 mV range.
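A back-of-envelope Friis cascade for this LNA-mixer-OpAmp chain is sketched below. The LNA gain and NF come from the figures quoted in this paper; the mixer and OpAmp noise figures are assumptions for illustration only.

```python
import numpy as np

def db2lin(db):
    return 10.0 ** (db / 10.0)

stages = [            # (gain [dB], NF [dB])
    (30.0, 3.0),      # LNA: ~30 dB gain, NF in the quoted 2.4-4 dB range
    (12.0, 12.0),     # mixer: >12 dB conversion gain; NF assumed
    (30.0, 20.0),     # OpAmp IF stage: >30 dB gain; NF assumed
]

# Friis formula: F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
F, G = 1.0, 1.0
for g_db, nf_db in stages:
    F += (db2lin(nf_db) - 1.0) / G
    G *= db2lin(g_db)

print(f"cascaded NF ~ {10 * np.log10(F):.2f} dB")
```

The cascade shows why the first LNA stage dominates the overall NF: later stages contribute noise divided by the preceding gain.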
III. SIMULATION RESULTS

The presented results were obtained by simulations using Cadence Virtuoso with device models of the Global Foundries 40-nm LP-RF technology (Ref. 12). The simulation results are valid for a room temperature of 300 K and already include the noise of each single device. The interconnect parasitics, which will be impacted by the physical layout, have not yet been included in these simulations. The parasitics extracted from the layout will of course affect the operating frequencies of the OCO; however, this refinement will be tackled in the next project step. We use transistor models with a low threshold voltage. Implemented resistors use silicided or unsilicided p+ poly resistor models, depending on the required resistance values. For capacitors we use alternating-polarity metal-oxide-metal capacitors (APMOM Cap) as well as metal-insulator-metal capacitors (MIM Cap). Symmetric inductor and center-tapped inductor models, with nitride as the passivation layer, are deployed from the optimal inductor finder kit provided by Global Foundries.

The most critical part of the OCO regarding the NF is the first stage of the LNA. Depending on the frequency band, we achieved an NF of the LNAs between 2.4 and 4 dB. For the 50 Ω matching to the antenna, the return loss of each LNA is better than 10 dB over the whole frequency range. The achieved gain of the LNAs is around 30 dB (see Fig. 3). We use an active mixer, i.e. the RF signal is additionally amplified during the conversion to lower frequencies. The conversion gain of the mixer is higher than 12 dB, as shown in Fig. 3. Finally, the OpAmp amplifies the IF signal with a gain of more than 30 dB in the 10-50 MHz range.

The main task of the OCO is the characterization of the SWs regarding frequency, amplitude and phase variations. The frequency detection is demonstrated in Fig. 4. We assume sinusoidal signals with an amplitude of 10 µV and frequencies of 5, 10, 15, 20, 30, 35, 40, 45, 50 GHz at the input of the OCO in the 9 bands, respectively. Subsequently, the frequency of the LO signal is swept from 5 to 50 GHz. Finally, the simulated signal at the output of the OpAmp is fitted to a sinusoidal curve and divided by the root mean square error (RMSE). As a result, we get peaks at the assumed frequencies (see Fig. 4). Due to the jitter of the LO signal, the resolution of the SW frequency detection is limited; we achieve a precision in frequency of 20 MHz. The transfer characteristic of the amplitude of the OCO output signal versus the input signal is depicted in Fig. 5. Here we use band 6 for demonstration: the RF signal is set to 35 GHz and the LO frequency to 35.03 GHz. Due to the limited output voltage swing of the OpAmp, the amplitude curve has a linear characteristic up to 30 µV, which corresponds to the assumed maximum signal power of −80 dBm in the loop antenna (Ref. 8). The simulation results show that a signal power of less than −96 dBm in the 50 Ω loop antenna is detectable with the proposed OCO concept, i.e. a significant improvement by a factor of 3 compared to our first approach in Ref. 7. Figure 6 shows the phase transfer characteristic between the input of the OCO and the output of the OpAmp. The phase shift due to the run-time of the signal through the circuitry is compensated here in order to compare the simulated phase shift with the ideal one. The maximum deviation of the simulated phase shift from the ideal one is 23°. The main reason for the phase error is the jitter noise of the VCO. The introduction of a phase-locked loop (PLL) in the OCO design could substantially reduce the phase deviation and will be considered in our future work.
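A minimal sketch of the detection metric described above (sine fit divided by RMSE, peaking when the LO hits a real SW tone); the function names and post-processing setup are assumptions, since only the metric itself is given in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phi, off):
    return amp * np.sin(2 * np.pi * freq * t + phi) + off

def detection_score(t, v_if, f_guess):
    """Fit the simulated OpAmp output v_if(t) to a sinusoid near f_guess and
    return fitted amplitude / RMSE; large values flag a detected SW tone."""
    p0 = [v_if.std() * np.sqrt(2), f_guess, 0.0, v_if.mean()]
    popt, _ = curve_fit(sine, t, v_if, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((v_if - sine(t, *popt)) ** 2))
    return abs(popt[0]) / rmse
```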
IV. CONCLUSION

SW-based devices are emerging for high-speed and low power signal processing tasks, but the challenge of effective SW detection remains. The OCO could be an integrated alternative to current spin wave detecting systems, with the near-field loop antenna as a sensing element placed below an insulating magnetic medium such as YIG. Besides, the OCO could be adapted for other magneto-resistive or spin Hall effect sensing elements. Simulation results show that a signal power of less than −96 dBm can be detected with the proposed design. The sensing time for SW amplitude and phase is below 1 µs, and for frequency detection it is less than 40 µs with an accuracy of 20 MHz. The OCO represents a further step toward a possible realization of SW on-chip detection, with a power consumption of 166.4 mW and a chip area of 1.31 mm^2.

ACKNOWLEDGMENTS

Fruitful discussions with S. Kiesel and U. Nurmetov from Technical University of Munich are gratefully acknowledged.
Design of Decentralized Adaptive Sliding Mode Controller for the Islanded AC Microgrid With Ring Topology

Sliding mode control can restrain the perturbations generated by the intermittence of renewable energy generation and the randomness of local loads when microgrids operate in islanded mode. However, the microgrid consists of several subsystems, and the interactions among them cause chattering problems under overall sliding mode control. In this paper, the chattering restraint issues for voltage control of the islanded microgrid with a ring topology structure are investigated based on decentralized adaptive sliding mode control strategies. Firstly, we construct a tracking error system with interconnections, considering the power transmission among subsystems and the nominal values of the system states. Secondly, we design linear matrix inequalities (LMIs) according to the H∞ attenuation performance against the system's external disturbances. Then, the tracking error performance and the control precision are guaranteed via the asymptotic stability of the integral sliding mode surfaces. Adaptive laws are utilized to address the chattering problems of sliding mode control. Finally, simulation results verify the effectiveness of the proposed decentralized control methods.

INTRODUCTION

Recently, abundant distributed generation devices have been permeating modern electric power systems to achieve environmental protection and effective, flexible control of grids. In order to ensure the extensiveness and security of the power supply, microgrids have become the main form of transmitting electricity to local loads in remote regions; they can operate in islanded mode or grid-connected mode (Mahmoud et al., 2014). Actually, an AC islanded microgrid consisting of distributed generation units (DGus) and energy storage devices can steadily supply power to local loads at low voltage magnitude (Kabalan et al., 2017). Because the microgrid contains numerous power electronic facilities, such as voltage source converters (VSCs), it lacks the immense inertia provided by rotating devices compared with conventional grids (Zou et al., 2019). Furthermore, the renewable generation devices are usually affected by weather conditions, and the power generated by them is usually intermittent and uncertain, so it is more complicated to realize stable control of the multi-area microgrid voltage in islanded mode (Zhou et al., 2021).

At present, there are different control strategies to solve the voltage control problems of multi-area microgrids and optimize the control performance in islanded mode, in order to improve the reliability and effectiveness of the power supply (Divshali et al., 2012; Sahoo et al., 2018). The conventional control methods for multi-area microgrids operating in islanded mode demonstrate several disadvantages. The closed-loop proportional-integral-derivative (PID) voltage control strategy cannot accurately estimate the errors between the state variables and the nominal sinusoidal voltages. Additionally, this control strategy performs poorly in restraining internal parameter perturbations, such as frequency fluctuations (Vandoorn et al., 2013; Chen et al., 2015; Sefa et al., 2015). Zeb et al. (2019) combined the PID control method with fuzzy principles and designed a proportional resonant harmonic compensator as a current controller. Moreover, a phase-locked loop (PLL) was designed to increase the speed of the system's dynamic response.
A comparison between fuzzy sliding mode control (FSMC) and fuzzy PID control illustrated that, with the fuzzy PID method, the dynamic response speed was lower and the tracking error performance was less precise. Considering that microgrids are sensitive to system parameter variations, droop control technology has been proposed to improve the robustness of microgrids by emulating the droop relationships among different electrical parameters (Avelar et al., 2012; Beerten and Belmans, 2013; Eren et al., 2015; Wang et al., 2019; Wang et al., 2021). Mi et al. (2019) modified traditional linear droop control strategies and utilized nonlinear droop relationships to describe the interactions between reactive power and voltages. T-S fuzzy theory was applied to approximate the nonlinear model accurately and coordinate power among the DGus. Nevertheless, there were still errors between the steady-state values and the nominal values of the voltages.

Recently, sliding mode control (SMC) strategies have been extensively applied to the stability control of microgrids for their superior asymptotic stability and robustness against parameter uncertainties (Hu et al., 2010; Karimi et al., 2010; Liu et al., 2017). An integrated model of microgrids with complex meshed topology structures and several DGus was constructed to achieve power sharing and robust voltage control (Cucuzzella et al., 2017; Wang et al., 2020). But the integrated model could not represent the actual interaction effects among different subsystems, and the chattering was serious. Mi et al. (2020) proposed an adaptive sliding mode control strategy based on a sliding mode observer for wind-diesel power systems. The microgrid bus voltage showed remarkable stability under this method via regulation of the reactive power. On the other hand, the disturbance observer and adaptive algorithm brought in numerous parameters and increased the complexity of the control system. To address the problem of harmonic disturbance in microgrids, Esparza et al. (2017) proposed a comprehensive control strategy to restrain the harmonic currents generated by DGus in AC microgrids. As shown in the simulation results, this strategy inherently caused chattering.

Motivated by the aforementioned discussions, for the multi-area microgrid with a ring topology, a decentralized voltage control model represents the relationships among the parameters in each local subsystem more appropriately than an integrated one. In addition, the adaptive sliding mode control (ASMC) strategy, designed according to the H∞ attenuation performance of each subsystem, can ensure the robustness of the interconnected systems against mismatched uncertainties and external perturbations. The main contributions of this paper can be summarized as follows: 1) the established multi-area microgrid model can depict the interactions among subsystems appropriately; 2) the reliability of the solutions and the attenuation performance against external disturbances can be ensured based on linear matrix inequalities (LMIs); 3) the proposed decentralized ASMC can restrain the chattering of the microgrid.

The rest of the paper includes four sections. Section Dynamical Models of Multi-Area Interconnected Microgrids constructs state equations with interconnections representing the topology structure of the microgrid system and defines tracking error models based on the nominal values of the state variables.
Section Proposed Decentralized Adaptive Sliding Mode Voltage Controller introduces the designed decentralized voltage controllers in terms of the proposed ASMC theory. Section Simulation Results provides the simulation results, and Section Conclusion presents the conclusion.

DYNAMICAL MODELS OF MULTI-AREA INTERCONNECTED MICROGRIDS

In order to explain the power transmission within the multi-area microgrid, the electrical three-phase diagram of the ring topology system composed of four DGus is shown in Figure 1. The researched microgrid consists of local loads, power transmission lines and DGus. Because of the various energy storage components in renewable generation systems, the DGus can be represented as DC voltage sources. The DGus connect with the points of common coupling (PCC) via VSCs and filters and provide power to local loads. PCCs can also link one area of the microgrid to another and connect the microgrid with the main grid. Considering the ring topology structure of the microgrid and the power transmission orientations among the different areas, the voltage control model of subsystem i with interconnections in the dq-coordinates can be obtained from Kirchhoff's Current Law (KCL) and Kirchhoff's Voltage Law (KVL) as Eqs. (1)-(8), where N is the number of DGus in the microgrid, and L_ti, C_ti and R_ti represent the inductance, capacitance and resistance of the filter connected with the DGu in subsystem i, respectively. The microgrid subsystems in the various areas are integrated via interconnecting lines. L_i and R_i are the inductance and resistance of the interconnecting line between subsystem i and the adjacent subsystem. ξ_ij is the orientation of the interconnecting line current between subsystem i and subsystem j (i ≠ j): ξ_ij = 1 and ξ_ij = −1 represent current flowing into and out of subsystem i, respectively, while ξ_ij = 0 represents no power exchange between subsystem i and subsystem j. V_di and V_qi are the direct and quadrature components of the PCC voltage in subsystem i. I_di and I_qi are the direct and quadrature components of the current of interconnecting line i. I_tdi, I_tqi, U_di and U_qi are the direct and quadrature components of the current and voltage generated by the DGu in subsystem i. I_ldi and I_lqi are the direct and quadrature components of the local loads.

The randomness of the local loads and the power generation intermittence of the DGus will cause frequency fluctuations in the microgrid. Therefore, we introduce parameter uncertainties and denote the system frequency ω = ω_0 + Δω. The matrix form of the dynamics (1)-(8) can be written as Eq. (9), where A_i ∈ R^{6×6}, B_i and F_i ∈ R^{6×2} are the system matrix, control input matrix and external disturbance matrix of the ith voltage control model of the microgrid. ΔA_i ∈ R^{6×6} is a time-varying matrix representing the frequency fluctuation, and E_ij ∈ R^{6×6} is the interconnection gain matrix consisting of ξ_ij and the parameters of interconnecting line i. Assume that the nominal vector of the state vector is given; in view of (9), the corresponding error dynamic model in subsystem i can be expressed as Eq. (11).
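Since the model above is written in the dq frame, a textbook reference sketch of one common (amplitude-invariant) Park transform from three-phase abc quantities is given below; this is standard material, not code from the paper, and sign conventions for the q-axis vary between references.

```python
import numpy as np

def park(xa, xb, xc, theta):
    """Rotate three-phase abc quantities into the dq frame at angle theta."""
    d = (2.0 / 3.0) * (xa * np.cos(theta)
                       + xb * np.cos(theta - 2 * np.pi / 3)
                       + xc * np.cos(theta + 2 * np.pi / 3))
    q = -(2.0 / 3.0) * (xa * np.sin(theta)
                        + xb * np.sin(theta - 2 * np.pi / 3)
                        + xc * np.sin(theta + 2 * np.pi / 3))
    return d, q

# Example: a balanced 60 Hz set maps to constant dq components.
t = np.linspace(0.0, 0.05, 500)
theta = 2 * np.pi * 60.0 * t
va = np.cos(theta)
vb = np.cos(theta - 2 * np.pi / 3)
vc = np.cos(theta + 2 * np.pi / 3)
vd, vq = park(va, vb, vc, theta)
print(vd.mean(), vq.mean())   # ~1.0 and ~0.0
```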
For the later proofs, we introduce the following lemmas, which are needed to ensure the asymptotic stability of the system.

Lemma 1: (Mnasri and Gasmi, 2011) Consider the unforced system (12). This system is regarded as quadratically stable and satisfies the H∞ norm bound ‖T_yω‖∞ < c if there exists a quadratic Lyapunov function V(x) = x^T P x, with P > 0, such that the corresponding dissipation inequality holds for all t > 0.

Lemma 2: (Mnasri and Gasmi, 2011) Let x and y be any vectors with appropriate dimensions. Then, for any scalar ε > 0, the following inequality holds: 2x^T y ≤ ε x^T x + ε^{-1} y^T y.

Lemma 3: (Mnasri and Gasmi, 2011) Consider a partitioned symmetric matrix where A and C are square matrices with appropriate dimensions. Then, this matrix is negative definite if and only if the matrices A and C − BA^{-1}B^T are negative definite.

PROPOSED DECENTRALIZED ADAPTIVE SLIDING MODE VOLTAGE CONTROLLER

The adaptive algorithm can optimize the parameters in the controller, and the decentralized strategy can improve the control performance. The design of the sliding surface needs to consider the stabilizing, tracking and disturbance-restraining performance of the system. The sliding mode control law usually contains two parts, the switching control law and the equivalent control law. The former forces the system state toward the sliding surface when it deviates from the surface, and the latter keeps the system state on the sliding surface once it has been reached. In order to design decentralized adaptive sliding mode voltage control laws for the error dynamic model (11), the following assumptions are introduced.

Assumption 1: All the parameter uncertainty matrices caused by frequency fluctuations are bounded.

Assumption 2: For each subsystem i, the external disturbances are bounded.

To improve the dynamic response performance, we define the integral sliding mode surface (15), where H_i ∈ R^{2×6} is a constant matrix such that H_iB_i is nonsingular and positive for all i ∈ N, and K_i ∈ R^{2×6} is the feedback matrix to be obtained via solving the LMIs. Substituting Eq. (11) into the derivative of the sliding surface (15) yields Eq. (16). When the state trajectory of the tracking error system reaches and remains on the sliding mode surface, it satisfies Eq. (17). Based on Eq. (17), the equivalent control law can be represented as Eq. (18). Substituting Eq. (18) into (11), the sliding mode dynamics can further be expressed as Eq. (19), which represents a more complicated tracking error system with parameter uncertainties and external disturbances. In the following procedures, we utilize Lyapunov theory to analyze the system stability and the tracking performance with respect to the nominal values of the currents and voltages in each subsystem. Furthermore, we consider the H∞ disturbance attenuation performance of the interconnected system and design LMIs in terms of an H∞ norm bound c_i.

Theorem 1: Assume that the tracking error system (19) satisfies Assumption 1 and Assumption 2. If there exist a feasible solution X_i > 0 and an R_i satisfying the LMI (20), where Ω_i = X_iA_i^T + A_iX_i − B_iR_i − R_i^T B_i^T + ε_iI and ε_i > 0 is a positive scalar, then the uncertain system (19) satisfies the H∞ condition and the sliding mode surface is asymptotically stable.

Proof: Select the quadratic Lyapunov function (21) for the tracking error system (19). Based on the H∞ performance bound for the closed-loop system (19), one can obtain the derivative (22). If ‖B_i‖ ≤ b_i and Assumption 1 is satisfied, using Lemma 1 we get (23); Equation (23) can be rewritten as (24). The system under the equivalent control law is stable if there is a feasible solution for the LMI (25). Define E_i = (E_i1, ..., E_iN), X_i = P_i^{-1}, and R_i = K_iX_i. After pre-multiplying and post-multiplying (26) by diag[X_i, I, ..., I], Eq. (26) can be rewritten as (27), from which (20) is obtained with the method of Lemma 3, and the proof is completed.
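A hedged CVXPY sketch of checking feasibility for an LMI of this block form (Ω_i ≺ 0 with X_i ≻ 0) and recovering K_i = R_iX_i^{-1}; the matrices and dimensions below are random placeholders, not the microgrid model, and the interconnection cross-terms of the full LMI (20) are omitted for brevity.

```python
import cvxpy as cp
import numpy as np

n, m, eps = 4, 2, 1e-3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

X = cp.Variable((n, n), symmetric=True)
R = cp.Variable((m, n))
S = cp.Variable((n, n), symmetric=True)   # symmetric stand-in for Omega

Omega = X @ A.T + A @ X - B @ R - (B @ R).T + eps * np.eye(n)
prob = cp.Problem(cp.Minimize(0),
                  [S == Omega,                    # route the LMI block through S
                   X >> eps * np.eye(n),          # X > 0
                   S << -eps * np.eye(n)])        # Omega < 0
prob.solve(solver=cp.SCS)

if prob.status == cp.OPTIMAL:
    K = R.value @ np.linalg.inv(X.value)          # feedback gain K = R X^{-1}
    print("feasible; K =\n", K)
```

The change of variables X = P^{-1}, R = KX is what makes the bilinear stabilization condition linear in the decision variables, which is why the theorem is stated in terms of (X_i, R_i) rather than (P_i, K_i).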
In the following, we design the switching control in terms of an adaptive algorithm to restrain the chattering of the states.

Theorem 2: Design the controller (28) for the closed-loop system (11) in terms of the feasible solution obtained via (20); then the system dynamics are asymptotically stable under the adaptive laws (29), where q_i1 and q_i2 are positive parameters.

Proof: Consider the Lyapunov function (30), where ã_i(t) = a_i − â_i(t) and ρ̃_i(t) = ρ_i − ρ̂_i(t). Based on (28), (29) and (30), its derivative is obtained, and evidently V̇_2i(t) ≤ 0 is verified. That means the system states will reach the designed sliding mode surface in finite time for arbitrary s_i(t) ≠ 0. Then, the proof is completed.
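A toy scalar illustration of this adaptive switching idea: the estimated gains grow only while the state is off the surface, so the switching amplitude stays bounded and chattering is limited. This is a didactic sketch with made-up dynamics and rates, not the paper's microgrid controller (28)-(29).

```python
import numpy as np

dt, T = 1e-4, 2.0
q1, q2 = 1.2, 0.9                    # adaptation rates (cf. q_i1, q_i2)
x, a_hat, rho_hat = 1.0, 0.0, 0.0    # state and adaptive gain estimates

for k in range(int(T / dt)):
    d = 0.5 * np.sin(2 * np.pi * 50 * k * dt)     # bounded disturbance
    s = x                                          # sliding surface s = x
    u = -(a_hat * abs(x) + rho_hat) * np.sign(s)   # adaptive switching control
    a_hat += q1 * abs(s) * abs(x) * dt             # adaptive laws: gains grow
    rho_hat += q2 * abs(s) * dt                    # only while |s| > 0
    x += (0.8 * x + u + d) * dt                    # uncertain plant x' = ax+u+d

print(f"|x(T)| = {abs(x):.3e}, a_hat = {a_hat:.2f}, rho_hat = {rho_hat:.2f}")
```

Once a_hat exceeds the unknown plant gain and rho_hat exceeds the disturbance bound, the reaching condition s*s' < 0 holds and the state converges to a neighbourhood of the surface, after which the gains stop growing.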
SIMULATION RESULTS

In this section, the proposed decentralized ASMC strategy is applied to the voltage control of the microgrid with ring topology. The microgrid under study contains four DGus (N = 4) and the nominal frequency is 60 Hz, i.e., ω_0 = 120π rad/s. The electrical parameters of the subsystems and the interconnecting lines are summarized in Table 1 and Table 2, respectively. The intermittence of the renewable generation and the uncertainty of the local loads in the microgrid influence the power sharing among subsystems and indirectly cause frequency fluctuations. In this case, we consider the frequency fluctuation Δω = sin(1000πt); the current generated by the DGu in subsystem one is increased by 20% at t = 1 s and recovers to the original state at t = 2 s. Furthermore, the current generated by the DGu in subsystem three is reduced by 50% at t = 1 s and recovers to the original state at t = 2 s. In contrast, the operating states of subsystems two and four are constant. In order to ensure power sharing, the voltages of the PCCs and the currents of the interconnecting lines should also change. The reference values of the voltages and currents in each subsystem are given in Table 3.

In the following simulation procedures, we analyze the simulation results and verify the validity of the proposed decentralized ASMC strategies under frequency disturbances and local load uncertainties. Because the system uncertainties are bounded, we can take a_i ≥ 1 and b_i = 1.7321; the initial states are specified accordingly. In order to restrain the mismatched uncertainties and external disturbances in the tracking error system (11), the adaptive parameters are selected as q_11 = 1.2, q_21 = 0.9, q_31 = 1.1, q_41 = 1.5.

The voltages and currents of the islanded microgrid demonstrate superior robustness and tracking error performance under the proposed decentralized ASMC strategy. The time evolutions of the dq-components of the currents generated by each DGu are depicted in Figure 2 and Figure 3, respectively, for the case of load current mutations. I_ld1 increases by 20 A at t = 1 s and I_td1 increases by 20 A synchronously, while I_tq1 does not change significantly. Similarly, the current generated by DGu3 varies with the local load current in subsystem 3, and its dynamics behave analogously.

The time evolutions of the dq-components of the PCC voltages of each subsystem are shown in Figure 4 and Figure 5. Considering the load current variations in the interconnected microgrid system, V_d1 is decreased by 5 V and V_d3 is increased by 5 V. Meanwhile, I_d1 provided by subsystem one is reduced and I_d3 provided to subsystem four is increased; thus, the power sharing among the subsystems can be ensured. As shown in Figure 4, there exist deviations between the initial voltage values and the nominal values, but the system voltages still track the nominal values and remain stable. While the d-components of the voltages change on a small scale, the q-components of the voltages fluctuate and restabilize within a short time.

The time evolutions of the dq-components of the currents of the interconnecting lines are represented in Figure 6 and Figure 7. Under the influence of the mutations of the load currents and PCC voltages, the currents of the interconnecting lines are only influenced by the voltages of the PCCs and do not deviate appreciably from the reference values, which illustrates the remarkable robustness of the proposed ASMC strategy against external disturbances and mismatched uncertainties. That also means that changes of the local load currents will not influence the stability of the overall microgrid. Notably, the current orientation of line four is opposite to the currents of the other lines because the voltage of PCC4 is lower than that of PCC1.

Figure 8 and Figure 9 depict the three-phase waveforms of the currents generated by the DGus. Figure 10 and Figure 11 depict the three-phase waveforms of the voltages of the PCCs in subsystem one and subsystem three. In islanded mode, the decentralized controllers are mainly designed to standardize the voltage and current waves and improve the power quality of the AC microgrid. In Figure 8 and Figure 9, the amplitude of the three-phase wave of the current generated by DGu1 increases by 20 A and that of DGu3 decreases by 20 A at t = 1 s, which matches the variations of the load currents. In Figure 10 and Figure 11, it is obvious that the three-phase waves of the voltages of PCC1 and PCC3 maintain a standard waveform, which adequately proves the effectiveness and reliability of the method.

CONCLUSION

In this paper, the chattering restraint issues for voltage control of the islanded microgrid with a ring topology structure have been solved via decentralized adaptive sliding mode control strategies. The constructed tracking error system with interconnections depicts the interactions among the subsystems appropriately. The control matrices in the sliding mode surfaces have been obtained via solving the LMIs, which combine the H∞ attenuation performance against the system's external disturbances with the asymptotic stability of the integral sliding mode surfaces. The controller parameters have been optimized by means of adaptive algorithms. The simulation results have illustrated the effectiveness of the proposed decentralized ASMC strategies. Future research will extend these results to nonlinear and time-delay systems.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
4,552.8
2021-08-25T00:00:00.000
[ "Engineering" ]
Insect wing deformation measurements using high speed digital holographic interferometry An out-of-plane digital holographic interferometry system is used to detect and measure insect wing micro-deformations. The in-vivo phenomenon of flapping is registered using a high-power cw laser and a high-speed camera. A series of digital holograms with the deformation encoded is obtained. Full-field deformation maps are presented for an eastern tiger swallowtail butterfly (Pterourus multicaudata). Results show that the deformations are neither uniform nor symmetrical between wings. These deformations are on the order of hundreds of nanometers over the entire surface. Out-of-plane deformation maps are presented using the unwrapped phase maps. ©2010 Optical Society of America OCIS codes: (120.0120) Instrumentation, measurement and metrology; (120.2880) Holographic interferometry; (120.4290) Nondestructive testing References and links 1. R. Jones and C. Wykes, Holographic and Speckle Interferometry (Cambridge Univ. Press, 1989). 2. K. J. Gasvik, Optical Metrology (John Wiley & Sons, Ltd., 2002). 3. R. K. Erf, Holographic Nondestructive Testing (Academic Press Inc., 1974). 4. C. M. Vest, Holographic Interferometry (John Wiley & Sons, 1979). 5. P. K. Rastogi, Digital Speckle Pattern Interferometry and Related Techniques (John Wiley & Sons, Ltd., 2001). 6. S. Schedin, G. Pedrini, and H. J. Tiziani, “Pulsed digital holography for deformation measurements on biological tissues,” Appl. Opt. 39(16), 2853–2857 (2000). 7. D. L. Grodnitsky, Form and Function of Insect Wings (Johns Hopkins University Press, 1999). 8. R. Dudley, The Biomechanics of Insect Flight (Princeton University Press, 2000). 9. S. P. Sane, “The aerodynamics of insect flight,” J. Exp. Biol. 206(23), 4191–4208 (2003). 10. C. P. Ellington, “The novel aerodynamics of insect flight: applications to micro-air vehicles,” J. Exp. Biol. 202(Pt 23), 3439–3448 (1999). 11. T. L. Hedrick, J. R. Usherwood, and A. A. Biewener, “Wing inertia and whole-body acceleration: an analysis of instantaneous aerodynamic force production in cockatiels (Nymphicus hollandicus) flying across a range of speeds,” J. Exp. Biol. 207(10), 1689–1702 (2004). 12. J. R. Usherwood and C. P. Ellington, “The aerodynamics of revolving wings I. Model hawkmoth wings,” J. Exp. Biol. 205(Pt 11), 1547–1564 (2002). 13. S. J. Steppan, “Flexural stiffness patterns of butterfly wings (Papilionoidea),” J. Res. Lepid. 35, 61–67 (1996). 14. S. A. Combes and T. L. Daniel, “Flexural stiffness in insect wings. I. Scaling and the influence of wing venation,” J. Exp. Biol. 206(17), 2979–2987 (2003). 15. J. R. Usherwood and C. P. Ellington, “The aerodynamics of revolving wings II. Propeller force coefficients from mayfly to quail,” J. Exp. Biol. 205(Pt 11), 1565–1576 (2002). 16. S. Sudo, K. Tsuyuki, and K. Kanno, “Wing characteristics and flapping behavior of flying insects,” JSEM 45, 550–555 (2005). 17. R. B. Srygley and A. L. R. Thomas, “Unconventional lift-generating mechanisms in free-flying butterflies,” Nature 420(6916), 660–664 (2002). 18. A. L. R. Thomas, G. K. Taylor, R. B. Srygley, R. L. Nudds, and R. J. Bomphrey, “Dragonfly flight: free-flight and tethered flow visualizations reveal a diverse array of unsteady lift-generating mechanisms, controlled primarily via angle of attack,” J. Exp. Biol. 207(24), 4299–4323 (2004). 19. M. Dickinson, “Solving the mystery of insect flight,” Sci. Am. 284, 34–41 (2001). 20. J. Yan, R. J. Wood, S. Avadhanula, M. Sitti, and R. S.
Fearing, “Towards flapping wing control for a micromechanical flying insect,” in Proceedings of IEEE Conference 4 (IEEE, 2001), pp. 3901–3908. 21. S. Avadhanula, R. J. Wood, E. Steltz, J. Yan, and R. S. Fearing, “Lift force improvements for the micromechanical flying insect,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems 2 (IEEE, 2003), pp. 1350–1356. 22. I. D. Wallace, N. J. Lawson, A. R. Harvey, J. D. C. Jones, and A. J. Moore, “High-speed photogrammetry system for measuring the kinematics of insect wings,” Appl. Opt. 45(17), 4165–4173 (2006). 23. C. Pérez-López, M. H. De la Torre-Ibarra, and F. Mendoza Santoyo, “Very high speed cw digital holographic interferometry,” Opt. Express 14(21), 9709–9715 (2006). 24. S. Sunada, D. Song, X. Meng, H. Wang, L. Zeng, and K. Kawachi, “Optical measurement of the deformation, motion and generated force of the wings of a moth, Mythimna separata (Walker),” JSME Int. J. Ser. B 45(4), 836–842 (2002). 25. I. R. Hooper, P. Vukusic, and R. J. Wootton, “Detailed optical study of the transparent wing membranes of the dragonfly Aeshna cyanea,” Opt. Express 14(11), 4891–4897 (2006). 26. G. Pedrini, W. Osten, and M. E. Gusev, “High-speed digital holographic interferometry for vibration measurement,” Appl. Opt. 45(15), 3456–3462 (2006). 27. S. Schedin, G. Pedrini, H. J. Tiziani, and F. M. Santoyo, “Simultaneous three-dimensional dynamic deformation measurements with pulsed digital holography,” Appl. Opt. 38(34), 7056–7062 (1999). 28. M. De la Torre-Ibarra, F. Mendoza-Santoyo, C. Pérez-López, and S. A. Tonatiuh, “Detection of surface strain by three-dimensional digital holography,” Appl. Opt. 44(1), 27–31 (2005). 29. A. Fernández, A. J. Moore, C. Pérez-López, A. F. Doval, and J. Blanco-García, “Study of transient deformations with pulsed TV holography: application to crack detection,” Appl. Opt. 36(10), 2058–2065 (1997). 30. N. K. Mohan, A. Andersson, M. Sjödahl, and N.-E. Molin, “Optical configuration for TV holography measurement of in-plane and out-of-plane deformations,” Appl. Opt. 39(4), 573–577 (2000). 31. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982). Introduction Recent advances in non-contact optical techniques, which are mainly applied to measure superficial deformations on an object, allow indirect detection of its mechanical properties. Most of these techniques offer higher resolution than comparable, more traditional mechanical tests [1,2]. Digital holographic interferometry (DHI) is an optical non-contact method which generates qualitative and quantitative information on an object's displacement [3][4][5].
DHI has been applied to inspect biological samples as an alternative to traditional invasive techniques [6]. The research reported here stems from the interest of scientists and engineers to design, develop, and improve flying systems. Insect wing flapping may prove to render very useful data that will contribute to a better understanding of the aerodynamic properties of man-made aircraft. Previous studies have helped to gain a better understanding of the structure, shape and behavior of winged animals, in particular in trying to reproduce the complex characteristics involved in flying. Most of this knowledge is already applied in modern aerodynamic models which allow more efficient airplanes, rockets, etc. [7][8][9], and has served to enhance physical parameters such as air pressure and friction reduction [10]. In recent years, research on flying structures has been focused primarily on newer techniques like computer modeling, new pressure sensors, computational simulation, and flow visualization [11][12][13][14][15][16][17][18]. Further research used and developed nano-electronic models with complex computer control systems which simulate an insect's flight [19][20][21]. Photogrammetry is yet another useful technique, applied to extract the kinematics of several marked points on an insect wing during tethered and hovering flight [22]. In this manuscript, a new approach to the study of insect wing deformation during flight is presented. A DHI system with a high-speed camera and a high-output-power cw laser is used to record fast, non-repeatable events [23][24][25] and to detect very fast wing deformations with interferometric resolution. A series of deformation maps obtained during insect flapping is presented and the results are discussed. This optical non-destructive technique may prove to be an extraordinary alternative for understanding the phenomena of insect wing deformation in events such as the up-stroke and down-stroke movements. Model DHI is a remote, non-invasive, whole-field optical technique based on the interference between an object and a reference beam. The interference intensity signal is recorded by a 2D sensor, and an image hologram is thus obtained. In order to observe a relative deformation between two different states of the sample under study, it is necessary to compare a reference state with a second one in which the object has suffered a deformation (viz. ref. 1).
The result of this comparison is a wrapped phase map directly related to the object surface deformation. DHI may be applied to quantify parameters such as mechanical vibrations [26], elastic deformations [27], strain [28] and cracks [29], among many other successful applications. Most systems are designed to have in-plane or out-of-plane displacement sensitivity [30]. In both cases, the intensity recorded by the camera sensor can be represented in general form as I(x, y) = |R(x, y)|^2 + |U(x, y)|^2 + R(x, y)U*(x, y) + R*(x, y)U(x, y), where x and y are the spatial pixel coordinates, R(x, y) and U(x, y) are the reference and object beam amplitudes, and * denotes the complex conjugate amplitude of these quantities. To obtain the relative phase map between the two object states, a Fourier transform and its inverse are applied to each digital holographic interferogram, and upon subtraction of the two inverse-transformed images (see for instance ref [31]) the resulting relative phase may be found from ∆φ_n = arctan{[Re(U_{n−1})Im(U_n) − Re(U_n)Im(U_{n−1})] / [Re(U_{n−1})Re(U_n) + Im(U_{n−1})Im(U_n)]}, where ∆φ_n is the relative wrapped phase map between a reference state hologram (I_{n−1}) and an n-th hologram (I_n), with U_{n−1} and U_n the corresponding inverse-transformed complex fields. Re and Im represent the real and imaginary part of a complex number. Finally, the wrapped phase maps are unwrapped using a minimum cost matching algorithm (the commercially available Pv_psua2 from Phase Vision Ltd. was used), which generates a decoded displacement map. Experimental Method The optical setup configured to measure out-of-plane deformations is schematically shown in Fig. 1. The light from a Verdi laser (Coherent V6), with a maximum output power of 6 W at 532 nm, is divided into object and reference beams using a 50:50 beam splitter (BS). The object beam illuminates the insect through a 20X microscope objective, covering the entire insect's surface. The reference beam is launched into a single-mode optical fiber and is combined with the object beam using a 50:50 beam combiner (BC) in front of the camera sensor. In order to observe the entire insect, the field of view (FOV) is set to image an area of 90 × 100 mm. The interference between object and reference beams is captured using a high-speed camera (NAC GX-1) with an image resolution of 1024 × 1280 pixels at 10-bit dynamic range. The butterfly chosen for the purpose of this research, Pterourus multicaudata, is common in the local ecosystem, making it easy to capture, and due to the very large numbers found there is no risk of endangering the species. The specimen was obtained while in its pupal stage. A couple of hours after it emerged from the pupa, it had a size of 88 × 130 mm in height and width, respectively. To perform the in-vivo experimental measurement it is necessary to fix the butterfly onto a rigid surface while avoiding, or at least minimizing, any damage to it. The procedure followed, with the help of an expert on the subject and co-author of this manuscript, was to glue each leg to a dark metal post and to wrap a thread around the insect at two contact points such that it was left free to move its wings. This procedure avoided the need to pass a pin through the butterfly, which would modify its wing movements and eventually kill it. The butterfly was thus minimally affected and its wing flapping may be safely considered unconstrained. Each experimental test lasted only a few seconds, after which the insect was released and set free.
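As a sketch of the Fourier-transform processing described in the Model section, the following Python fragment extracts the wrapped phase difference between two synthetic off-axis holograms; the carrier frequency, filter window and test object are illustrative assumptions, not the parameters of the actual experiment.

import numpy as np

# Sketch of the Fourier-transform phase-difference computation (synthetic data).
N = 512
y, x = np.mgrid[0:N, 0:N]
carrier = 2 * np.pi * 40 * x / N                  # off-axis carrier fringes
bump = 3.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 60 ** 2))

def hologram(phase):
    # Interference intensity: I = |R|^2 + |U|^2 + cross terms -> cosine fringes.
    return 2.0 + 2.0 * np.cos(carrier + phase)

def complex_field(I):
    # Fourier transform, isolate one spectral sideband, inverse transform.
    S = np.fft.fftshift(np.fft.fft2(I))
    win = np.zeros_like(S)
    cx = N // 2 + 40                               # sideband centre set by the carrier
    win[N // 2 - 30:N // 2 + 30, cx - 30:cx + 30] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(S * win))

U_ref = complex_field(hologram(np.zeros_like(carrier)))   # reference state I_{n-1}
U_def = complex_field(hologram(bump))                     # deformed state I_n

# Wrapped phase difference; np.angle evaluates the same Re/Im combination
# as the arctan expression above, wrapped to (-pi, pi].
dphi = np.angle(U_def * np.conj(U_ref))
print(dphi.shape, float(dphi.min()), float(dphi.max()))

On real holograms the only changes are that the two intensities come from consecutive camera frames and that the sideband window must be placed on the actual carrier peak in the spectrum.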
In Fig. 2 an actual image of the butterfly in its position during the test is shown and the main body parts are indicated; to ensure natural wing movement, no special preparation was used on the wing surfaces. Once the insect is in position in front of the imaging system, a series of images is recorded at 500 frames per second, which is the ideal CMOS camera repetition rate found to perform the experiments, i.e., the camera repetition rate required to momentarily freeze the wing movement. During the recording process the electronic shutter of the CMOS camera was open; however, due to its working characteristics, the exposure time is much less than 1 ms. Besides, the flapping frequency is about 15 times slower than the sampling rate of the camera. For the recording, the butterfly is freely flapping and the up- and down-stroke movements are registered. Figures 3a, 3b, 3c and 3d show a series of wrapped phase maps at different, uncontrolled, instants of the wing flapping. The wrapped phase maps represent variations from −π to π and encode the insect's wing deformations at these uncontrolled time instants. The deformations presented in these figures do not represent the whole amplitude of the flapping movement, which involves centimeters of displacement; instead, the images refer to micro-deformations along the wings between any two consecutive images at 500 fps. With this information it is possible to reconstruct the real out-of-plane deformation map by unwrapping all wrapped phase maps. The corresponding unwrapped phase maps are shown in Figs. 4a, 4b, 4c and 4d, from which data quantification for the deformation present in the wing's surface during time intervals of 2 ms may be readily calculated. For these figures the displacement range goes from −0.9 µm to +0.9 µm. Conclusions and discussion To the best of our knowledge, this is the first report in the internationally available literature of an optical technique used to observe the full-field deformation of butterfly wings during flapping. From Fig. 3 it is possible to observe that the forewing and the hindwing have independent flapping movements, not necessarily symmetric between them. Given the absence of any treatment of the insect's wings, such as the application of white developing powder, which would kill the butterfly within minutes of its application, dark regions produce weak scattering, which then introduces discontinuities during the unwrapping process (see Fig. 2). If the dark region on the wing (a natural dark fringe on the wing) is masked out, the unwrapping process is greatly improved, as can be seen in Fig. 4.
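The masking step can be reproduced with standard tools; the sketch below uses scikit-image's unwrap_phase, a reliability-sorting unwrapper standing in for the commercial minimum-cost-matching algorithm used in the paper, on a synthetic wrapped map with a masked low-signal band.

import numpy as np
from skimage.restoration import unwrap_phase

# Synthetic out-of-plane deformation phase, wrapped to (-pi, pi].
N = 256
y, x = np.mgrid[0:N, 0:N]
true_phase = 12.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 50 ** 2))
wrapped = np.angle(np.exp(1j * true_phase))

# Mask a dark, weakly scattering band (the analogue of the wing's dark fringe);
# masked pixels are excluded so they cannot seed unwrapping discontinuities.
mask = np.zeros((N, N), dtype=bool)
mask[:, 100:120] = True
unwrapped = unwrap_phase(np.ma.array(wrapped, mask=mask))

# Phase-to-displacement conversion for out-of-plane sensitivity with nearly
# normal illumination and observation: d = phase * lambda / (4*pi).
lam = 532e-9
displacement = unwrapped * lam / (4 * np.pi)
print(float(displacement.max()) * 1e9, "nm peak (reconstructed)")

The recovered peak is a few hundred nanometers, the same scale as the deformations reported here; the mask simply propagates through the unwrapping, leaving the dark band undefined rather than corrupted.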
In all experiments we considered the wing as a whole unit, but further study is needed in order to describe the effect of the scales present on the wing, which under close observation move independently of one another. It is important to remark that this is a specific behaviour of this butterfly under the particular conditions mentioned. The same behaviour should not be expected in every winged insect, because it depends on the wing shape, the insect morphology and the flight conditions. Results show the behaviour for this butterfly (Pterourus multicaudata), which belongs to a very large family of species. Future study and research on insect wing flapping will render useful data that will no doubt contribute to a better understanding of the aerodynamic properties of man-made aircraft. The advantages of having a high-speed camera for non-repeatable events, like this butterfly's flapping, are shown in this work; such features are not observable with the naked or unaided eye, or with conventional cameras. This technique has high resolution and high sensitivity at 500 fps without needing any extra data processing to extract the deformation maps. Fig. 1. Schematic view of the experimental setup where a high-speed camera is used (HS-CMOS). The backscattering coming from the object is collected by means of a 125 mm focal length lens (L) located behind an aperture (A). The reference beam is launched into a single-mode optical fiber. Fig. 4. (a), (b), (c) (Media 2) and (d) (Media 3) represent butterfly wing surface deformation recovered from DHI measuring experiments. The media files show different moments of the flapping during the test and as such constitute a movie of different unwrapped states.
3,579.8
2010-03-15T00:00:00.000
[ "Physics" ]
Volatile Composition and Enantioselective Analysis of Chiral Terpenoids of Nine Fruit and Vegetable Fibres Resulting from Juice Industry By-Products Fruit and vegetable fibres resulting as by-products of the fruit juice industry have gained popularity because they can be valorised as food ingredients. In this regard, bioactive compounds have already been studied, but little attention has been paid to their remaining volatiles. Considering all the samples, 57 volatiles were identified. Composition greatly differed between citrus and noncitrus fibres. The former presented over 90% of terpenoids, with limonene being the most abundant and ranging from 52.7% in lemon to 94.0% in tangerine flesh. Noncitrus fibres showed more variable compositions, with the predominant classes being aldehydes in apple (57.5%) and peach (69.7%), esters (54.0%) in pear, and terpenoids (35.3%) in carrot fibres. In addition, enantioselective analysis of some of the chiral terpenoids present in the fibres revealed that the enantiomeric ratio for selected compounds was similar to the corresponding volatile composition of raw fruits and vegetables and some derivatives, with the exception of terpinen-4-ol and α-terpineol, which showed variation, probably due to the drying process. The processing to which the fruit residues were submitted produced fibres with low volatile content for noncitrus products. In contrast, the citrus fibres analysed still presented a high volatile content when compared with noncitrus ones. Introduction The recovery, recycling, and upgrading of waste material are particularly relevant in the food and food processing industry, in which waste, effluents, residues, and by-products can be reclaimed and often turned into useful higher-value-added products [1]. The food industry can take advantage of the physicochemical properties of these products to improve the viscosity, texture, sensory characteristics, and shelf life of final products. Hence, fibre-rich by-products can serve as inexpensive, noncaloric bulking agents for the partial replacement of flour, fat, or sugar. They can also be used to enhance water and oil retention and to improve the emulsion or oxidative stability of food products [2,3]. Due to the increasing importance of these products in the food industry, several studies have addressed their characterisation, either of physicochemical properties [4,5] or of composition in bioactive compounds [6,7]. Although aroma is a key sensory attribute to consider when using a product in the food industry, to the best of our knowledge, only one study has been devoted to the volatile composition of one such by-product, namely, apple [8].
Gas chromatography-mass spectrometry (GC-MS) is the ideal technique to analyse the composition of the volatile fraction of fibres derived from the juice industry, since GC offers high separation power and MS provides useful spectra for compound identification and quantification. On the other hand, solid-phase microextraction (SPME), introduced by Arthur and Pawliszyn [9] and extended to headspace (HS) sampling by Zhang and Pawliszyn [10], is a reliable routine technique to sample the volatile fraction of complex matrices because of its simplicity, sensitivity, possibility of automation, and lack of solvent use. Enantioselective gas chromatography (Es-GC) analysis using cyclodextrins as chiral selectors has been applied in the quality control of several fruits and beverages to detect adulteration with synthetic flavours [11,12] and to monitor the possible effects of orange juice thermal processing on the enantiomeric ratio of several terpenic components [13]. Therefore, the study of the enantiomeric ratio of diagnostic chiral volatile compounds present in fibre samples can offer further useful information for their comparison. The aim of this work was to characterize the volatile fraction of several fruit and vegetable matrices which play an important role in juice-producing industries and are expected to be further applied as food ingredients, resulting in the valorisation of what was initially considered a residue. The fibres analysed included apple, pear, peach, carrot, lemon flesh, orange flesh, orange peel, tangerine flesh, and tangerine peel. These fibres were obtained from several batches of processed industrial raw material from a currently operative juice production line. Moreover, the composition of these fruit-derived by-products has been compared to the results of several existing studies reporting the volatile composition of raw fruits and juices, to assess the differences between fruits and the related fibres resulting from processing. In addition, an enantioselective analysis of some of the chiral terpenes present in the fruit fibre samples was performed and their enantiomeric ratios were compared to those reported in the literature, in order to determine possible variations caused by the processing to which the fruit was subjected in the juice industry. Headspace Solid-Phase Microextraction (HS-SPME). Between 100 mg and 1 g of fruit fibre, depending on the sample, was homogenised in 10 mL of H2O saturated with NaCl and placed in a 20 mL headspace vial.
Gas Chromatography-Mass Spectrometry (GC-MS) Analysis. GC-MS analyses were performed with an MPS-2 multipurpose sampler (Gerstel, Mülheim an der Ruhr, Germany) assembled on an Agilent 6890 (Palo Alto, CA, USA) gas chromatograph coupled to an Agilent 5973N Quadrupole Mass Selective Detector (MSD). The SPME fibre was desorbed into the injection port at 250 °C in split mode (ratio 1:5) for 5 min. Compounds were separated with a MEGA5 column (30 m × 0.25 mm i.d. × 0.25 μm film thickness) from Mega (Legnano, MI, Italy) using helium as carrier gas (1 mL·min−1). The oven was temperature-programmed from 50 °C (held for 1 min) to 160 °C at 3 °C·min−1 and then to 250 °C at 20 °C·min−1 (held for 2 min). Mass spectra were recorded in electron impact (EI) mode at 70 eV within the mass range m/z 35-350. The transfer line, the ionization source, and the quadrupole were thermostated at 280, 230, and 150 °C, respectively. Acquisition was done using MSD ChemStation software (Agilent Technologies, Palo Alto, CA, USA). All analyses were performed in duplicate. Volatile compound identification was based on the comparison of experimental spectra with those of the Wiley 7 and Essential Oils mass spectral libraries (Wiley, New York, NY, USA) and was further confirmed by linear retention indices (LRI) calculated using an n-alkane mixture (C9-C30) [14], which were compared to those reported in the Adams database [15] and the NIST WebBook [16]. Peak areas calculated from the total ion current (TIC) for each compound were normalised by in-fibre internal standardisation [17] as follows: 5 μL of a 50 ppm solution of tridecane in dibutyl phthalate was sampled for 15 min at 50 °C, and the relative abundance data (percentage of total volatile composition) were then calculated. This procedure was adopted to normalise the analytical deviation produced by variations in the performance of the fibre and instrumentation [17]. Enantioselective Gas Chromatography (Es-GC) Analysis. Fruit fibres were manually sampled using the same conditions as described in Section 2.2. The analyses were carried out on a Shimadzu GC-2010 system coupled to a FID detector and controlled with Shimadzu GC Solution 2.30.00 software (Shimadzu, MI, Italy). The SPME fibre was desorbed into the injection port at 220 °C in split mode (ratio 1:5) for 5 min. Analyses were carried out on columns coated with 30% 2,3-di-O-ethyl-6-O-tert-butyldimethylsilyl-β-cyclodextrin (diEt-β-CD) diluted in PS-086 and 30% 2,6-dimethyl-3-O-pentyl-β-cyclodextrin (Pentyl-β-CD) diluted in PS-086, both from Mega (Legnano, MI, Italy), using hydrogen as carrier gas (1.25 mL·min−1). The oven was temperature-programmed from 50 °C to 127 °C at 1.87 °C·min−1 and then to 220 °C at 15 °C·min−1 (held for 1 min). The chromatographic conditions were selected on the basis of the conditions used for the construction of the dedicated chiral library [18] and translated using the GC Method Translator software (Agilent). LRI were calculated using a mixture of n-alkanes (C9-C30). The elution order of each enantiomer was assigned using a dedicated chiral library of racemic standards available in the laboratory [18].
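Both methods identify compounds partly through linear retention indices computed against the C9-C30 n-alkane series; the following is a minimal Python sketch of that calculation, in which the retention times are made-up illustrative values rather than data from this study.

import bisect

def linear_retention_index(rt, alkane_rts):
    # Linear retention index of a peak at retention time rt (min), given a
    # {carbon_number: retention_time} mapping for the n-alkane series
    # (van den Dool and Kratz formula for temperature-programmed GC).
    carbons = sorted(alkane_rts)
    times = [alkane_rts[c] for c in carbons]
    i = bisect.bisect_right(times, rt) - 1         # bracketing alkane below rt
    if i < 0 or i >= len(carbons) - 1:
        raise ValueError("retention time outside the alkane series")
    n, t_n, t_next = carbons[i], times[i], times[i + 1]
    return 100 * (n + (rt - t_n) / (t_next - t_n))

alkanes = {9: 4.10, 10: 6.55, 11: 9.30, 12: 12.10}   # assumed times (min)
print(round(linear_retention_index(7.90, alkanes)))   # -> 1049, in the monoterpene range

The computed index is then matched against library values (Adams, NIST WebBook) within a small tolerance to confirm the mass-spectral identification.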
Analysis of the Volatile Fraction of Fruit Fibres. The HS-SPME-GC-MS method described above was used to analyse the volatile fraction of nine fruit fibres derived from processed industrial raw materials obtained from a juice production line. Volatiles were identified through their LRI and mass spectral data. As expected, the profile of the chromatograms revealed a high similarity between the citrus samples, namely, orange, orange peel, tangerine, tangerine peel, and lemon. On the other hand, the volatile fraction of the apple, pear, peach, and carrot samples was relatively poor. Figure 2 shows the HS-SPME-GC-MS profile corresponding to lemon fibre. Peach and lemon fibres were used to evaluate the repeatability of the method. Five replicates were analysed for each fibre on various days, resulting in a satisfactory RSD < 11% for both fibres. Volatile Composition of Citrus Fibres. The volatile composition of citrus fibres (Table 1) consisted mainly of terpenoids, especially limonene, which accounted for about 52.7% of the total volatile fraction in lemon and over 90% in orange and tangerine fibres. Although limonene was the predominant volatile compound, all samples showed relatively high percentages of a large number of other terpenoids. For instance, lemon fibre contained, among others, 13.7% p-cymene, 7.4% γ-terpinene, 5.1% terpinolene, 4.7% α-terpineol, and several other compounds at lower percentages. Aldehydes accounted for 8.5% of the total volatile composition in lemon fibre, the most abundant of them being furfural, which probably derived from the decomposition of sugars on the fibre. Other aldehydes found in the lemon samples were heptanal, hexanal, (E)-2-heptenal, benzaldehyde, and nonanal. Ketones, esters, and alcohols were also found in the samples but at low concentrations (in all cases below 1%). Of note, the composition of the volatile fraction of citrus fibre is qualitatively comparable to those of raw fruits, essential oils [20], and juices [21][22][23]. Orange and tangerine flesh samples presented almost the same volatile composition, again showing a profile clearly dominated by terpenes (99.4 and 98.9% of total volatiles, with a high predominance of limonene, 92.3 and 94.0%, respectively). The same behaviour was observed for the orange and tangerine peel fibres, which showed the same individual volatiles and similar percentages of the same. Moreover, the orange and tangerine peel samples presented a greater variety of compounds, including some terpenic acetates (e.g., terpinyl, citronellyl, and neryl acetate) and sesquiterpenoids, such as α-cubebene, alloaromadendrene, and β-caryophyllene, as well as β-(E)-ionone, which were not detected in the flesh samples. On the basis of the total area, the residual amount of volatile fraction in tangerine and orange peel was higher than that in the corresponding flesh, the latter being much higher than the amount found in lemon. This finding is in agreement with previous studies that report a major content of volatile compounds, especially of limonene, in orange peel compared to orange flesh [24,25]. Volatile Composition of Noncitrus Fruit and Carrot.
Unlike the citrus fibres, apple, pear, peach, and carrot fibres showed a volatile composition with a lower percentage of terpenoids (Table 2). In this case, the analyses revealed that the most abundant group of compounds in apple fibre was that of aldehydes (57.5%), the main ones being hexanal (19.7%), benzaldehyde (15.6%), and (E)-2-heptenal (14.9%). Esters accounted for 16.3% of the volatile fraction, with butyl isobutyrate (12.1%) as the major component. Also, ketones were present in a considerable amount (11.6%), while terpenoids accounted for 11.4%. The composition of the volatile fraction of apple fibre was severely affected during fibre production when compared to that of the raw fruit described in several publications [26,27]. This observation could be attributed to the thermal treatment used during the juicing process. Former studies report ethyl esters, higher alcohols, and α-farnesene as the main components, rather than aldehydes. Pear fibre contained esters as the main constituents (54.0%), with hexyl acetate being the most abundant (49.1%). Volatile aldehydes accounted for a substantial fraction of these samples (32.8%), the most abundant being furfural (15.2%), followed by hexanal, (E)-2-heptenal, benzaldehyde, octanal, and heptanal. Other groups of compounds, such as alcohols, ketones, ethers, and terpenoids, were present in minor percentages. In this case, the volatile fraction of pear fibre is qualitatively comparable to that of the raw fruits reported in previous studies [28], where esters were found to be the main fraction. Riu-Aumatell et al. [29] reported hexyl acetate as one of the compounds consistently found in 11 commercial samples of pear juice. Peach fibre also showed a high proportion of aldehydes (69.7%), where furfural (43.2%) and hexanal (17.4%) prevailed, together with heptanal, benzaldehyde, (E)-2-heptenal, and nonanal in percentages ranging on average between 1.4 and 2.6%. For these samples, terpenoids accounted for 22.4% of the volatile fraction. The main terpenoids found in peach fibre were α-terpineol, limonene, and phellandrene. Ketones and ethers were present in lower percentages, 6.1 and 1.6%, respectively. The volatile fraction of the peach fibres contained several terpenoids at percentages comparable to those of the raw fruits [29], while lactones, key markers of peach aroma [30,31], were not detected.
The volatile fraction of carrot fibre contained terpenoids as the main group of compounds (35.3%). Other studies have reported that these compounds account for 97% of the total volatile fraction of fresh carrot samples [32]; the lower percentage found in the analysed sample could be explained by the loss of volatiles during the washing and drying treatment applied during industrial fibre processing. The most abundant components of carrot fibre were α- and β-ionone, at 8.1 and 9.8%, respectively. The correlation between carotenoid degradation caused by processing and the production of degradative terpenes such as ionones has been described by Kanasawud and Crouzet [33]. Aldehydes accounted for 32.8% of the total volatile composition of this fibre, with hexanal at 20.3%, and ketones accounted for 16.1%. The latter included 1-octen-3-one, 6-methyl-5-hepten-2-one, 2-methyl-3-octanone, 2,2,6-trimethylcyclohexanone, and 2,3,4-trimethylcyclohexen-1-one, all present at between 1.9 and 6.5%. Esters, ethers, and alcohols were present at 5.1, 1.8, and 1.3%, respectively. On the basis of the total area, the volatile fraction of noncitrus fibre was about 10-fold lower than that of citrus flesh fibre and almost 100-fold lower than that of citrus peel fibre, that is, the matrices containing the highest amounts of volatile compounds. Es-GC Analysis of Chiral Markers in Fruit Fibre Samples. Here, we sought to study some of the chiral markers present in the fruit fibre samples in order to assess whether the processing (which includes thermal treatment) affects the enantiomeric ratio (ER), that is, whether it increases the racemisation of some chiral compounds. HS sampling by SPME was therefore applied under the same optimised conditions as previously described in Section 3.1, in combination with Es-GC with cyclodextrin derivatives as chiral selectors. The ER of the selected chiral markers was compared to those previously reported in the literature for samples of the same fruit origin, namely, fresh fruits, juices, or essential oils, when available. Only five chiral markers could be selected (α-pinene, β-pinene, limonene, α-terpineol, and α-ionone) for noncitrus fibres, due to the low abundance of volatile compounds, as reported in Section 3.3. On the other hand, for citrus samples, chiral marker selection was limited by the presence of coelution. The diEt-β-CD and Pentyl-β-CD columns were used to achieve reliable separation of a higher number of compounds. Moreover, pure standard mixtures of racemic terpenes were injected under the same Es-GC conditions to facilitate enantiomer identification. Es-GC Analysis of Selected Chiral Markers in Noncitrus Fruit and Carrot Fibres. Very few chiral compounds were analysed in the noncitrus fibres. However, the ER variability of chiral compounds among samples of each fruit was low (Table 3). A high ER was measured for the R-limonene enantiomer (>99%). α-Pinene and α-ionone in carrot fibre were present with a higher ER in favour of the S-enantiomer, while β-pinene in pear was present in racemic form. α-Terpineol was found in all samples, with a higher abundance of the S-enantiomer, ranging from 58.0 to 74.8%, in all noncitrus samples.
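Enantiomeric ratios of this kind are computed directly from the two enantiomer peak areas in the Es-GC-FID trace; a minimal sketch follows, with made-up peak areas used purely for illustration.

def enantiomeric_ratio(area_first, area_second):
    # Percentage of each enantiomer from its Es-GC-FID peak area.
    total = area_first + area_second
    return 100 * area_first / total, 100 * area_second / total

# Illustrative areas for (S)- and (R)-alpha-terpineol in a noncitrus fibre.
s_pct, r_pct = enantiomeric_ratio(7480.0, 2520.0)
print(f"S: {s_pct:.1f}%  R: {r_pct:.1f}%")   # S: 74.8%  R: 25.2%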
Es-GC Analysis of Selected Chiral Markers in Citrus Fruit Fibre. ERs were calculated for eight chiral markers in lemon fibre (Table 4). The results are, in general, in good agreement with those reported for the enantiomeric composition of essential oils. A clear enantiomeric excess was observed for all the chiral markers except linalool, which was almost racemic. This result is in agreement with the literature reporting that the enantiomeric composition of linalool in lemon essential oils is highly variable, depending on the cultivar and harvest period [34]. The monoterpenes α-pinene, β-pinene, borneol, and α-terpineol presented a higher ratio of the S-enantiomer, while camphene and limonene gave higher ratios of the R-enantiomer. The ERs calculated for these compounds show in all cases the same predominance of one of the enantiomers as reported in the literature [19]. However, the ER of terpinen-4-ol tended to vary as a consequence of the high temperatures applied during processing: the pretreatment of the lemon fibres at high temperatures might explain the lower ER of the R-enantiomer in these fibres when compared with essential oils and juices [35]. ERs were calculated for nine chiral markers in orange and tangerine fibres (Table 5). These results are generally in good agreement with the literature on citrus essential oils [19], often showing a higher ER for one of the enantiomers, as was the case for pinene, camphene, limonene, linalool, and carvone. Pinene, as previously described, presented a high ER of the S-enantiomer in orange and tangerine flesh fibres, while it was racemic in both peel fibres [19]. In agreement with the reported data, the drying process applied to the fibres is expected to have modified the ER of terpinen-4-ol and α-terpineol. A similar effect had already been reported for these monoterpene alcohols when citrus essential oils are obtained through distillation instead of cold pressing [19]. Finally, the ERs of α-terpineol in orange and tangerine peel did not coincide and showed distinct behaviour. This difference was also observed in the ER of α-terpinyl acetate. This observation could be explained by the fact that α-terpinyl acetate forms from α-terpineol via acetylation. On the other hand, for tangerine peel, racemisation was observed for α-terpineol, while α-terpinyl acetate probably kept its original configuration. Figure 1: Schematic of the production process used to obtain the analysed fruit fibres from residues of the juice industry. Table 1: Average relative percentages of volatile contents and their distribution ranges in different production batches (in parentheses) of citrus fibres, as determined by HS-SPME-GC-MS analysis. Table 2: Average relative percentages of volatile compounds present and their distribution ranges in different production batches (in parentheses) of apple, pear, peach, and carrot fibres, as determined by HS-SPME-GC-MS analysis. In both tables, the term "traces" indicates an area percentage < 0.05%. Table 5: Chiral markers, calculated LRI, and corresponding enantiomeric ratios for orange and tangerine fibres. Notes: (1) LRI and enantiomeric ratios calculated using the diEt-β-CD column; (2) LRI and enantiomeric ratios calculated using the Pentyl-β-CD column; (3) X and Y indicate that the absolute configuration of the enantiomers could not be determined.
4,205.6
2017-01-01T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Metrics to Analyze and Improve Diets through Food Systems in Low and Middle Income Countries Taking a food systems approach is a promising strategy for improving diets. Implementing such an approach would require the use of a comprehensive set of metrics to characterize food systems, set meaningful goals, track food system performance, and evaluate the impacts of food system interventions. Food system metrics are also useful to structure debates and communicate to policy makers and the general public. This paper provides an updated analytical framework of food systems and uses this to systematically identify relevant metrics and indicators based on data availability in low and middle income countries. We conclude that public data are relatively well available for food system drivers and outcomes, but not for all of the food system activities. With only minor additional investments, existing surveys could be extended to cover a large part of the required additional data. For some indicators, however, targeted data collection efforts are needed. As the list of indicators partly overlaps with the indicators for the Sustainable Development Goals (SDGs), part of the collected data could serve not only to describe and monitor food systems, but also to track progress towards attaining the SDGs. Introduction Improving diets features high on the global development agenda. A notable share of the world population faces at least one of the three forms of malnutrition: undernutrition, micronutrient malnutrition, or overweight and obesity (e.g. Padilla et al., 2015; IFPRI, 2016). While diets are rapidly changing, they are not necessarily improving (Pingali, 2007; Popkin, 2014). Dietary transitions typically imply increased consumption of animal fats, sugars, and processed foods (Hawkes et al., 2012; Imamura et al., 2015). To stimulate changes towards healthier diets, numerous policies, projects, and programs have been implemented (e.g. Fiorella et al., 2016; Allen and De Brauw, 2018). However, these interventions often focus narrowly on specific consumer groups or foods and rarely take a whole-diet approach. Recently, a growing literature has emphasized the importance of approaching diet improvement from a food systems perspective (e.g. Miller and Welch, 2013; Sundaram, 2014; Allen and Prosperi, 2016; Gustafson et al., 2016; Caron et al., 2018). Food systems shape diets and are characterized by multiple interactions, tradeoffs and feedback mechanisms. For example, food systems have been shown to put stress on the environment and its natural resource base by degrading soils, polluting and exhausting fresh water supplies, encroaching on forests, depleting wild fish stocks and reducing biodiversity (FAO, 2013; Prosperi et al., 2014; Westhoek et al., 2016). Consequently, dietary challenges will potentially be best addressed using analytical methods that aim at understanding complex systems (Popkin, 2014). In particular, adopting a food systems approach to diet improvement would facilitate the identification of leverage points for systemic changes, accounting for the full range of interactions, tradeoffs, and system dynamics (Ericksen, 2008; Foran et al., 2014; Dentoni et al., 2017). Implementing such a food systems approach, however, would require the use of a comprehensive set of metrics. In the context of this paper, a food system metric is conceptualized as a system of relevant indicators that provides a tool for measurement, comparison or tracking of system performance (Padilla et al., 2015).
Food systems metrics are important to describe the current state of food systems, facilitate quantifying the relationships needed for exploring causal mechanisms, set baselines against which to measure progress on key goals, evaluate impacts of system transitions and proposed changes, gauge the efficacy of interventions, and ultimately facilitate the scaling up of successful interventions (Global Panel, 2015; McDermott et al., 2015; IFPRI, 2016). Food system metrics and indicators are also useful to structure high-level debates and communicate the complexity of the system, as well as data, from science to policy makers or the general public (Gustafson et al., 2016; Lehtonen et al., 2016). As such, they are instrumental to create awareness and improve transparency, beyond being used as monitoring and evaluation tools. Ideally, a common set of indicators could be used across countries and over time to allow comparison. The aim of this paper is to synthesize existing knowledge and propose an integrated set of food systems metrics that can be readily applied to facilitate food system research and allow cross-case comparison. To ensure that the set of metrics is comprehensive, we base it on a conceptual framework delineating all key food systems components (Ericksen, 2008; Eakin et al., 2016). To allow practical applicability, we limit ourselves to indicators for which data is covered in datasets widely available for low and middle income countries (LMICs): the Living Standards Measurement and related Surveys (LSMS), the Demographic and Health Surveys (DHS), and publicly available sources of aggregate data, like FAOSTAT and the World Development Indicators. The metrics are thus defined, and can work best, at national and subnational scales. To link this discussion to the broader debate on sustainable development, we prioritize indicators present in the indicator compendium of the Sustainable Development Goals (SDGs). A recent analysis shows that at least 12 of the 17 Goals have strong linkages with food systems (Chaudhary et al., 2018), which illustrates the importance of food systems not only for diets but also for other development outcomes. We illustrate our approach for four countries that provide relevant case studies for LMICs: Ethiopia, Bangladesh, Nigeria and Vietnam. These countries were chosen as they provide a wide range of diet and (sub)national food system contexts at various stages of food system transformation, from rural (Ethiopia) through mixed (Nigeria and Bangladesh) to urban (Vietnam), with increasing complexity and urbanization. They are also expected to represent "typical" LMICs in terms of availability (or lack thereof) of national-level data. There have been a few earlier attempts to develop a series of food system metrics. Notably, Gustafson et al. (2016) propose multidimensional indicators to quantitatively characterize the performance of food systems through seven metrics of sustainable nutrition security. Other recent efforts include Acharya et al. (2014) on assessing sustainable nutrition security, Prosperi et al. (2014) on a vulnerability assessment for the food system of the Mediterranean region, FAO's (2016) compendium of indicators for nutrition-sensitive agriculture, and Zurek et al. (2017) on sustainability metrics for the European food system. While each of these studies emphasizes the importance of adopting a holistic perspective, none of them covers all dimensions and domains of the food system; see, however, Béné et al. (2019a).
Many of the existing studies focus on food system outcomes, without always addressing activities and drivers. As a result, a comprehensive set of metrics to measure the entire food system continues to be a critical knowledge gap (Jones et al., 2013; Global Panel, 2015; McDermott et al., 2015). Below, we first present a conceptual framework of the food system. Broadly, we distinguish three domains: food system drivers, activities, and outcomes. For each domain, we then propose a comprehensive set of metrics and underlying indicators. Focusing on the four case study countries, we select indicators based on data availability and presence in the SDG indicator compendium. We assess for which metrics data is covered in the relevant datasets and for which metrics additional data collection investments need to be made. We conclude with a discussion on the potential use of the data, the data gaps identified and a cost-effective, high-return strategy for filling those gaps. A conceptual framework of the food system Before considering how food systems can be measured, a clear understanding of the concept of a "food system" is essential. Multiple perspectives are found in the literature. Nearly all contain some notion of a "food supply chain," highlighting a series of stages through which food materials are turned into final food products (e.g., Sobal et al., 1998; Grant, 2015), but they are characterized by different levels of recognition of the importance of feedback loops capturing the circular (rather than linear) nature of food systems. Based on elaborated discussions in the context of the CGIAR research Flagship "Food Systems for Healthier Diets", this paper defines food systems broadly as the full set of actors, resources, processes and activities that encompass the domains of food production, processing, distribution, consumption and food waste disposal, and the outcomes of these activities, including nutrition and health, socioeconomic wellbeing and environmental quality, as well as the feedbacks, tradeoffs and synergies between these outcomes. Food systems are multifaceted and complex, with sociocultural, economic and environmental aspects (e.g., farming, food access and equity, food sovereignty) (Pinstrup-Andersen et al., 2011). They involve multiple actors (food producers, food-chain actors, and consumers) operating within dynamic and interactive food environments, with many mechanisms at work across multiple scales and levels (Ericksen, 2008; Eakin et al., 2016; Caron et al., 2018; Turner et al., 2018). Food systems analysis must also consider the governance and political economy of food production, processing and consumption, the sustainability of food systems, effects on health and wellbeing, and drivers of system change. A conceptual framework of the food system thus needs to portray the different relationships, interactions, tradeoffs, feedback mechanisms and drivers of system change that ultimately shape system outcomes across several levels. Several food systems frameworks have been proposed in the literature (e.g. Sobal et al., 1998; Burchi et al., 2011). These frameworks tend to present the food system as a series of ordered and linear stages. While easily tractable and insightful, such a linear representation disregards complex interactions, synergies and feedbacks in the system. Importantly, many of these existing frameworks consider health or diet as the sole outcome of the system (e.g., Sobal et al., 1998; Burchi et al., 2011).
Ericksen (2008) proposed a framework that recognizes the complex interactions, synergies and feedbacks, and thus presents a solid foundation for an appropriate conceptualization of food systems. We largely follow this framework, with two amendments (Fig. 1). First, given the focus of this paper on healthier diets, we separate the nutrition element from the broader concept of food security to create a new outcome category. We group the remaining elements of food security under socioeconomic wellbeing. In addition, we include food loss and waste management explicitly as an activity (Jurgilevich et al., 2016; Chaboud and Daviron, 2017). The resulting framework gives a schematic representation of food system activities and their main outcomes, along with system drivers. Food system activities lie at the center and include food production, processing, distribution and marketing, consumption choices, and food loss and waste management. The key food system outcomes relate to nutrition and health, socioeconomic wellbeing, and environmental quality. Finally, food system drivers include the biophysical, socioeconomic and natural factors that shape food system activities and outcomes. Interactions within food systems are complex, and tradeoffs and synergies between various system outcomes must be considered to appropriately reflect the complexity of food systems and the difficulty of navigating between often competing goals (Béné et al., 2019b). For example, avoiding overconsumption and making dietary changes, such as reducing the consumption of animal-sourced food and adopting diets with more plant-based products, serve not just dietary outcomes, but can also lead to improved environmental outcomes and a reduced risk of diet-related non-communicable diseases (Tilman and Clark, 2014; van Dooren et al., 2014; Springmann et al., 2016). Yet, at the same time, recommendations to increase, say, the consumption of fruits and vegetables to promote healthier diets can raise questions about the potential consequences of expanding their production, such as increased irrigation water or farm labor requirements, or increased use of pesticides with negative health outcomes (Wirsenius et al., 2010; Becker, 2017). Equally, reducing the consumption of animal-sourced food may negatively affect the livelihoods of livestock farmers. Further, tradeoffs can also exist between the short-term gains and long-term costs of interventions. While all actors may agree that improved health and resource sustainability are positive long-term outcomes, present-day choices of consumers and businesses are still determined by costs, prices, convenience and cultural and social values, among other factors, all of which may not reflect good health or sustainable production practices (Nesheim et al., 2015). In sum, deciding among various intervention options can be challenging, and decision makers must possess the right tools for analyzing intended and unintended effects, including isolating underlying causes, understanding how to weigh various tradeoffs and taking advantage of synergies (Béné et al., 2019b). Research methodology We employed a series of processes for the selection of food systems metrics and corresponding indicators for the different domains and dimensions of the food system. First, we reviewed the relevant literature and selected and amended a conceptual framework, with the aim of identifying the key components of the food system.
Second, we identified thematic categories of the food systems metrics and comprehensive sets of underlying indicators, following a hierarchical approach based on this framework. We sought to be as comprehensive as possible, capturing the various domains and dimensions of the food system in our metrics. Broadly, the identified metrics were grouped into three general thematic categories: 1) food systems drivers, 2) food systems activities, and 3) food systems outcomes. For each of these metrics categories, we further identified a coherent set of metrics components and corresponding indicators. Third, given the large number of potential indicators, there was a clear need for prioritization and standardization. We use two criteria: 1) data availability in at least one of the four countries; and 2) presence in the compendium of SDG indicators, which has a total of 230 indicators. Table 1 presents the SDGs and shows the overlap with our metrics. Please note that the SDG indicators are often less precise than our indicators, so we used some expert judgment for matching. Data availability is a critical dimension of metric construction if indicators are to be easily adopted. We therefore identified widely available, reliable open data sources to draw upon. At the micro-level, these surveys include the LSMS and the DHS. The LSMS and the DHS are representative at the national level. A wide range of topics is covered in these surveys, including household information, farm-related information, crop and livestock production details, agricultural extension services, household food consumption and income sources, among other variables. The LSMS surveys have good-quality coverage of rural and urban areas, enabling the surveys to provide reliable comparative analysis for rural and urban areas. Micro-level datasets are particularly useful, as they allow determining indicator values for subsystems or subpopulations and following indicator values throughout the system based on geographic location, food system activity, income, sex, age, race, ethnicity, migratory status, or other characteristics. At a higher (macro) level, aggregate data can be found in databases such as FAOSTAT, the World Development Indicators, AQUASTAT and the World Animal Protection dataset. Those aggregate datasets are also useful as they usually include more ready-to-use indicators. Summaries of these different datasets are provided in the Supplementary Materials. Data on regulatory bodies and other relevant institutions are available on their own websites. The four case study countries have a good availability of common data sources. All four have a DHS, and Ethiopia, Nigeria and Vietnam have an LSMS. For Bangladesh, we use the BIHS, which is similar to the LSMS but only covers rural areas. For key socio-economic characteristics of the individual countries, please refer to Table 2. Metrics for food systems drivers Understanding the drivers of food system changes and transformations is important to assess potential policy or technological options to affect food system actors' decisions and behaviors, and ultimately shape the outcomes of food systems (Grant, 2015; McDermott et al., 2015; Béné et al., 2019c). The framework (Fig. 1) broadly describes two types of drivers: biophysical and socioeconomic. Biophysical drivers include environmental changes that affect food system activities and
outcomes through impacts on the quality and availability of natural resources, notably climate change, deforestation, soil erosion, and reduced pollinators and groundwater for irrigation. On the other hand, socioeconomic drivers comprise a wide range of social and institutional factors, including market forces, social organizations, science and technology, policies, and consumer preferences and norms, which also shape the ways in which food systems evolve and operate (Nesheim et al., 2015; Béné et al., 2019c). Combining these concepts, several indicators that capture essential drivers of food systems, mostly available from FAOSTAT, can be identified. Table 3 shows that only a limited number of them are part of the SDG indicators. Data on drivers at the subnational level is not available. Metrics for measuring food system activities Food system activities form the core of the system. The principal activities of the food system encompass food production, processing, distribution and marketing, consumption, and food loss and waste management. Metrics for food system activities largely reflect economic measures and are important to assess the economic performance of the systems. Food production Food production is the principal determinant of food availability in most economies (Ericksen, 2008). It includes all activities involved in the production of raw food materials, harvesting, raising livestock and activities related to fisheries. A (food) production system is characterized by its use of inputs (both natural resources and technologically improved varieties), productivity, and output levels. Relevant food production metrics should measure input utilization, productivity, and output levels in a given geographic area over a specific time period. Table 4 shows that sufficient information is available in the LSMS and FAOSTAT to generate a consistent set of indicators for these variables, though not equally complete for all four countries considered in this study. Notably, data about the use of high-yielding varieties and biofortified seeds is collected in only two out of the four countries. Table 1 shows, on the other hand, that the SDGs make limited reference to food production: indicators are limited to the use of improved seeds and breeds and organic farming. Data on the latter is not routinely collected in the LSMS. Food processing Food processing consists of all processes that modify the original nature, content and/or appearance of raw materials, including transforming them into more elaborate food products, as well as packaging and labelling (Ericksen, 2008; Ingram, 2009).
These processes can substantially alter, reduce, or improve the nutritional value, appearance, storage life, safety, and content of the raw food materials (Miller and Welch, 2013). Food processing can also considerably reduce the time and energy required for home food preparation. The analysis (Table 5) shows that the data currently available mainly concern the size of the sector.

Food distribution and marketing
A well-functioning food distribution and marketing system is an integral component of food system activities, and involves transporting, storing, and marketing food products to consumers (Ingram, 2009). Food distribution involves several facilities and actors, including wholesalers, brokers, food warehouses, logistics, and other distribution channels. The performance of the distribution and marketing sector is strongly determined by transportation and infrastructure availability, storage facilities, cold chains, and the organizational structure of markets. Table 6 shows that, at the present time, data on these processes is limited, leading to a small set of available indicators with partial coverage. The SDG indicators include road density and competitiveness (of the food sector) in the world market, which could serve as proxies, though very rough ones, for transportation and marketing. FAOSTAT and the World Development Indicators both include data on competitiveness. Information about storage capacity is included in the LSMS survey for Bangladesh and, to a limited extent, for Ethiopia and Nigeria (where the data only cover the storage of harvests by farmers), but not for Vietnam. Hence, good metrics for food distribution and marketing are quite incomplete (Table 6). Notes: A = publicly available and NA = currently not available; LSMS: integrated household surveys that form part of the LSMS program or similar surveys. 1 FAOSTAT: data on export of food products and World Bank: data on total export; 2 Data on how much of the harvest is stored, for Ethiopia and Nigeria.

Food consumption
Access to food, commercial advertising, and prices are among the main factors that drive the type and nutrient quality of consumers' food choices (Ericksen, 2008; Nesheim et al., 2015). Food consumption behavior metrics should include indicators capturing capacity, including consumers' economic resources, nutrition knowledge, and consumer advocacy. The analysis (Table 7) shows that while the SDG indicators do not explicitly refer to the capacity to consume, the LSMS covers food expenditures, time availability for food preparation (though not in Nigeria), nutrition knowledge (Bangladesh only), and government food and safety net policies. Similarly, the SDGs do not include consumer advocacy, but data on consumer associations are available on the web. As diets are also key outcomes, we discuss diet indicators in the outcome section.

Food loss and waste management
Food losses and waste play a key role in global food and nutrition security by directly reducing the total food available for consumption and by indirectly increasing natural resource use (FAO, 2013). Food losses can occur all along the value chain, but most food waste is recognized to occur after consumers purchase food (FAO, 2013; Lipinski et al., 2013). Pre-consumer food losses are thought to be more prevalent in the food systems of developing countries (e.g., Delgado et al., 2017), and post-consumer food waste is considered to be higher in high-income countries (FAO, 2013; Gustafson et al., 2016).
However, the extent and impact of food losses in developed countries should not be underestimated. While some have argued that a degree of food loss or waste can be optimal in an economic sense (Bellemare et al., 2017), and recent rigorous evidence suggests that the rates of food loss and waste claimed by FAO (2011) are likely substantially overstated (Delgado et al., 2017; Ambler et al., 2018), food loss and waste management may constitute an important tool for improving food security and decreasing the pressure on food production (IFPRI, 2016; Jurgilevich et al., 2016). Table 8 shows that data availability on these issues is limited, and the food loss and waste indicators from the SDGs and the data available do not overlap. While the SDGs propose to record the total percentage of food lost or wasted and the percentage of food waste that is recycled, data are only available on food losses at the farm level. Note also that the data collection methods used so far are not considered to be of high quality (Delgado et al., 2017). In measuring food loss, Delgado et al. (2017) emphasize the need to identify where food loss occurs along the various stages of the value chain, as well as the causes of food loss. They propose alternative methodologies that aim to reduce food loss measurement error and that account for both quantitative and qualitative losses from the pre-harvest stage through product distribution, as well as discretionary losses in the processing, large distribution, and retail sectors.

Metrics for measuring food system outcomes
An optimized food system would meet consumers' food quality and safety demands, promote the economic and sociocultural wellbeing of communities, reduce the pressure on aquatic and terrestrial ecosystems, and increase ecosystems' capacity to respond to changes and shocks (Hinrichs, 2014; IPES, 2015; Béné et al., 2019b). Impacts of investments and food system interventions can be assessed more adequately when food system outcomes are measured well. In the present case, food system outcomes were classified into three categories: (i) diet, nutrition, and health outcomes, (ii) socioeconomic outcomes, and (iii) environmental outcomes.

Metrics for dietary, nutrition, and health outcomes
Nutrition and health are among the most important food systems outcomes (Burchi et al., 2011; Padilla et al., 2015; Lartey et al., 2018). Health and nutrition status can be assessed using anthropometric measures, such as body-mass index for adults and stunting prevalence among children under 5 years old, or disease-related measures, such as anemia or the prevalence of diet-related non-communicable diseases, all of which are available in the datasets reviewed here. These indicators are listed in Table 9 along with their degree of availability. Interestingly, while anthropometric data are available at the micro-level as well as the macro-level, the availability of disease-related measures is limited. Diets provide a key link between the food system and health and nutrition status, as varied diets are essential to support individual physical and mental health. We focus on diet indicators that quantify two key attributes: diet quality and adequacy. Diet quality describes how well an individual's diet conforms to dietary recommendations, which are often reflected in food-based dietary guidelines (Alkerwi, 2014). The most commonly used assessment tool for dietary quality is the dietary diversity score.
It refers to the consumption of a variety of desirable foods or food groups, reflecting both nutrient sufficiency (when measured at the individual level) and the economic ability to access a variety of foods (when measured at the household level) (FAO, 2010; Global Panel, 2015). Maintaining diet quality involves both enhancing the role of healthy foods, such as fruits and vegetables, and limiting the consumption of unhealthy foods or food groups, such as ultra-processed foods. Diet adequacy refers to the sufficient (not too little but also not too much) intake of energy and essential nutrients needed to fulfill the nutritional requirements for optimal health appropriate to age, sex, disease status, and physical activity (Castro-Quezada et al., 2014). Typically, the requirement for a given nutrient is defined as a lower bound (for healthy nutrients) or an upper bound (for unhealthy nutrients). Diet adequacy is assessed based on the comparison between the (estimated) nutrient requirement and the intake of a certain individual or population (Castro-Quezada et al., 2014).

Table 9 (excerpt): Prevalence of diet-related non-communicable diseases — the ratio of the number of cases of dietary-related non-communicable diseases in the population during a given period of time to the number of persons in that population at the same time. Prevalence of anemia — the percentage of the population or subgroups of the population (e.g., children, women) affected by anemia in a given period of time; sources: DHS, FAOSTAT; available in all four countries; SDG indicator: yes. Body mass index (BMI) — the ratio of weight to height squared, used to define and screen for thinness (<18.5), overweight (>25), and obesity (>30) for women, and according to age for children; sources: LSMS, DHS; available in all four countries; SDG indicator: no. Prevalence of low birth weight — the percentage of newborns that weigh less than 2.5 kg out of the total number of live births in the five (or two) years preceding the point of measurement.

Table 9 shows that the LSMS-type surveys, which typically include a seven-day recall of food consumption, allow the construction of several household-level diet quality and adequacy indicators.
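To make these definitions concrete, the following is a minimal Python sketch of three of the indicators discussed above: a dietary diversity score counted over food groups from a recall, a nutrient adequacy ratio comparing intake with an estimated requirement, and BMI screening with the thresholds listed in Table 9. The twelve food groups and the input layout are illustrative assumptions, since the exact grouping depends on the guideline used (e.g., FAO, 2010).

```python
# Minimal sketch of three diet indicators discussed above. The food-group
# list and input layout are illustrative assumptions; actual groupings
# depend on the guideline used (e.g., FAO, 2010).

FOOD_GROUPS = [
    "cereals", "roots_tubers", "vegetables", "fruits", "meat", "eggs",
    "fish", "legumes", "dairy", "oils_fats", "sugar", "condiments",
]

def dietary_diversity_score(consumed: dict) -> int:
    """Count food groups eaten at least once during the recall period.

    `consumed` maps a food-group name to True/False, e.g. as coded from
    a seven-day household food consumption recall.
    """
    return sum(1 for group in FOOD_GROUPS if consumed.get(group, False))

def nutrient_adequacy_ratio(intake: float, requirement: float) -> float:
    """Adequacy for one healthy nutrient: intake relative to the estimated
    requirement, capped at 1 so a surplus cannot offset other deficits."""
    return min(intake / requirement, 1.0)

def bmi_category(weight_kg: float, height_m: float) -> str:
    """Screen adult nutrition status using the BMI thresholds of Table 9."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "thinness"
    if bmi > 30:
        return "obesity"
    if bmi > 25:
        return "overweight"
    return "normal"

recall = {"cereals": True, "vegetables": True, "oils_fats": True}
print(dietary_diversity_score(recall))                # 3 (of 12 groups)
print(round(nutrient_adequacy_ratio(45.0, 60.0), 2))  # 0.75
print(bmi_category(50.0, 1.70))                       # thinness (BMI ~17.3)
```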
Food safety is another key factor affecting the nutrition and health outcomes of food systems. It includes all hazards and risks that make food consumption harmful or potentially harmful to the health of consumers. The primary focus of food safety efforts is the reduction of health hazards and risks related to microbial and food-borne pathogens (Hoffmann and Harder, 2012). Key indicators are the incidence of food-borne diseases and toxins and access to safe potable water, both available in the LSMS (Table 9), and the presence of national regulatory agencies (available on the internet).

Metrics for socioeconomic outcomes
While dietary outcomes are the focus of this paper, other food system outcomes need due consideration. Food systems are the largest employer in LMICs (Chaudhary et al., 2018) and have the potential to be both economically viable and inclusive. Inclusive food systems could in turn provide sustainable livelihoods in the different sectors of the system, particularly for vulnerable groups like smallholders and women. In theory, food systems can also provide equitable access to food, thus improving global food security. Indicators for socioeconomic outcomes of food systems include measures of the economic and social wellbeing of the various players in food system activities, including considerations of food security, gender equality, child labor, and animal health and welfare. As detailed in Table 10, several indicators for measuring food systems' socioeconomic outcomes are present in the publicly available data.

Notes to Table 10: a) Animal health and welfare legal framework: an indicator of whether a country has put in place the basic legal frameworks needed to protect animal health and welfare; b) Animal Protection Index (API): a ranking of countries based on their commitment to animal protection, which assigns letter grades ranging from a high of "A" to a low of "G" (World Society for the Protection of Animals, 2019b).

Metrics for environmental outcomes
Food systems are also critically linked to the biophysical environment, which is a key source of crucial inputs (land, water, biodiversity, and fossil fuels) and an important recipient of the waste stream and byproducts (Nesheim et al., 2015). Food systems can have significant environmental footprints (IPES, 2016). Major environmental impacts of food systems include water pollution and depletion, soil degradation, desertification, biodiversity loss, and greenhouse gas (GHG) emissions contributing to climate change (Ericksen, 2008; Ingram, 2011; Vermeulen et al., 2012; FAO, 2011; Gustafson et al., 2016; Westhoek et al., 2016). Thus, sustainable food systems would be expected to achieve good nutrition and socioeconomic outcomes while keeping environmental impacts low enough so as not to transgress the planetary boundaries of biophysical processes and further destabilize environmental systems (Steffen et al., 2015; Béné et al., 2019a). Suitable indicators for environmental outcomes of the food system should therefore monitor changes in environmental conditions as reflected in the extent of resource consumption, biodiversity, harmful emissions, and natural resource management. Table 11 shows that aggregate information is available from AQUASTAT and FAOSTAT. Some indicators are also available at the micro-level through LSMS-type surveys. Overall, data availability is good, at least at the national level.

Discussion and conclusion
Using a "food system approach" to find ways to improve diets has become a key area of interest among policy makers and researchers. The approach appears particularly relevant for studying dietary changes, as diets are complex outcomes of food systems, involving feedback loops at multiple levels across multiple scales (Cash et al., 2006). However, for the approach to gain practical relevance, it is important to rely on a more comprehensive set of ready-to-use metrics and indicators that can characterize food systems, set meaningful policy goals, track progress, and evaluate the potential impacts of innovations and interventions. Current metrics mostly focus on food system outcomes and do not cover all components of the food system. The information from a comprehensive set of food system metrics could be used by decision-makers to identify leverage points for interventions and investments at both sub-national and national levels. More concretely, data on food system metrics can serve as input in policy discussions and, together with foresight analysis, feed into participatory scenario analysis to discuss trade-offs and synergies (Rutten et al., 2018). Food system metrics can also help in the identification of important policy knowledge questions. For example, food system indicators were used in an interactive process with key stakeholders in Ethiopia. The aim of this process was to characterize the food system and to develop priority research questions to support operationalizing food systems approaches to improve diets (Gebru et al., 2018).
The resulting discussion paper is currently used by the National Information Platforms for Nutrition (NIPN) to help policymakers develop their knowledge questions further. A subset of indicators could be used to develop food system country profiles, allowing rapid characterization of countries, comparative analyses across countries or even regions, and benchmarking and monitoring at the global level, all of which would be extremely useful for both national and international decision-makers.

Food system metrics can also help further the discussion between experts from diverse disciplines and backgrounds. This discussion is often frustrated by framing within distinctive disciplinary narratives (Béné et al.). Concrete data on a broad set of food system metrics can help identify these narratives and test whether they are supported by, or at least not in conflict with, the data.

In this paper, we propose a comprehensive set of metrics that can enable the measurement of food systems across all relevant domains and dimensions. Building on previous work, we present a conceptual framework of food systems and use it to systematically identify relevant metrics and indicators based on data availability in LMICs. To assure the practical applicability of the metrics and allow for inter-country comparisons, we select indicators present in datasets that are available in LMICs. We prioritize indicators that overlap with the SDG indicators, although the latter do not cover all aspects of the food system. We apply our approach to four countries. This allows us not only to show commonalities in data availability, but also to reveal differences between similar datasets. Key datasets are the LSMS and FAOSTAT and, for health and nutrition indicators, the DHS. While the existence of ready-to-use aggregate indicators from datasets like FAOSTAT is helpful for quick scans of some aspects of food systems, the availability of raw micro-level data from the LSMS has other advantages. The four study countries are large countries with high diversity in terms of agroecology, geography, rural-urban gradients, population, and multiple food sub-systems (Gebru et al., 2018; Raneri et al., 2019). As such, data collected at the national scale tend to mask the spatial and sectoral differences arising from these diversities within the countries. In addition, many indicators, such as food safety, diet diversity, and food losses and waste, can in theory be measured at the local or sub-group scale and traced throughout the food system. Micro-level datasets allow doing so, thus potentially providing information about distributional implications and leverage points for interventions.

Key advantages of the LSMS and DHS are that they are nationally representative and available for multiple countries. The latter, however, does not automatically mean that the metrics are comparable between countries. Statistical capacity is limited in many LMICs, particularly in Africa (Jerven, 2013). The surveys used for data collection in the different countries were designed and conducted by various actors.
For example, the LSMS surveys in Ethiopia and Nigeria were supported by the World Bank. The Bangladesh Integrated Household Survey (BIHS) was conducted by the International Food Policy Research Institute (IFPRI) under the auspices of the Feed the Future (FTF) program. The other national-level surveys were conducted by the statistical authorities or relevant ministries of the respective countries. The surveys were not conducted in standardized and harmonized ways, and there are likely to be inconsistencies in data collection guidelines, which can impair the comparability of the metrics across countries. The datasets across the different countries also vary in terms of quality and their coverage of the rural-urban gradient. For example, the BIHS covers rural areas only, while the other datasets cover both rural and urban areas. In addition, not all selected indicators are available for all countries. Despite these considerations, the metrics based on these datasets remain highly valuable for supporting evidence-based decision making in the food systems of the respective countries.

We find that public data are available on food system drivers and outcomes, and on some of the activities, notably production and consumption. Data on food processing, food distribution and marketing, and food loss and waste appear less complete and thus require additional data collection efforts. With such data limitations, it would be difficult to carry out food system analyses that adequately address the complexity and the trade-offs and synergies of the food system using the metrics currently available. Specifically, there is a risk that food system activities with missing data would largely be ignored in policy analyses and discussions, which could result in missing appropriate solutions or causing unwanted side effects.

Improving the accuracy and usefulness of food systems metrics requires setting widely accepted norms and standards for data collection, collecting data for important metrics that are not often covered in traditional surveys, systematically analysing and synthesizing existing datasets, promoting the principle of open data access, and improving the capacity to analyse and use data at all levels. To increase their relevance for policy and practice, food system metrics should allow disaggregated analyses at multiple levels across different scales, including different social groups, regions, and local levels. Data on seasonal patterns and their impact on food consumption, and on the nutritional intake of macro- and micronutrients across time and space (within and between countries), are likely to be important for policymakers and practitioners. With only minor additional investments, LSMS-type surveys could be extended to cover much of the required data. When individual indicators are not available for all countries, this can easily be solved by including the relevant questions in follow-up surveys for the relevant countries. In future data collection efforts, new modules could be added. In contrast to household surveys under the LSMS program, the community questionnaire of the BIHS covers information on the quality and accessibility of the road network, food warehousing, and cold storage, all of which could be included in LSMS surveys relatively easily. The BIHS survey also includes valuable questions on nutrition knowledge in the household questionnaire. Food processing is a relatively new concern with limited coverage in all current surveys.
The food expenditure sections contain only highly aggregated categories for processed foods, which are less suitable for assessing their contribution to diets and nutrition. Some careful recategorization could solve this problem. The consumption questionnaire could also be extended to include food waste at the consumer level. For some indicators lacking data, such as food losses in the value chain and food distribution indicators with above-local relevance, targeted data collection efforts are needed. As there is overlap with the SDG indicators, part of the collected data could serve a dual purpose: improving the description and monitoring of food systems, and tracking progress towards attaining the SDGs.
8,754.2
2020-08-30T00:00:00.000
[ "Agricultural and Food Sciences", "Environmental Science", "Economics" ]
Analyzing and Experimenting Open Source OCR Engines in RPA with Levenshtein Distance Algorithm
Robotic Process Automation (RPA) is a platform used to automate boring and repetitive computer processes using software bots, so that humans can engage in tasks involving creativity and decision making that cannot be done by robots. Optical Character Recognition (OCR) extracts printed characters from an image and converts them to text. Google Tesseract OCR and Microsoft OCR are the commonly used OCR engines available in UiPath, a tool for Robotic Process Automation. In previous research comparing these two open-source OCR engines, we compared basic factors including speed, hardware requirements, and accuracy; in that case, however, accuracy was calculated manually, which gave results with less precision, as scraped data had to be substituted into formulas by hand. In this research we produce more precise results by applying a string comparison algorithm, the Levenshtein distance algorithm, as deployed in UiPath.

Introduction
Robotics and automation stepped into reality a few years ago and are evolving rapidly around the world, in areas such as industrial automation and space engineering, and even in urban and rural areas. These programs can work seamlessly 24/7 because that is what they are designed to do [1]. We use this technology to take over boring, repetitive tasks and jobs that are risky for humans, and we deploy it daily for such tasks; however, it still depends on humans to engage with it. There are many myths in this century regarding future predictions about these technologies (mainly robots or Artificial Intelligence), namely that they will overrule humans by making their own decisions and doing work that involves their own creativity [7]. But these theories overlook that machines cannot yet stand a chance against human intelligence, because they are still competing with our skills [4]. There are nevertheless many tasks for which humans seek technological help, such as the repetitive tasks mentioned above, to make the results more efficient and more precise [3], and many investors are currently investing billions of dollars in these kinds of technologies, seeking more compound profit in the future.

Microsoft OCR, a built-in OCR engine in Microsoft Windows 10, and Tesseract OCR, an open-source OCR engine developed by Google [2], are the two open-source OCR engines available in UiPath, a tool for Robotic Process Automation. In the previous paper [1], the accuracy of Tesseract OCR and Microsoft OCR was checked using manual methods, which are not precise. We had also used different sets of images for testing the accuracy, and systems with different specifications, which may introduce errors in the measured time taken and accuracy percentage. Hence, to propose a more valid result, we improve our results by using a string comparison algorithm, the Levenshtein algorithm, which calculates the similarity between two input strings and returns the accuracy as a percentage. We also use the same set of images for testing both OCR engines, and we execute the workflows on the same system in order to measure the execution time error-free.
String comparison: the Levenshtein distance algorithm
The operation accepts two strings and returns the percentage of similarity between the two strings, computed using the Levenshtein algorithm, as a System.Single value. The Levenshtein algorithm (also referred to as edit distance) calculates the minimum number of editing operations needed to change one string into another. The dynamic programming approach is the most prevalent way of computing it. A matrix is initialized in which the (m, n) cell holds the Levenshtein distance between the m-character prefix of one term and the n-character prefix of the other. The matrix can be filled from the upper left to the lower right corner. Each horizontal or vertical hop corresponds to an insertion or a deletion, respectively; the cost of each of these operations is typically set at 1. A diagonal jump costs 1 if the two characters in its row and column do not match, and 0 if they do. Each cell always minimizes the cost locally. This way, the number in the lower right corner is the Levenshtein distance between the two terms. This can be used effectively when a string needs to be compared against some sample data. For example, suppose the status of an application needs to be updated on the basis of statements from approvers such as:
"The request is approved"
"Application is approved"
"The request raised is approved yesterday"
"Yesterday's request is approved"
Traditional string comparison methods would not work in such instances, but this operation gives a percentage of similarity. A runnable sketch of this computation is given below.
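To make the computation described above concrete, the following is a minimal Python sketch of the edit-distance matrix (collapsed here to a single row) and a similarity percentage derived from it. Normalizing by the length of the longer string is an assumption for illustration; UiPath's built-in activity may normalize differently.

```python
# Minimal sketch of the Levenshtein (edit) distance and a derived
# similarity percentage. Illustrative re-implementation, not UiPath's
# internal code.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    m, n = len(a), len(b)
    # dp[j] holds the distance between the current prefix of a and the
    # j-character prefix of b (one row of the full matrix).
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1  # diagonal jump cost
            prev, dp[j] = dp[j], min(
                dp[j] + 1,      # deletion (vertical hop)
                dp[j - 1] + 1,  # insertion (horizontal hop)
                prev + cost,    # match or substitution (diagonal jump)
            )
    return dp[n]

def similarity(a: str, b: str) -> float:
    """Similarity as a percentage, normalized by the longer string
    (an illustrative normalization choice)."""
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

if __name__ == "__main__":
    # Prints the percentage similarity between two approver statements.
    print(similarity("The request is approved",
                     "Yesterday's request is approved"))
```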
Methodology
In this research, a string comparison algorithm plays an important role in giving more accurate results than our previous study. The whole sequence of execution is as follows: unzip and feed the data from local machine storage to the workflow's OCR engines (either the Microsoft OCR engine or the Tesseract OCR engine first); redirect the extracted data to the string-algorithm analysing container, as shown in Figure 2.1, where the major part is comparing the extracted data with the original data of the images that were fed to the OCR engines; and, eventually, save the accuracy data to local storage (cloud storage can also be used if the workflow is deployed to the UiPath Orchestrator). The main upgrades from the previous paper [1] are:
- The same set of source data is supplied to both OCR engines, to obtain better comparison results.
- Both OCR workflows are executed on the same hardware, whereas different hardware was used for the previous comparison [1].
- Images with lightened backgrounds are used to extract more data and obtain more precise information.
- Images with fancy or decorative fonts are used to test the algorithm.

What does "container" refer to? Containers or blocks are often used in UiPath Studio to group a set of activities or a program so that they execute in a sequential manner [1]. If a container is set as the top-level node, the activities under that container execute first, and the workflow then moves to the next container, which holds the next set of instructions, ready to run after the previous block.

Architecture and workflow
4.1 Architecture
Some people might be good at programming by nature, but not everyone has to be. There are many people who struggle to grasp the basics of programming, or who simply cannot program. Hence, UiPath Studio offers a no-code environment that can be used with minimal or zero programming background, using visual code blocks [1].

There were several boring and repetitive tasks in Information Technology and Business Process work which may be mundane for many employees [1]. Hence, we can deploy a bot so that humans can engage in other creative activities and in activities that involve human decisions, in order to make the process accurate in a very short time. The primary architecture of the UiPath software consists of three components, UiPath Studio, UiPath Robot, and UiPath Orchestrator, which play a vital role in automating a task [1]. UiPath Studio is a design tool that enables a user to create programs [1]. It has many predefined activities (predefined functions) and repositories [1]. To model the workflow for the automation process, users can drag and drop activities. In simple terms, UiPath Studio is used to model the automation workflow for automating repetitive processes using predefined activities and libraries. UiPath Robot is software that hosts the process built in UiPath Studio and allows us to carry out our projects with or without human monitoring or supervision on any computer. UiPath Orchestrator is a web application that allows us to manage the development and deployment of our resources on our machines. It allows us to launch and schedule our bots on our own or other desktops, control each bot's status, and evaluate the results of its work.

Workflow
This research consists of two workflows, one for Tesseract OCR and another for Microsoft OCR. A folder consisting of 100 images is given as input. An Excel sheet containing the original text data of each image is used for the calculation of the results. The original text data is read as string 1 for the comparison in both workflows; the text data extracted using Tesseract OCR in workflow 1 and Microsoft OCR in workflow 2 is given as string 2 for the comparison in the respective workflows. The data extracted by Tesseract OCR and Microsoft OCR is then written into columns C and D of the Excel sheet. The similarity between strings 1 and 2 (string 1 is the same for both OCRs and string 2 is the extracted text for the respective OCR engine) is calculated, and the percentage is returned in columns E and F, respectively [1]. Similarly, the time taken for the execution is measured in UiPath Studio for both OCRs. Finally, the average accuracy over the 100 samples and the average time taken to extract a single image are calculated for each OCR engine.

Result analysis
a) Mean accuracy: the similarity between the original text and the extracted text is calculated using the Levenshtein distance string comparison in UiPath, and the mean accuracy is then calculated over the determined values.
b) Overall execution time (time taken): the time taken for the 100 images to be extracted, including the time taken for the string comparison, is calculated and tabulated.
c) Mean time taken (per image): the average time taken to extract each picture, calculated as the total time taken divided by the total number of images fed.
Note: all images were in Joint Photographic Experts Group format (.JPG or .JPEG). A sketch of this batch comparison is given below.
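The batch workflow just described can be approximated outside UiPath with the following minimal Python sketch, using the pytesseract binding for the Tesseract engine only (Microsoft OCR has no comparable Python binding). The file names, sheet layout, and the similarity() helper (from the earlier sketch) are illustrative assumptions.

```python
# Illustrative batch comparison pipeline mirroring the workflow above:
# OCR each image, compare it against its ground truth, and report the
# mean accuracy and timing metrics. Assumes a folder of images and a
# CSV with columns image,original_text; similarity() is the Levenshtein
# helper from the earlier sketch.
import csv
import time
from pathlib import Path

from PIL import Image   # pip install pillow
import pytesseract      # pip install pytesseract (needs Tesseract installed)

def run_batch(image_dir: str, truth_csv: str) -> None:
    with open(truth_csv, newline="", encoding="utf-8") as fh:
        rows = list(csv.DictReader(fh))
    scores = []
    start = time.perf_counter()
    for row in rows:
        img = Image.open(Path(image_dir) / row["image"])
        extracted = pytesseract.image_to_string(img)           # string 2
        scores.append(similarity(row["original_text"], extracted))
    elapsed = time.perf_counter() - start
    print(f"mean accuracy : {sum(scores) / len(scores):.2f}%")
    print(f"total time    : {elapsed:.2f}s")
    print(f"mean per image: {elapsed / len(scores):.2f}s")
```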
Decorative font outcomes
For these types of fonts, the OCR engines returned null values or random characters: Microsoft OCR returned null and Tesseract returned "fi'lfll/" for the value "love" in the image above.

Images with grey background
Images with a grey background (attached above) give an accuracy of nearly 80-90% in both Google Tesseract and Microsoft OCR.

Future works
The overall analysis and comparison in this research was done on three major factors, namely velocity, accuracy, and time taken, using the Levenshtein distance algorithm; this research is an improved version of the previous research and gives more precise results than the first one [1]. Some portions of this research can still be upgraded for better purposes, such as moving the storage area from local storage to cloud storage, which would make the data available to worldwide collaborators for further analysis [1]. Future work will therefore be on storing data with cloud support, which could also be used to run the other OCR engines available in the UiPath framework [1].

Conclusion
We put the experiment to an end by making calculations on different factors, such as precision and time taken. In certain cases the results of Microsoft OCR are more reliable compared to the results of Tesseract OCR, but Tesseract OCR also gives better results in certain cases; this provides different accuracy values [1]. However, when considering the time taken to identify the characters in the images, Tesseract was more efficient than Microsoft OCR. These comparisons were performed with the use of a string comparison algorithm.
2,699.2
2021-01-16T00:00:00.000
[ "Computer Science" ]
Modeling Compact Intracloud Discharge (CID) as a Streamer Burst
Narrow Bipolar Pulses are generated by bursts of electrical activity in the cloud and these are referred to as Compact Intracloud Discharges (CID) or Narrow Bipolar Events in the current literature. These discharges usually occur in isolation without much electrical activity before or after the event, but sometimes they are observed to initiate lightning flashes. In this paper, we have studied the features of CIDs assuming that they consist of streamer bursts without any conducting channels. A typical CID may contain about 10^9 streamer heads during the time of its maximum growth. A CID consists of a current front of several nanoseconds' duration that travels forward with the speed of the streamers. The amplitude of this current front increases initially during the streamer growth and decays subsequently as the streamer burst continues to propagate. Depending on the conductivity of the streamer channels, there could be a low-level current flow behind this current front which transports negative charge towards the streamer origin. The features of the current associated with the CID are very different from those of the radiation field that it generates. The duration of the radiation field of a CID is about 10-20 µs, whereas the duration of the propagating current pulse associated with the CID is no more than a few nanoseconds. The peak current of a CID is the result of a multitude of small currents associated with a large number of streamers and, if all the forward moving streamer heads are located on a single horizontal plane, the cumulative current that radiates at its peak value could be about 10^8 A. On the other hand, the current associated with an individual streamer is no more than a few hundreds of mA. However, if the locations of the forward moving streamer heads are spread in the vertical direction, the peak current can be reduced considerably. Moreover, this large current is spread over an area of several tens to several hundreds of square meters. The study shows that the streamer model of the CID can explain the fine structure of the radiation fields present both in the electric field and in the electric field time derivative.

Introduction
Narrow Bipolar Pulses or NBP, a type of radiation field generated by electrical activity in the cloud, were discovered first by LeVine [1]. He found that these radiation fields are associated with very strong bursts of HF (High Frequency) and VHF (Very High Frequency) radiation. Further analyses of these pulses in thunderstorms in Florida were reported in [2-8]. Cooray and Lundquist [9] detected them for the first time in tropical storms in Sri Lanka and, more recently, detailed analyses of these pulses in tropical Sri Lanka and Malaysia were conducted by Sharma et al. [10], Gunasekara et al. [11], and Ahmad et al. [12]. The latitude dependence of NBP was investigated by Ahmad et al. [13]. The typical duration of narrow bipolar pulses lies in the range of 10 to 20 μs. The polarity of the initial half cycle of these pulses can be either positive or negative. The zero crossing time of NBPs lies in the range of 3-10 μs [6,11]. Their initial peaks, when normalized to a common distance, are either comparable to or larger than those of return strokes.
They appear to be rather smooth in field records, but high-resolution records show fine structure superimposed on these waveforms [12]. The presence of the fine structure is apparent when one observes the electric field time derivative of these pulses, which shows a strongly ragged structure that is not present in other high-current events such as return strokes [2,14,15]. Figures 1 and 2 show, respectively, examples of NBPs and their time derivatives measured in the studies conducted by Karunarathne et al. [6] (summarized by Bandara et al. [7]) and Gunasekara et al. [11].

Figure 1. Waveshapes of narrow bipolar pulses measured in the study conducted by Karunarathne et al. [6] and summarized in Bandara et al. [7]. (a) Example of a type A NBP, (b) example of a type B NBP, (c) example of a type C NBP, and (d) example of a type D NBP, as defined in [6,7]; figure obtained from [7].

Figure 2. Two examples of the time derivative of NBPs (Narrow Bipolar Pulses) observed in the study conducted by Gunasekara et al. [11]. Note that the vertical scale is in arbitrary units.

Narrow Bipolar Pulses are generated by bursts of electrical activity in the cloud which are referred to as Compact Intracloud Discharges (CID) or Narrow Bipolar Events in the current literature. These discharges usually occur in isolation without much electrical activity before or after the event, but sometimes they are observed prior to the initiation of lightning flashes. They are abundant in growing thunderstorms and mostly occur before the main electrical activity, i.e., the production of lightning flashes, sets in. They usually take place at high altitudes, at heights around 10 km or more [3,4,16,17]. While CIDs are abundant in tropical thunderstorms [11], experimental observations show that CIDs are rare in Swedish thunderstorms [13].
Interferometric observations show that CIDs involve electrical activity that propagates over a rather short distance, several hundred meters or so, with speeds in the range of 3 × 10^7 to 10^8 m/s, the upper theoretical bound being the speed of light [18]. Gurevich et al. [19] and Gurevich and Zybin [20] suggested the possibility that CIDs are generated by runaway avalanches. Marshall et al. [21] modeled the CID as a high current pulse propagating with speeds of 5 × 10^7 m/s. Cooray et al. [14] modeled the CID as a series of runaway avalanches and explained the main features of the NBPs and the strongly chaotic nature of the time derivatives of these fields. A study conducted by Babich et al. [22] made an attempt to simulate the CID as a relativistic avalanche. Nag and Rakov [5] inferred from the field waveshape of NBPs that they consist of some form of oscillating current 'bouncing' back and forth along the discharge channel. More recently, Rison et al. [23] and Tilles et al. [24] inferred from interferometric observations that CIDs are fast streamer discharges in virgin air which do not produce conducting channels. Recently, several attempts were made to model CIDs as streamer bursts [25-28]. In [25,26], the CID is modeled as an interaction between two (or more) bipolar streamer structures formed in a strong large-scale electric field of a thundercloud, and the features of the electromagnetic emission resulting from this interaction between streamer structures were examined. In the two publications [27,28], the idea of CIDs as a streamer burst was explored to study their physical parameters. In [27], the concept of propagating streamer systems inside the cloud environment, as proposed by Griffith and Phelps [29], was utilized. Assuming that the streamer channel is of zero conductivity, the authors of [27] showed how the streamer system exhibits an initial exponential growth followed by a quadratic steady state. In [28], the radio spectrum of NBPs was investigated by modelling the CID as a burst of streamers. All the streamers in the burst were assumed to be initiated at the source location. Individual streamers were created at different times, and these initiation times follow a certain probability distribution. The current moment of each streamer was assumed to follow a function whose time derivative matches the shape of the NBP. Using these ideas, the authors of [28] managed to obtain a radio spectrum of NBPs which agrees with experimental observations. In the present paper, we will also model and simulate the CID as a streamer burst. However, our study differs in several aspects from the work described in [27,28]. In contrast to the work done by Attanasio et al. [27], we will attempt to connect the growth parameters of the streamer burst to the signature of the NBP and, from that, we will attempt to evaluate the temporal and spatial development of the streamer burst. Moreover, while providing a direct relationship between the streamer burst parameters and the radiation field of NBPs, we will also consider the effect of backward propagating currents along a weakly ionized streamer channel. Furthermore, while in [28] all the streamers in the burst were assumed to originate at the source location, in our study the growth of the streamer burst takes place as a result of streamer branching during the propagation of the burst. In other words, the streamer burst in our study may start with a small number of streamers and subsequently grow due to branching.
We also provide an explanation for the emission of radiation during the propagation of streamers and we connect the growth of the streamer burst to the resulting NBP. In what follows, we will first discuss the features of positive streamers as observed in the laboratory and then proceed to analyze the possible nature of the streamer bursts responsible for NBPs.

Characteristics of Positive Streamers
When the electric field in air increases beyond the threshold field necessary for cumulative ionization, i.e., the breakdown electric field, any free electron located in such an electric field can give rise to an electron avalanche. In electron avalanches, the number of electrons at the avalanche head increases exponentially with distance. At standard atmospheric pressure, when the number of electrons at the avalanche head reaches about 10^8, the avalanche will be transformed into a streamer discharge [30]. The reason for this is the following. When the number of electrons reaches this value, the distortion of the electric field by the space charge located at the head of the avalanche becomes so large that the positive space charge at the avalanche head starts attracting more avalanches towards it, and with the aid of these avalanches the streamer propagates in a background electric field that is lower than the breakdown electric field.

A schematic representation of the propagation of a positive streamer is shown in Figure 3. The high electric field produced by the positive charge at the streamer head attracts secondary avalanches towards it. These avalanches neutralize the positive space charge of the original streamer head, leaving behind an equal amount of positive space charge at a location slightly ahead of the previous head. In this way, the streamer propagates ahead. Thus, the streamer can be visualized in the ideal case as the propagation of a localized positive space charge in the background electric field.

Figure 3. Schematic representation of the propagation of a streamer. In the diagram, the effects of multiple avalanches traveling towards the streamer head are represented by an equivalent avalanche. As the charge on the head of the streamer is neutralized by the incoming avalanche, the streamer extends forward by a length Δx equal to the diameter of the streamer channel. In the diagram, N_s is the number of positive ions at the head of the streamer and R is the radius of the streamer channel. Adapted from [30].

In the laboratory under standard atmospheric pressure, positive streamers were observed to travel at speeds in the range 2 × 10^5 to 5 × 10^6 m/s [31,32]. The background electric field necessary for their propagation at standard atmospheric pressure is estimated to be about 5 × 10^5 V/m [30,33]. The diameter of the streamer channel at atmospheric pressure may lie in the range of hundreds of μm to several mm [31,32]. The streamer diameter is observed to increase with increasing applied voltage or with increasing background electric field. The maximum value of the streamer diameter observed in experiments is about 3 mm [31,32]. Experiments and theory indicate that the background electric field necessary for streamer propagation, the streamer diameter, and the charge at the streamer head depend on the atmospheric pressure in which the streamer is propagating [34,35]. The background electric field necessary for positive streamer propagation decreases with decreasing pressure; at a pressure corresponding to 0.5 bar it may decrease to about 1.5 × 10^5 to 2.0 × 10^5 V/m, and at 0.3 bar it may decrease to about 1.0 × 10^5 V/m [34].
At altitudes between about 10 and 12 km, where the CID takes place, the atmospheric pressure is about 0.25 and 0.19 bar, respectively, and thus the background electric field necessary for stable streamer propagation may decrease further. The similarity laws indicate that the diameter of the streamer increases as d = d_0 p_0/p, where p_0 is the standard atmospheric pressure, d_0 is the corresponding streamer diameter, and d is the diameter at an air pressure equal to p [35]. The experimental data also indicate that the streamer diameter increases almost linearly with the inverse of the atmospheric pressure [31,32]. The charge at the streamer head also increases according to the relation q = q_0 p_0/p, where q_0 is the charge at the streamer head under standard atmospheric pressure p_0 and q is the streamer head charge at atmospheric pressure p [35]. The similarity laws indicate that the electron drift velocity does not change with pressure [35]. Laboratory experiments show that the streamer speed increases almost with the square of the diameter of the streamer head [31,32]. The reason for this is the increase in the active region of the streamer channel with increasing diameter. Since the active region of the streamer increases with decreasing pressure, it is possible that streamer speeds also increase with decreasing pressure. In general, streamers branch frequently, and the ratio of the branching length, i.e., the length a streamer head travels before it branches, to the diameter is about 10 [31,32]. A short numerical illustration of these scaling laws is given below.
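As a numerical illustration of the similarity laws quoted above, the following minimal Python sketch scales laboratory streamer parameters to the pressures at CID altitudes. The reference values are those quoted in the text, and treating the speed as scaling with the square of the diameter is the extrapolation suggested above, not an established law.

```python
# Minimal sketch: pressure scaling of streamer parameters via the
# similarity laws d = d0*p0/p and q = q0*p0/p quoted in the text.
# Reference (laboratory, p0 = 1 bar) values are taken from the text;
# the speed ~ diameter^2 trend is an extrapolation, not a law.

P0 = 1.0       # standard pressure, bar
D0 = 3e-3      # maximum streamer diameter at p0, m (text: ~3 mm)
Q0 = 1.6e-11   # head charge at p0, C (~1e8 elementary charges)
V0 = 5e6       # upper laboratory streamer speed at p0, m/s

def scaled_parameters(p_bar: float):
    """Return (diameter m, head charge C, indicative speed m/s) at p."""
    scale = P0 / p_bar
    d = D0 * scale             # similarity law for the diameter
    q = Q0 * scale             # similarity law for the head charge
    v = V0 * (d / D0) ** 2     # extrapolated speed ~ diameter squared
    return d, q, v

for p, label in [(0.25, "10 km"), (0.19, "12 km")]:
    d, q, v = scaled_parameters(p)
    print(f"{label}: d = {d * 1e3:.1f} mm, q = {q:.2e} C "
          f"({q / 1.6e-19:.1e} elementary charges), v ~ {v:.1e} m/s")
```

At 10 km this yields a head charge of a few times 10^8 elementary charges and an indicative speed of order 10^7 to 10^8 m/s, consistent with the CID values quoted in the text.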
Following the description of the movement of positive streamers given earlier, the movement of a streamer head is represented by a movement of a spherical charge pocket. This in turn can be represented by a propagating current pulse. If the speed of propagation of the streamers is v s , which we assume to be uniform, and the radius of the spherical charge distribution at the streamer head (note that this is the same as the radius of the streamer channel) is r s , the duration of the current pulse associated with the propagating streamer head, τ p , will be equal to r s /v s . Let us assume that the overall charge distribution in the spherical charge pocket is Gaussian. Then, the current waveform associated with the movement of the streamer head can be represented by In the above equation, q is the total charge of the spherical charge pocket at the streamer head and σ 2 p = τ 2 p /2π. In the simulation, the charge q on the streamer head is assumed to be 1.6 × 10 −10 C, which corresponds to 10 9 elementary charges. At standard atmospheric pressure, the streamer head charge is about 10 8 electrons and, at low pressures corresponding to an altitude of 10 km, a value of Atmosphere 2020, 11, 549 6 of 27 10 9 is reasonable according to similarity laws. In order to shift the Gaussian function to the positive times, we have used t − 4σ p in the exponent with the understanding that the current pulse will go to almost zero for times less than or equal to zero. This is the expression for the streamer current that we have used in our analysis. However, observe that, for speeds of propagation of CIDs in the range of 3 × 10 7 m/s or more, the duration of the current pulse is in the sub-nanosecond domain even for low pressure expanded streamer radii in the cm range [31,32]. For example, if r s equal to 0.01 m, τ p = 0.33 ns. Thus, for calculations pertinent to time resolutions larger than about 1 ns, the current pulse associated with the streamer head can be represented by a Dirac delta function. That is, In the above equation, q is the charge on the streamer head. Atmosphere 2020, 11, 549 6 of 27 In the above equation, q is the total charge of the spherical charge pocket at the streamer head and = /2 . In the simulation, the charge on the streamer head is assumed to be 1.6 × 10 C, which corresponds to 10 elementary charges. At standard atmospheric pressure, the streamer head charge is about 10 electrons and, at low pressures corresponding to an altitude of 10 km, a value of 10 is reasonable according to similarity laws. In order to shift the Gaussian function to the positive times, we have used − 4 in the exponent with the understanding that the current pulse will go to almost zero for times less than or equal to zero. This is the expression for the streamer current that we have used in our analysis. However, observe that, for speeds of propagation of CIDs in the range of 3 × 10 m/s or more, the duration of the current pulse is in the sub-nanosecond domain even for low pressure expanded streamer radii in the cm range [31,32]. For example, if s r equal to 0.01 m, = 0.33 ns.Thus, for calculations pertinent to time resolutions larger than about 1 ns, the current pulse associated with the streamer head can be represented by a Dirac delta function. That is, ( ) = ( ). According to the scenario we use in this simulation, at a branch point, a forward moving streamer head of charge q will be converted to two forward moving streamer heads each carrying a charge equal to . This is depicted in Figure 4. 
According to the scenario we use in this simulation, at a branch point a forward moving streamer head of charge q is converted into two forward moving streamer heads, each carrying a charge equal to q. This is depicted in Figure 4. Since the branching process leads to the creation of a new streamer head with charge q, in order to maintain charge conservation, an equal amount of negative charge is also created at the same location. There are three physical scenarios of interest concerning the fate of this negative charge. The first case is that this charge remains where it is located while the positive charge associated with the streamer head moves forward. The second case is that this charge maintains the same concentration but moves backward towards the origin of the streamer burst with a speed less than or equal to v_s. The third case corresponds to the situation of strong dispersion of the backward current taking place during its propagation, making the duration of the pulse much longer. In the two latter cases, the backward moving electron current is also given by an expression similar to Equation (1). That is,

i(t) = \frac{q}{\sqrt{2\pi}\,\sigma_n} \exp\left[-\frac{(t - 4\sigma_n)^2}{2\sigma_n^2}\right], (3)

where τ_n is the duration of the backward moving current pulse and σ_n² = τ_n²/2π. If the backward current does not disperse as it travels along the weakly conducting channel, τ_n = τ_p and σ_n = σ_p. If the backward current disperses, τ_n > τ_p and σ_n > σ_p. Equations (1)-(3) describe the current elements of the streamer burst. The next step is to write down expressions for the radiation generated during the initiation and branching of the streamer channel.

Radiation Field Generated by the Initiation and Branching of a Streamer Channel

The streamer system radiates every time a streamer is initiated or a new streamer head is created during the branching of a streamer. This is the case because it is only during the initiation of a new streamer head that new charges are accelerated from rest to move (note that in the case of positive charge it is an effective movement) with the speed of the streamer. In the calculations to follow, we assume that during streamer branching both branches will propagate almost in a vertical direction. Without this assumption, one would have to take the branching angle into account in calculating the radiation field. Consider the initiation of a single streamer from the origin of the streamer burst (i.e., at z = 0) at time t = 0. We assume that the streamer moves vertically downwards with uniform speed denoted by v_s.
We assume that the ground is perfectly conducting and that its effects on the radiation field are taken into account by an 'image streamer' in the ground. The relevant geometry is shown in Figure 5. First, the initiation of the streamer at z = 0 gives rise to a vertical radiation field at ground level, which can be described by Equation (4) [36-38]. The subscript 1 in Equation (4) refers to the radiation generated at the initiation of the streamer. Note that we use the sign convention in which the electric field directed out of the ground is considered positive. Assume that the streamer branches when its head is located at a distance z from its origin (see Figure 5). If the negative charge that is created is also assumed to propagate backwards, the radiation field produced during the branching consists of three pulses. The first radiation pulse is created by the forward movement of the positive charge head of the new branch. The second radiation pulse is created by the backward movement of the negative charge associated with the newly created streamer head, and the third radiation pulse is created by the termination of the backward moving negative current pulse at the origin of the streamer burst. The radiation fields generated at the initiation of these current pulses are described mathematically by Equations (5) and (6) [36,37]; the parameters of these equations are defined in Figure 5.
Similarly, the radiation field produced during the termination of the backward moving current pulse at the origin of the streamer burst (i.e., at z = 0) is given by Equation (7). The subscript 2 in Equation (5) refers to the radiation generated by the forward moving positive charge, the subscript 3 in Equation (6) refers to the radiation generated by the backward moving negative charge, and the subscript 4 in Equation (7) refers to the radiation generated when the backward moving negative charge is stopped at the streamer origin. These equations completely describe the radiation fields produced during a single branching event. If the negative charge created during the branching event remains where it was created, the total radiation field generated by the branching process is described by Equation (5) alone. The distant radiation field of the CID or the NBP is created by the cumulative effect of the radiation fields generated by the branching of the forward moving streamers. Now we are ready to write down an expression for the electric field generated by the streamer burst.

Electric Field Generated by the Streamer Burst

First of all, from the analysis presented in the previous section, one can note that the temporal variation of the radiation field of the streamer burst is controlled by the time evolution of the number of streamer heads in the burst. In the growing stage of the streamer burst, the number of streamer heads increases with time due to branching; in the decaying stage, the total number of propagating streamer heads starts to decrease (due to the termination of some of the streamer channels) and eventually reaches zero. Thus, the total number of propagating streamer heads can be represented mathematically by a function that initially increases with time, reaches a peak, and then decays. As we will show later, this function can be inferred directly from the measured radiation field of the CID. For the moment, let us denote the growth and decay of the streamer heads in the streamer burst as a function of time by n(t). The parameter n(t) is the number of streamer heads moving forward at any given time t. The way in which the number of streamer heads varies with z, i.e., n(z), can be obtained by replacing t by z/v_s. Note that n can be expressed equivalently as a function of z (the location of the streamer head) or t (the time taken by the streamer to reach z). Once this function is defined, we will be in a position to write down the expressions for the electric fields produced by the CID. The electric field generated by the streamer burst can be divided into a radiation field, a velocity field, and a static field [36,37]. The radiation field is produced by accelerating charges, for example when currents are initiated or terminated. The velocity field is the modified Coulomb field generated by the moving charges associated with the current, and the static field is the Coulomb field produced by the stationary charges. In the streamer system under consideration, as we have seen in Section 4, radiation fields are produced by the currents associated with the positive charge of the newly created streamer heads, by the currents associated with the negative charge of the newly created streamer heads moving towards the origin of the streamer burst, and by the termination of these backward moving currents at the origin of the streamer burst.
Velocity fields are generated by the current pulses moving forward (positive charge) and backwards (negative charge) inside the streamer burst, and static fields are produced by the charges deposited by the terminated positive heads (positive charge) and by the charges deposited at the origin by the backward moving currents (negative charge). Let us now write down expressions for these field components.

Radiation Field

Let us assign t = 0 to the time at which the streamer burst is initiated at height H above the perfectly conducting ground plane. The relevant geometry is shown in Figure 6. The burst moves towards the ground with uniform speed v_s. As before, we assign z = 0 to the origin of the streamer burst, and the z coordinate increases towards the ground. Let us consider the radiation emitted by the events taking place in the streamer burst as it travels from z to z + dz. The number of new streamer heads generated as the streamer burst travels across this elementary distance is

dn = \frac{dn(z)}{dz}\,dz. (8)

The positive current caused by the newly created streamer heads, say i_dz(t), when the streamer burst travels from z to z + dz is given by Equation (9). Thus, the radiation field generated by the newly created positive current pulses moving towards the ground is given by Equation (10), in which z_max is the location at which the cessation of the streamer burst takes place. Similarly, the radiation field generated by the current associated with the negative charge moving towards the streamer origin is given by Equation (11), and the radiation field generated by the termination of this current at the streamer origin by Equation (12). In Equations (11) and (12), z_m is the distance from the origin at which dn(z)/dz becomes negative. In writing down Equations (11) and (12), we assume that no new streamer heads are created in the region where dn(z)/dz is negative. The total radiation field generated by the streamer burst, Equation (13), is obtained by summing up these different contributions. Once the speed and the time evolution of the streamer heads are specified, one can calculate the resulting radiation field from the equations given above. If the negative charge is assumed to remain localized at the place of creation, the radiation field is given by Equation (10) alone. Note also that we have assumed that all the streamers are vertically oriented.
Velocity Field Generated by the Streamer Burst

The velocity field is created both by the forward and the backward moving current pulses located inside the elementary spatial distance dz. The number of current pulses (or streamer heads) moving forward and located within the spatial distance dz is equal to n(z). The velocity field generated by the forward moving current within dz is given by Equation (14) [36,37], and the total velocity field generated by the forward moving current by Equation (15). Before writing down the expression for the velocity field caused by the backward moving currents, it is necessary to express the backward moving current flowing through dz as a function of time. The geometry necessary for the calculation is given in Figure 7. First, observe that the backward moving currents associated with all the new heads created in the region ahead of the location z will flow through the elementary length dz. Consider an elementary length dξ located at ξ, where ξ > z. The current at z generated by the new heads created at dξ is given by Equation (16). Thus, the total backward current moving across the element dz is given by Equation (17), and from this the velocity field generated by the backward moving electron current follows as Equation (18). The total velocity field is given by the sum of E_1u and E_2u.

Figure 7. Geometry necessary for the calculation of the velocity field generated by the streamer burst.

Static Field Generated by the Streamer Burst

During the movement of the streamer burst, there are two sources that will generate a static field. The first source is the negative charge that accumulates due to the termination of the backward moving negative current at the origin of the streamer burst. If we assume that the negative charge does not travel backwards, one has to modify the static field appropriately, as we will show later.
The other source is the positive charge that accumulates at the forward end of the streamer burst due to the cessation of the propagation of the streamer heads. Now, the backward current reaching the origin of the streamer burst as a function of time is given by Equation (19). From this, one can estimate the magnitude of the negative charge that accumulates at z = 0 (Equation (20)) and, in turn, the electrostatic field produced by the accumulation of negative charge at the origin of the streamer burst (Equation (21); note that this field is directed out of the ground). Now, let us consider the accumulation of positive charge along the path of the streamer burst. First, observe that the accumulation of positive charge takes place in the region where dn_z = [dn(z)/dz]dz is negative, that is, in the region where the termination of the streamer heads is taking place. Consider an element dz at distance z from the origin, with z such that dn_z is negative. The positive charge deposited in the element dz is given by Equation (22), and the electric field produced at ground level by this positive charge by Equation (23). Substituting for dq_z from Equation (22), the total static field produced by the positive charge accumulated along the streamer burst can be written as Equation (24), in which z_m is the distance from the origin of the streamer burst at which dn(z)/dz becomes negative. Thus, the total static field produced by the streamer burst is given by Equation (25). If we assume that the negative charge remains localized at the place of creation, the electric field generated by the negative charge is instead given by Equation (26), and the total field in this case by Equation (27).

Growth Parameters of the Streamer Burst That Best Represent the CID

Equations (8)-(27) represent the components of the electric field produced by the streamer burst. The next task is to estimate the growth parameters, i.e., n(t) (or n(z)), pertinent to the NBP. In principle, these can be estimated directly from the field signature of the NBP. If the time resolution ∆t needed in the calculations is such that ∆t >> τ_p and ∆t >> τ_n, then, for all practical purposes, the forward and backward moving currents can be represented by Dirac delta functions and the growth curve of the streamer burst can be obtained analytically from the field signature of NBPs. If ∆t ≤ τ_p and ∆t ≤ τ_n, the growth curve has to be obtained from the NBP through numerical means. Let us consider the case ∆t >> τ_p and ∆t >> τ_n. First, consider the case of a non-conducting streamer channel. In this case, the radiation field generated by the streamer burst can be described by Equation (28). If the distance is much larger than the length of the streamer burst, this equation reduces to Equation (29), from which one obtains Equation (30). Thus, if the speed and the distance to the location of the NBP are known, the growth curve can be obtained directly from Equation (30). This shows that the growth curve of the streamer burst is directly proportional to the integral of the NBP. If one considers the backward moving current (i.e., a partially conducting streamer channel), the radiation field is given by Equation (31). Following a similar analysis, we obtain Equation (32); note that in its last term the time derivative is evaluated at time t/2. This can be simplified to Equation (33), which can easily be solved numerically using an iterative method to obtain the growth curve. Note that if 2z_m < z_max, then for times greater than 2z_m/v_s the radiation field is given by Equation (29). In the case ∆t << τ_p and ∆t << τ_n, the growth curve can be extracted numerically from Equations (28) and (31).
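The proportionality between the growth curve and the integral of the NBP can be demonstrated numerically. The sketch below is illustrative only: it manufactures a synthetic 'measured' distant field as the time derivative of a known rise-and-decay curve (standing in for Equation (29), whose full prefactor is not reproduced in this extract) and then recovers the shape of n(t) by running integration, as Equation (30) prescribes. With real data, the recorded field and the constants supplied by Equation (30) would be used instead.

```python
import numpy as np

# For a distant observer and a non-conducting streamer channel, the radiation
# field is proportional to dn/dt (Equation (29)), so the running integral of a
# measured NBP is proportional to the growth curve n(t) (Equation (30)).
# The 'measured' field below is synthetic; its time constants are illustrative.

dt = 1e-9                                    # 1 ns sampling interval, s
t  = np.arange(0, 20e-6, dt)

n_true = (1 - np.exp(-t / 0.2e-6))**2 * np.exp(-t / 3e-6)  # assumed n(t) shape
E = np.gradient(n_true, dt)                  # distant field ~ dn/dt
E *= 10.0 / E.max()                          # scale the peak to 10 V/m

n_rec = np.cumsum(E) * dt                    # proportional to n(t)

print(f"recovered growth curve peaks at t = {t[np.argmax(n_rec)] * 1e6:.2f} us")
print(f"true growth curve peaks at      t = {t[np.argmax(n_true)] * 1e6:.2f} us")
```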
In order to obtain a function that can be manipulated mathematically with ease, we have extracted the growth curve from a large number of NBPs recorded in Sri Lanka in the study conducted by Gunasekara et al. [11]. We observed that the overall features of the growth curve as a function of time t can be described by the function given in Equation (34), with values of τ_r and τ_d in the ranges of 0.1-0.4 µs and 2-10 µs, respectively, best representing the measured NBPs. The particular form of the function given in Equation (34) was selected to make sure that its second derivative, which is related to the time derivative of the radiation field, behaves in a physically acceptable manner. In this expression, the value of n_0 decides the amplitude of the NBP at a given distance. The growth curve as a function of z, i.e., n(z), can be obtained from Equation (34) by replacing t by z/v_s. If the time of initiation of the streamer burst is assigned to t = 0, the number of new streamer heads created during the time interval t → t + dt is given by [dn(t)/dt]dt. The NBPs calculated using this growth curve with τ_r = 0.1 µs and τ_d = 4 µs are depicted in Figure 8. In the calculation, we have selected the value of n_0 to make the peak of the radiation field at 100 km equal to 10 V/m, and the charge q on the streamer head is assumed to be 1.6 × 10⁻¹⁰ C, which corresponds, as mentioned earlier, to 10⁹ elementary charges. Note also that the value of n_0 needed to match a given amplitude of the radiation field depends on the value of q selected in the calculation. Furthermore, the speed of the streamer burst is kept constant at 3 × 10⁷ m/s in the calculation. In the study conducted by Gunasekara et al. [11], the risetime, zero-crossing time, and duration of the NBPs were about 0.6-1 µs, 3-4 µs, and 16-20 µs, respectively. The parameters of the calculated NBPs agree reasonably well with these values. Of course, observe that the growth curve can also be extracted directly from the measured NBPs.
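Since the explicit form of Equation (34) is not reproduced in this extract, the sketch below uses a hypothetical stand-in with the stated qualitative behavior: a smooth rise over τ_r, an exponential decay over τ_d, an amplitude fixed by n_0, and smooth first and second derivatives. The value n_0 = 10⁹ is likewise an assumption, chosen only because it is of the order suggested by the front-current estimate given later in the text.

```python
import numpy as np

# Hypothetical stand-in for the growth curve of Equation (34) (the exact
# published form is not reproduced here): a smooth rise with time constant
# tau_r followed by an exponential decay with time constant tau_d.
n0    = 1e9      # amplitude parameter (assumed value, see lead-in)
tau_r = 0.1e-6   # rise time constant, s  (text range: 0.1-0.4 us)
tau_d = 4e-6     # decay time constant, s (text range: 2-10 us)
v_s   = 3e7      # streamer burst speed, m/s

def n_of_t(t):
    """Number of forward propagating streamer heads at time t."""
    return n0 * (1 - np.exp(-t / tau_r))**2 * np.exp(-t / tau_d)

t  = np.arange(0, 20e-6, 1e-9)
n  = n_of_t(t)
dn = np.gradient(n, 1e-9)        # new heads created per second

k = np.argmax(n)
print(f"peak head count {n[k]:.2e} at t = {t[k] * 1e6:.2f} us "
      f"(z = v_s * t = {v_s * t[k]:.0f} m)")
# Past the peak, dn/dt < 0: streamer heads terminate rather than branch.
```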
The Growth Rate of Streamer Heads and the Current

Figure 9 shows the total number of propagating streamer heads and the number of new streamer heads created as a function of z, corresponding to the radiation fields depicted in Figure 8.

Figure 9. (a) Number of forward moving streamer heads present at a given location; (b) number of newly created streamer heads as a function of distance from the origin; (c) peak current (the sum of the currents associated with the forward moving streamer heads) associated with the forward moving front of the streamer burst, corresponding to curve i in (a). Curves i and ii correspond to non-conducting and partially conducting streamer channels.

Observe from these diagrams that the number of streamer heads propagating forward at a given distance from the origin (Figure 9a) increases initially, reaches a peak, and then decreases. Furthermore, in Figure 9b, the number of newly created streamer heads increases initially and then decreases, becoming negative with increasing distance. The meaning of the negative number, as mentioned earlier, is that, instead of being created, more streamer heads are being stopped along the way. Note that, due to the short duration of the current pulse associated with the charge distribution of the streamer head, the current associated with the CID is compressed almost to a very thin region in the longitudinal direction. Thus, the CID appears as a forward moving, very thin current sheet whose amplitude is modulated as it propagates downwards.
Behind this front, negative charges propagate towards the origin of the CID. The current associated with the forward moving front of the streamer burst is shown in Figure 9c. Note that this current is equal to the sum of the currents associated with the forward moving streamer heads: it is obtained by multiplying the number of streamer heads in curve i of Figure 9a by the current associated with a single streamer. If all the forward moving streamer heads were located on a single horizontal plane, the peak current associated with the front could reach a value of about 4.5 × 10⁸ A. In reality, however, there will be a vertical spread in the locations of the forward moving streamer heads, and this will reduce the current associated with the front considerably. For example, if all the forward moving streamer heads are located on a single horizontal plane, the width of the streamer front will be about 1 cm and the duration of the current associated with the streamer front will be about 0.5 ns. If the forward moving streamer heads are dispersed along the vertical direction so that the width of the streamer front is about a meter, the peak current will be reduced by a factor of one hundred. Observe that the current density associated with this current sheet need not be very high, because the streamer front can have a cross section of hundreds of square meters. A rough value of the radial expansion of the streamer burst can be calculated as follows. First, observe that the minimum radial distance between two streamer heads cannot be smaller than the active region of each streamer head. If the distance is smaller, they will start competing for the same electron avalanches, with the consequence that one will grow at the expense of the other. Let us denote the lateral extension (or radius) of this active region by r_a. Thus, at a minimum, each streamer head takes up an area of about πr_a² in a horizontal plane so that it will not interact with other streamers. The area of the cross section of the streamer burst at a given value of z is therefore equal to n(z)πr_a². As n(z) starts decreasing, the area remains at the value corresponding to the maximum of n(z), i.e., n_max πr_a². This is the case because, as the demise of the streamer heads takes place, the remaining streamers will continue along the same direction in which they were traveling at the location where the peak of n(z) was reached. At standard atmospheric pressure, the active region of a streamer head is about 100 µm and, in the low-pressure atmosphere where CIDs usually take place, it may increase to about 1 mm following the similarity laws [30,35]. Using this value as a measure of the minimum separation between the streamer heads and assuming that the cross section of the streamer burst is approximately circular, the calculated lateral expansion (or radius) of the streamer burst as a function of z is shown in Figure 10 for the two cases considered earlier. Note that the expansion of the streamer burst is nearly exponential initially but, as time goes on, the expansion stops and the radius of the streamer burst becomes constant. Note that, in reality, the separation between the streamer heads could be larger than the active region of the streamer heads (and the radius of the active region could also be larger than 1 mm); for this reason, the calculated value has to be taken as a lower bound.
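A minimal sketch of the lower-bound estimate just described, reusing the hypothetical growth curve from the earlier sketch (so the resulting radius is illustrative, not the value plotted in Figure 10): the burst cross section is taken as n(z)πr_a² while n(z) grows, and is frozen at its maximum thereafter.

```python
import numpy as np

# Lower-bound lateral radius of the streamer burst: each head claims an area
# of at least pi*r_a^2, so the cross-sectional area is n(z)*pi*r_a^2 while
# n(z) grows, and stays frozen at its maximum once heads start dying out.
# Growth-curve parameters are the illustrative ones used earlier.
n0, tau_r, tau_d, v_s = 1e9, 0.1e-6, 4e-6, 3e7
r_a = 1e-3                       # assumed active-region radius at CID altitudes, m

z = np.arange(0.0, 2000.0, 1.0)  # distance from the burst origin, m
n = n0 * (1 - np.exp(-z / (v_s * tau_r)))**2 * np.exp(-z / (v_s * tau_d))

area   = np.pi * r_a**2 * np.maximum.accumulate(n)  # frozen past the peak
radius = np.sqrt(area / np.pi)   # equals r_a * sqrt(n) up to the peak

print(f"lower-bound radius at burst end: {radius[-1]:.1f} m")
# Early on n(z) grows roughly exponentially, so the radius grows as the
# square root of an exponential before saturating; compare Figure 10.
```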
Charge Deposition in Space by the Streamer Burst

The positive charge deposited by the streamer burst as a function of the z coordinate is shown in Figure 11 for the two cases considered earlier.
Figure 11a shows the charge distribution for a non-conducting streamer channel, i.e., when the negative charge remains where it is created. In this case, negative charge is deposited close to the streamer origin and positive charge is deposited away from it. Figure 11b shows the charge distribution for a partially conducting streamer channel, where the negative charge is transported to the streamer origin. In this case, only positive charge is deposited along the streamer tracks. The positive charge deposited and the charge moment associated with these two cases are 0.24 C and 74 C·m, and 0.13 C and 46 C·m, respectively. For a given peak value of the NBP, the amount of charge deposited and the charge moment increase as the width of the NBP increases, i.e., with increasing τ_d. For values of τ_d around 10 µs and a CID associated with a 10 V/m NBP at 100 km, the charge deposited and the charge moment reach about 0.5 C and 326 C·m, and 0.45 C and 256 C·m, respectively, for the non-conducting and conducting streamer channels. For a given width of the NBP, these parameters increase linearly with the peak value of the NBP; recall that in our case the peak value of the NBPs at 100 km is 10 V/m. Note also that, for a given set of growth parameters, the charge moment increases with increasing speed. This is the case because charges are displaced over longer lengths as the speed increases.
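The charge bookkeeping for the partially conducting case can be sketched as follows: each terminating head deposits its charge q where it stops (the region where dn/dz < 0), and the charge moment is taken about the burst origin, where the compensating negative charge collects. Because the growth curve is the hypothetical one from the earlier sketches, the printed numbers are illustrative rather than the Figure 11 values.

```python
import numpy as np

# Positive charge deposited along the streamer tracks (partially conducting
# channel): where dn/dz < 0, each terminating head leaves its charge q behind,
# while the matching negative charge accumulates at the origin (z = 0).
q = 1.6e-10                              # charge per streamer head, C
n0, tau_r, tau_d, v_s = 1e9, 0.1e-6, 4e-6, 3e7

dz = 0.5
z  = np.arange(0.0, 3000.0, dz)          # distance from the burst origin, m
n  = n0 * (1 - np.exp(-z / (v_s * tau_r)))**2 * np.exp(-z / (v_s * tau_d))
dn = np.gradient(n, dz)

dep = np.where(dn < 0, -q * dn, 0.0)     # deposited charge density, C/m
Q_pos  = np.sum(dep) * dz                # total deposited positive charge, C
moment = np.sum(z * dep) * dz            # charge moment about the origin, C m

print(f"deposited positive charge: {Q_pos:.2f} C")
print(f"charge moment            : {moment:.0f} C m")
```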
The Effect of Dispersion of the Backward Moving Current

In the analysis presented earlier, we assumed that the negative charges propagate backwards keeping the same concentration as the head of the streamer, i.e., τ_n = τ_p. However, the streamer channel is only weakly conducting, and strong dispersion of the current waves propagating along this channel will take place. Thus, the assumption that the negative charge propagates without dispersion is not physically reasonable. In order to correct for this, we have to assume the time constant of the backward current to be much longer than the value of about 0.5 ns estimated for the forward moving current pulses. The effect of increasing the duration (i.e., τ_n) of the backward moving electron current is shown in Figure 12. Observe how the waveforms are modified by the dispersion of the backward moving current. Depending on the amount of dispersion, which may vary from one burst to another, it can give rise to subsidiary peaks or to oscillations in the tail of the NBPs. As depicted in Figure 1, both of these features have been detected in experimentally observed NBPs. Interestingly, the oscillations in the tail of NBPs were explained as bouncing current waveforms along a conducting channel by Nag and Rakov [5]. Here, we provide an alternative explanation for this feature.

Reasons for the Finer Features of the NBPs

The information gathered on NBPs recently shows that some narrow bipolar pulses are characterized by fine structure, such as several peaks in the rising part, as illustrated by the results obtained by Karunaratne et al. [6] and Leal et al. [8]. As shown in Section 7.3, some of this fine structure could very well be due to the dispersion of the backward moving electron current. In laboratory experiments, if a voltage large enough to initiate streamers is applied to an electrode, one may encounter multiple streamer bursts from the electrode [31-33]. The first burst may reduce the electric field in the vicinity of the electrode but, as the space charge drifts off, another streamer burst can be initiated. The same physical phenomenon may also happen in the case of streamer bursts in the cloud: once initiated, instead of one streamer burst, several streamer bursts could be generated from the same region. Moreover, the initiation of multiple streamer bursts could be enhanced by the negative charge reaching the streamer origin at later times. Thus, the CID may contain several streamer bursts occurring in succession from the same region of origin. The effect of multiple streamer bursts on the radiation fields is shown in Figure 13. In this calculation, we have assumed that the individual streamer bursts are identical in their temporal behavior but displaced in time, and we have simulated only two streamer bursts.
Note that fine structure similar to that observed in experiments (see Figure 1) could very well be produced by two or more streamer bursts originating from the same place.

Figure 13. The effect of multiple streamer bursts arising from the same origin on the NBP. In the simulation, the growth parameters of each streamer burst are assumed identical to those used previously, except that the value of n_0 was selected to make the peak of the NBP equal to 10 V/m at 100 km. (a) Separation between streamer bursts 1.0 µs; (b) separation between streamer bursts 1.5 µs.

The Effect of the Random Nature of the Branching Process

First, observe that the radiation amplitude of the NBP over a small time interval is decided by the number of new heads created during that interval. The new heads are created through the branching process. If we could resolve the experimentally observed growth curve with nanosecond resolution, the number of streamer heads created over a given time interval might not follow a smooth curve such as the one shown in Figure 9a. This is the case because all the streamers do not branch at the same time, and there is some 'randomness' involved in the generation of new streamer heads.
Thus, at very small time intervals, the growth curve may not be as smooth as Equation (34) suggests. Now, consider a time interval ∆t over which the amplitude of the NBP does not change significantly. The amplitude of the NBP at this time is decided by the number of new streamer heads created during this interval. Let us say that the number of new streamer heads needed in this interval to generate the NBP amplitude is N. Let us now divide this interval into a large number m of smaller intervals. The number N is generated by the sum of the new streamer heads created during these smaller intervals. If there were no randomness involved in the creation of streamer heads, each of these smaller intervals, i.e., ∆t/m, would contain N/m new streamer heads. However, given the random nature of streamer branching, it is physically not possible to have an equal number of streamer heads in each smaller interval. The new streamer heads will instead fill these small intervals randomly, creating the total number N in the larger interval. This process can be simulated mathematically by assuming that the branching events are random within the smaller intervals but constrained to produce N new heads in the time interval ∆t. The electromagnetic field at 100 km over a perfectly conducting ground, with and without this randomness, for the case of non-conducting streamer channels is shown in Figure 14a,b. In this example, we have assumed that ∆t is equal to 1 ns, divided into 0.1 ns intervals in the analysis (i.e., m = 10). Note that the NBPs with and without the random branching appear smooth on the given time scale. However, in the example shown in Figure 14b, there is random 'noise' superimposed on the electric field, corresponding to the random streamer branching. As we will show below, this randomness can introduce a significant amount of noise into the time derivative of the waveform; how it appears in measurements depends, however, on the upper cutoff bandwidth of the measuring system. Figure 15b-e show the derivative of the electric field as measured with recording instruments having different risetimes. For comparison purposes, Figure 15a shows the derivative of the electric field without taking the random streamer branching into account.
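A minimal sketch of this constrained randomization, under the same assumptions (∆t = 1 ns, m = 10, and the hypothetical growth curve used earlier): for each coarse interval, the required number of new heads N is scattered over the m sub-intervals with a multinomial draw, which enforces the constraint that the sub-interval counts sum to N. The multinomial choice is one natural realization of the 'random but constrained' rule, not necessarily the one used in the original calculation.

```python
import numpy as np

# Constrained random branching: in each coarse interval Delta_t = 1 ns the
# growth curve demands N new heads; these are scattered at random over m = 10
# sub-intervals of 0.1 ns with a multinomial draw, so they still sum to N.
# Growth-curve parameters are the illustrative ones used earlier.
rng = np.random.default_rng(0)

n0, tau_r, tau_d = 1e9, 0.1e-6, 4e-6
dt, m = 1e-9, 10
t = np.arange(0, 20e-6, dt)
n = n0 * (1 - np.exp(-t / tau_r))**2 * np.exp(-t / tau_d)
dN = np.diff(n)                          # heads created per coarse interval

fine = []
for N in dN:
    if N > 0:                            # creation: randomize over sub-slots
        fine.append(rng.multinomial(int(round(N)), np.full(m, 1.0 / m)))
    else:                                # termination: kept smooth here
        fine.append(np.full(m, N / m))
fine = np.concatenate(fine)              # heads per 0.1 ns sub-interval

print(f"coarse-grid creations: {dN[dN > 0].sum():.3e}")
print(f"fine-grid creations  : {fine[fine > 0].sum():.3e}")
# The field amplitude tracks these counts, so the scatter appears as noise
# riding on dn/dt, most visibly in the field time derivative (Figure 15).
```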
Note how the random nature of the streamer branching becomes apparent in the time derivative of the radiation field. This shows that NBPs may appear smooth in broadband records because of the low time resolution of the measuring system, whereas derivatives measured with high time resolution show a very ragged structure, indicating abundant fine structure in the radiation field. The fine structure in the calculated derivatives is in agreement with the measurements shown in Figure 2. As illustrated here, this feature could be produced by the stochastic nature of the branching process.
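The bandwidth effect can be illustrated with a toy calculation: a smooth NBP-like field derivative plus white 'branching noise' is passed through a first-order low-pass filter whose time constant mimics the instrument risetime (for a single RC stage, the 10-90% risetime is about 2.2τ). The waveform shape, noise level, and filter model are all assumptions made purely for illustration.

```python
import numpy as np

# Toy illustration of instrument bandwidth: a smooth NBP-like field derivative
# plus white 'branching noise' is passed through a first-order (RC) low-pass
# whose time constant mimics the recording system's risetime.
rng = np.random.default_rng(1)
dt = 1e-9
t  = np.arange(0, 20e-6, dt)

E     = 10 * (1 - np.exp(-t / 0.1e-6))**2 * np.exp(-t / 4e-6)  # V/m
dE    = np.gradient(E, dt)                   # smooth derivative, V/m/s
noise = rng.normal(0.0, 2e6, t.size)         # stand-in for branching noise

def low_pass(x, tau):
    """First-order RC response; the 10-90% risetime of the stage is ~2.2*tau."""
    y, a, acc = np.empty_like(x), dt / (tau + dt), 0.0
    for k, xk in enumerate(x):
        acc += a * (xk - acc)
        y[k] = acc
    return y

for rise in (1e-9, 10e-9, 100e-9):           # instrument risetimes
    out = low_pass(dE + noise, rise / 2.2)
    ragged = np.std(out - low_pass(dE, rise / 2.2))
    print(f"risetime {rise * 1e9:5.1f} ns -> residual noise {ragged:.2e} V/m/s")
```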
Figure 16a,b depict the frequency spectrum of the radiation field at 100 km, calculated over a perfectly conducting ground, with and without the stochastic nature of the growth curve. Observe how the stochastic nature of the streamer process gives rise to significant frequency content beyond about 10⁶ Hz.

Figure 16. (a) Frequency spectrum of the radiation field without taking into account the randomness of the streamer breakdown; (b) frequency spectrum of the radiation field including the randomness of the streamer breakdown. The spectra were range normalized so that 0 dB corresponds to 1 V/m/Hz at 100 km.

Field Signature as a Function of the Distance

So far, we have studied the effect of various parameters of the streamer burst on the radiation field generated by CIDs. Not much information is available on the field signatures of CIDs measured close to their point of origin, but references [39,40] provide a few examples. These examples show the presence of a static field in the close signature of the NBP. The polarity of this static field is opposite to that of the initial peak of the radiation field. On the other hand, the results obtained by Leal et al. [8] show that, even up to a distance of about 10 km, the NBPs do not show a significant electrostatic field. Figure 17a depicts the electric field at 10 km for the example whose radiation field is shown in Figure 8b. Note that one cannot discern a significant static field in this example. Figure 17b depicts the same example at a distance of 5 km. Note that there is a recognizable static step in this case. The magnitude of the static field at a given distance depends on the amount of charge displaced in the CID. Since the charge displaced by CIDs increases with the duration of the NBP, one would expect the static field at a given distance to increase as the duration of the CID increases. This can be observed in Figure 17c, where the electric field at 5 km is depicted for a CID with τ_d = 8 µs. Note that the field does not return to the zero level, due to the presence of the static field.
Figure 17. (a) Electric field at 10 km corresponding to the example shown in Figure 8b; (b) electric field at 5 km corresponding to the example shown in Figure 8b; (c) electric field at 5 km when the decay time constant of the growth curve is increased from 4 µs to 8 µs.

Figure 18. The same example as in Figure 17c but with the speed of propagation increased to 6 × 10⁷ m/s. Note that, in this example too, the peak value at 100 km is selected to be 10 V/m. Since the peak of the radiation field increases linearly with the speed of propagation, the value of n_0 had to be reduced by half compared to Figure 17c to keep the peak radiation field at 10 V/m.

Figure 18 shows the same example as Figure 17c but with the speed increased from 3 × 10⁷ m/s to 6 × 10⁷ m/s. Again, the peak amplitude at 100 km is normalized to 10 V/m. Observe also that the magnitude of the opposite overshoot of the close field, which is generated mostly by the velocity field, decreases as the speed of the streamers increases.
The reason for this is the reduction of the velocity field as the streamer speed increases. This is caused by the factor (1 − v_s²/c²) present in the field equations pertinent to the velocity field. Indeed, the velocity field goes to zero as the speed of propagation of the streamers reaches the speed of light. Note that the static field has a polarity opposite to that of the initial radiation field. This feature is in agreement with the experimental data presented in [39,40].

Figure 18. The same example as in Figure 17c but with the speed of propagation increased to 6 × 10⁷ m/s. Note that, in this example too, the peak value at 100 km is selected to be 10 V/m. Since the peak of the radiation field increases linearly with the speed of propagation, the value of n₀ had to be reduced by half compared to Figure 17c to keep the peak radiation field at 10 V/m.

Discussion

In the results presented here, we have treated the CID as a normal streamer burst and the NBP as the radiation generated by the streamer burst. Let us discuss here the various questions that arise from this assumption.

Streamer Initiation

As outlined previously, streamer initiation is preceded by electron avalanches, which indicates that the electric field has to increase beyond the breakdown electric field over a length of space, called the critical avalanche length, so that the avalanche can be converted into a streamer. The critical avalanche length decreases with increasing electric field. For example, if the field is uniform and just above the breakdown electric field at atmospheric pressure, the critical avalanche length will be about 1-10 cm. Once a streamer is initiated, it will continue to propagate in the background electric field if the latter remains above a certain threshold.
However, how a background electric field large enough to initiate and maintain the propagation of streamers is generated from the cloud particles is a question that has not yet been answered by the atmospheric electricity community. One possibility is the local increase of the electric field by cloud particles [41,42]. Another possibility is the stochastic nature of the air turbulence inside the cloud, which may momentarily compress the cloud particles into a smaller volume, thus increasing the electric field and giving rise to the generation of streamers from a collection of cloud particles [43,44]. The third possibility is the development of regions of high electric fields due to the action of relativistic avalanches [45]. We assume that CIDs are initiated by a process unknown to us, or perhaps by a process similar to the ones described above.

Thermalization

The difference between a CID and the streamer bursts taking place under normal atmospheric conditions is the apparent lack of a hot conducting channel associated with the CID. At atmospheric pressure, charges as small as several microcoulombs flowing through a single channel are capable of heating the channel [30,33]. Of course, a single streamer channel is a cold discharge, and its current is not large enough to heat the streamer channel. However, the accumulated current and charge associated with all the streamers passing through a common stem are large enough to heat the stem and give rise to a hot channel section. Even in a CID, it is doubtful whether a single streamer channel can give rise to a conducting channel section. As in the case of laboratory discharges, the charge and current from a large number of streamers have to pass through a common stem or channel section in order to create a hot channel in a CID. The reason for the apparent absence of a hot channel, if a CID is a pure streamer burst, could be the low air pressure in the region under consideration. Let us consider the physical processes which lead to the heating of the channel. The heating of the channel takes place when the neutral particles and ions gain enough energy so that thermal ionization becomes effective. This process requires effective transfer of energy from electrons to ions and neutrals. Certain conditions have to be satisfied before this energy transfer can take place, and the process through which this effective energy transfer from electrons to neutrals is achieved is called thermalization. Let us consider this process in more detail [46]. In the streamer phase (or cold phase) of the discharge, many free electrons are lost due to attachment to electronegative oxygen. Furthermore, a considerable amount of the energy gained by electrons from the electric field is used in exciting molecular vibrations. Since the electrons can transfer only a small fraction of their energy to neutral atoms during elastic collisions, the electrons have a higher temperature than the neutrals. That is, the gas and the electrons are not in thermal equilibrium. As the gas temperature rises to about 1600-2000 K, rapid detachment of the electrons from negative oxygen ions supplies the discharge with a copious amount of electrons, thus enhancing the ionization. As the temperature rises, the vibration-translation relaxation time decreases and the vibrational energy converts back to translational energy, thus accelerating the heating process. As the ionization process continues, the electron density in the channel continues to increase.
When the electron density increases to about 10¹⁷ cm⁻³, a new process starts in the discharge channel. This is the strong interaction of electrons with positive ions through long-range Coulomb forces. The Coulomb interaction leads to a rapid transfer of the energy of electrons to positive ions, causing the electron temperature to decrease and the temperature of the positive ions to increase. The positive ions, having essentially the same mass as the neutrals, transfer their energy to neutrals very quickly, in a time on the order of 10⁻⁸ s. This process is called thermalization. The transfer of energy from the electrons to the ions and neutrals during the thermalization process results in a rapid heating of the gas. At this stage, thermal ionization sets in, causing a rapid increase in the ionization and the conductivity of the channel. The rapid increase in the conductivity of the channel leads to an increase in the current in the discharge channel and the collapse of the applied voltage, leading to a spark. During thermalization, as the electron temperature decreases, the gas temperature increases, and very quickly all the components of the discharge, namely electrons, ions, and neutrals, will reach the same temperature; the discharge then reaches local thermodynamic equilibrium. As outlined above, thermalization requires electron densities of about 10¹⁷ electrons per cubic centimeter. At atmospheric pressure, there are about 2.68 × 10¹⁹ neutral particles per cubic centimeter and, in order to reach electron densities on the order of 10¹⁷ per cubic centimeter, the level of ionization has to be about 0.01. Immediately after thermalization, the electron density in the discharge channel may reach values of about 10¹⁸ cm⁻³ thanks to the contribution of thermal ionization. This corresponds to an ionization level of about 0.1. At the low air density corresponding to heights of 10 km or more, in order to create the 10¹⁷ cm⁻³ necessary for thermalization, the ionization level of the air has to reach a value close to 0.1. It is doubtful whether such a level of ionization can be reached in the discharge channel purely through electron collisions, without the support of thermal ionization. This fact may be the reason for the absence of hot channels in CIDs.
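A quick back-of-the-envelope check of these ionization levels is sketched below in Python. The sea-level neutral density is the standard Loschmidt value rather than a quantity taken from the present model; the text's round levels of about 0.01 and 0.1 follow when the neutral density is rounded to roughly 10¹⁹ cm⁻³.

n_e_therm = 1e17      # electrons/cm^3 required for thermalization
n_e_after = 1e18      # electrons/cm^3 just after thermalization
n_neutral = 2.68e19   # neutrals/cm^3 at sea level (Loschmidt number)

print(n_e_therm / n_neutral)   # ~0.004, i.e., the ~0.01 level quoted above
print(n_e_after / n_neutral)   # ~0.04,  i.e., the ~0.1 level quoted above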
Another reason that may contribute to the lack of a thermal channel in CIDs could be the following. The charge transported by CIDs lies in the range of 0.5 to 1 C. It is possible that the region of this charge transport is distributed over a large cloud volume, decreasing the possibility of channel thermalization. Thus, the higher the altitude of the origin of CIDs, the more difficult the creation of a thermalized channel by the streamer burst would be. However, if a CID-like discharge takes place at lower altitudes where the air density is high, it may directly lead to a thermalized channel and, instead of giving rise to a CID, it may indeed give rise to a hot channel leading to a lightning flash. This is the case because, at higher pressures, the higher particle density could aid in generating the electron densities on the order of 10¹⁷ cm⁻³ necessary for the thermalization and subsequent heating of the channel. Moreover, at lower altitudes, the spatial extent of the source region generating the streamer bursts would be restricted by this thermalization process. This is the case because, when the stem of streamer channels first becomes thermalized, it would start growing rapidly due to the field enhancement at its tip while transporting opposite charge rapidly into the source region. This will reduce the electric field in the source region and clamp down on the growth of streamers from the other regions. This is exactly what is observed in the laboratory: even though a large number of streamer bursts may start from the electrode, only one is thermalized and gives rise to a leader discharge while the others die down. Thus, a CID-type streamer burst taking place at higher air densities may directly give rise to a leader and dampen the growth of other streamers from the source region, leading to a lightning discharge. The streamer burst associated with the initiation of the leader may then contain much less charge, and its electric field signature would be weak in comparison to an NBP. These could be the reasons why NBPs are rarely observed in Sweden, where the charge centers are located at low altitudes, but are a very frequent feature in tropical thunderstorms. Since the thermalization of air becomes more probable during streamer bursts in regions of high pressure, a streamer burst that originates in Swedish thunderclouds may end up more often as a lightning event than as a CID. Furthermore, electric fields in thunderclouds have been estimated to be of the order of 100 kV/m. The background electric field necessary for streamer propagation at 8-10 km height is close to or less than this value. Thus, if streamers are initiated by local enhancement of the electric field, they will be able to propagate long distances in the background electric field. In regions of high air density, however, one needs a higher electric field for streamer propagation, and the chances that streamer bursts can propagate long distances become smaller. This again could be a possible explanation for the lack of CIDs in Swedish thunderstorms. Of course, in between these two extreme situations, i.e., streamer bursts leading either to lightning flashes (at low altitudes) or to CIDs (at high altitudes), there could be cases where a CID occasionally gives rise to a lightning discharge (at mid altitudes) if the charge associated with the CID is large enough. Moreover, it is also possible that the CIDs are generated during the charging stage of the storms and, even if a thermal channel is produced, the background conditions may not be suitable for the initiation of lightning flashes. It is important to point out that a single streamer is a cold discharge, and it is not capable of giving rise to a hot channel. In laboratory discharges, the heating takes place at a common streamer stem through which the accumulated current and charge from a large number of streamers pass. The situation could be the same in the streamer discharges that develop inside clouds: whenever hot channels are generated inside clouds by streamer systems, this is probably achieved at the common streamer stems.

Streamer Speed

At standard atmospheric pressure, streamers propagate at maximum speeds of about 5 × 10⁶ m/s. Not much information is available on the speed of streamers at low pressures. The similarity laws predict that streamer speeds should not depend on the atmospheric pressure. On the other hand, the cloud environment contains a large number of cloud particles of various dimensions. The particle density increases with decreasing radii of the particles.
For example, in mature clouds, the number of particles may increase to several hundred per cubic meter. As the streamer front propagates, it generates a high electric field ahead of it. As the number of streamer heads intensifies, the streamer front appears as a charged sheet and the electric field extends over a significant distance in front of the streamer system. This electric field could be large enough to generate streamers from the cloud particles located ahead of the streamer front, and in principle the streamer front could travel at the speed of light, the speed at which the electric field extends in front of the streamer front. However, in reality, the speed will be less than the speed of light. This could be the case because, for efficient propagation, the charge in the negative streamers generated by the particles towards the forward-moving streamer front has to be neutralized quickly by the positive streamers associated with the streamer front. Otherwise, the electric field in front of the streamer system will be reduced. Thus, the speed of the streamer system is somewhat reduced by the finite time necessary for this charge neutralization. This particle-assisted propagation (depicted in Figure 19) could be a reason for the fast propagation of streamers inside the cloud environment.

Figure 19. Schematic representation of the cloud particle assisted streamer propagation. As the streamer front moves, the electric field ahead of the streamer front causes the cloud particles located ahead of the streamer front to generate bi-directional streamers (frame to the left). The positive component of the bidirectional streamers will propagate ahead as the new front of the streamer burst, while the negative counterpart of the bidirectional streamer neutralizes the positive charge of the previous front (frame to the right). The process is repeated continuously, allowing the streamer burst to move rapidly.

A Final Comment

In a recent paper, Cooray et al. [14] showed that the main features of NBPs can be explained by treating CIDs as a series of relativistic avalanches. Here we have shown that the features of NBPs can be explained by treating CIDs as a streamer burst.
Further experimental data are needed to find out whether all the electromagnetic emissions from thunderclouds having features similar to those of NBPs are generated by streamer bursts or whether some of them are generated by relativistic avalanches. Moreover, the possibility that some NBPs could be the result of combined streamer bursts and relativistic avalanches also has to be investigated.

Conclusions

In this paper, we have studied the features of CIDs assuming that they consist of streamer bursts without any conducting channels. A typical CID may contain about 10⁹ streamer heads during the time of its maximum growth. A CID consists of a current front with a duration of several nanoseconds that travels forward with the speed of the streamers. The amplitude of this current front increases initially during the streamer growth and decays subsequently as the streamer burst continues to propagate. Depending on the conductivity of the streamer channels, there could be a low-level current flow behind this current front which transports negative charge towards the streamer origin. The features of the current associated with the CID are very different from those of the radiation field that it generates. The duration of the radiation field of a CID is about 10-20 µs, whereas the duration of the propagating current pulse associated with the CID is no more than a few nanoseconds. The peak current of a CID is the result of a multitude of small currents associated with a large number of streamers. If all the forward-moving streamer heads lie in a single horizontal plane, according to the simulations the cumulative current that radiates could reach a peak value of about 10⁸ A. However, the current associated with an individual streamer is no more than a few hundred mA. If the locations of the forward-moving streamer heads are spread along the vertical direction, the peak current will be reduced considerably. Moreover, this large current is spread over a horizontal area of several tens to several hundreds of square meters. The streamer model of the CID can explain the fine structure present in both the radiated electric field and its time derivative. One important feature of the model is that, once the distant radiation field generated by a CID is measured, the model can be used to obtain information on how the number of streamers in the streamer burst varies in time. This is the case because the temporal variation of the streamer growth curve is proportional to the time integral of the radiation field. If the speed of development of the streamer burst is available, the spatial growth of the streamer burst can also be obtained.
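This inversion can be sketched in a few lines of Python, as shown below. The measured waveform here is synthetic, and the absolute scaling of the recovered growth curve (which depends on range, streamer speed, and the charge per streamer head) is deliberately left out.

import numpy as np

dt = 1e-8
t = np.arange(0.0, 40e-6, dt)
e_rad = np.exp(-t / 4e-6) - np.exp(-t / 1e-6)   # stand-in for a measured field

# Growth curve is proportional to the running time integral of the field.
growth = np.cumsum(e_rad) * dt
growth /= growth.max()   # relative number of streamers versus time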
Author Contributions: V.C. and G.C. conceived the idea and developed the mathematics and the computer software. All authors contributed equally in the analysis and in writing the paper. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.
24,963.4
2020-05-25T00:00:00.000
[ "Physics" ]
Evaluation of the DWT-PCA/SVD Recognition Algorithm on Reconstructed Frontal Face Images

The face is the second most important biometric part of the human body, next to the fingerprint. Recognition of a face image with partial occlusion (a half image) is an intractable exercise, as occlusions affect the performance of the recognition module. To this end, occluded images are sometimes reconstructed or completed with some imputation mechanism before recognition. This study assessed the performance of the principal component analysis and singular value decomposition algorithm with the discrete wavelet transform as the preprocessing mechanism (DWT-PCA/SVD) on a reconstructed face image database. The reconstruction of the half face images was done by leveraging the property of bilateral symmetry of frontal faces. Numerical assessment of the performance of the adopted recognition algorithm gave average recognition rates of 95% and 75% when left and right reconstructed face images were used for recognition, respectively. It was evident from the statistical assessment that the DWT-PCA/SVD algorithm gives a relatively lower average recognition distance for the left reconstructed face images. DWT-PCA/SVD is therefore recommended as a suitable algorithm for recognizing face images under partial occlusion (half face images). The algorithm performs relatively better on left reconstructed face images.

Introduction

The heightened interest of researchers in the subject of face recognition is mainly due to the various application areas of efficient and resilient face recognition modules. These include bankcard identification, security monitoring, access control, and surveillance control systems. All these applications are very vital for effective, efficient communication and interactions among people. According to Galton [1], the traditional way of classifying faces is by collecting facial profiles such as curves, finding their norms, and classifying other profiles by their deviation from the norm. Recent rapid advances in face recognition modules can be attributed to the active development of algorithms, the accessibility of larger face recognition databases, and the statistical or numerical techniques used for evaluating the performance of facial recognition algorithms. According to Turk and Pentland [2], face recognition algorithms' performances are restricted by constrained environments. Some of these constraints are illumination, ageing, occlusion of the face, and varying head tilt and facial expressions. In the case of partially occluded faces, occlusion-insensitive, local matching, and reconstruction techniques have been used for identification [3]. A special case of partially occluded faces occurs where either the left or right face is occluded or segmented, and the remaining half (non-occluded part) is used for recognition. This can be regarded as performing face recognition using half face images [4]. Singh and Nandi [5] assessed the performance of PCA on full, left half, and right half face images. They reported no difference in recognition rates between the left and right half faces but achieved higher accuracy for the left half face. They also found no difference in accuracy rates between the full face and half face images. Their study, however, revealed that the recognition rate for half faces was half that of the full face images. It was evident from their study that the performance of their algorithm was challenged by intense occlusions. Asiedu et al.
[6] evaluated the performance of principal component analysis with singular value decomposition using the fast Fourier transform (FFT-PCA/SVD) as the preprocessing algorithm on the reconstructed face database. They found that the recognition rates of the FFT-PCA/SVD algorithm in the recognition of left and right reconstructed face images were 95% and 90%, respectively. However, the statistical evaluation of the algorithm's performance showed that the average recognition distances for the left and right reconstructed face images are not significantly different. They recommended the FFT-PCA/SVD face recognition algorithm as viable for the recognition of partially occluded face images, although its performance was somewhat hindered by the occlusion constraint. The performance of the DWT-PCA/SVD face recognition algorithm on varying head tilt/pose was evaluated by Asiedu et al. [7]. Their study revealed that the recognition rate of the DWT-PCA/SVD algorithm declines for head poses greater than 20°. The algorithm gave a perfect recognition rate when used to recognize face images captured under angular constraints less than or equal to 20°. They recommended the discrete wavelet transform (DWT) as a viable noise reduction mechanism. It can be inferred from the above literature and current advances that the performances of face recognition algorithms are still hindered by occlusion of the face images. In this study, we leveraged the property of bilateral symmetry of frontal faces to reconstruct half face images (partially occluded faces) and assessed the performance of the DWT-PCA/SVD face recognition algorithm on the reconstructed face image database. Twenty images reconstructed from the half face images (created through vertical segmentation) of the train images were captured into the test image database. These images were used for testing the recognition algorithm.

Material and Methods

The captured images were digitized into gray-scale precision, resized to 200 × 200 dimensions, and the data types changed into double precision for preprocessing. This was done to keep uniformity and allow for easy computations; that is, it made the images (matrices) conformable and enhanced the computations. The subjects in the train image database are shown in Figures 1 and 2. Similarly, following Asiedu et al. [6], the right segmented half images can be reconstructed using the steps described therein. In Figure 3, we present a sample of the original full image, the left and right half images, and their reconstructed images used as the test images in this study.

Research Design. The first stage in the recognition process is to preprocess the train images using the adopted preprocessing mechanisms (mean centering and the discrete wavelet transform (DWT)). After preprocessing, unique face features are extracted using the PCA/SVD algorithm and stored in the system's memory as created knowledge for recognition. The performance of the study algorithm (DWT-PCA/SVD) was assessed on two test image databases: left reconstructed face images (test image database 1) and right reconstructed face images (test image database 2). As stated earlier, samples of these test image databases are shown in Figure 3. The test images are also preprocessed using the mean centering and discrete wavelet transform (DWT) mechanisms. Their unique features are also extracted using PCA/SVD for recognition. These features are then passed to the classifier, where they are matched with the train image features stored in memory.
It is important to note that only one test image database (left reconstructed face images or right reconstructed face images) is used in the face recognition module along with the train image database at a time. Figure 4 shows a design of the study recognition module.

Preprocessing Stage. Preprocessing is an effective method used to suppress unwanted image feature distortion before further processing. This helps to reduce the acquired noise and improve the quality of the image for recognition. Face image preprocessing also makes the estimation process simpler and better conditioned for recognition. In this study, we adopt mean centering and the discrete wavelet transform (DWT) as the preprocessing mechanisms. According to Li et al. [8], the DWT can also be used in image encryption applications, although a watermarking algorithm based on the DWT is not robust to geometric attacks; please refer to Li et al. [8] for more information on a robust double-encrypted watermarking algorithm for image encryption.

Discrete Wavelet Transform (DWT). Basically, the DWT is a technique that transforms image pixels into wavelet coefficients for wavelet-based compression and coding. According to Kociolek et al. [9], the DWT is a linear transformation that operates on a data vector whose length is an integer power of two, transforming it into a numerically different vector of the same length. It provides a principled way of downsizing the range of images and also captures both frequency and location information. In the frequency domain, the distribution of the frequency is transformed at each step. Define L as the low frequency band and H as the high frequency band. In the DWT, the LL subband represents the lower-resolution estimate of the original image, while the mid-frequency and high-frequency detail subbands HL, LH, and HH represent horizontal edge, vertical edge, and diagonal edge details, respectively [7]. Most of the energy is concentrated in the low frequency subband, and that is why the LL subband (the approximation coefficients of the decomposition) is the only one of the four subbands used to produce the next level of decomposition. Moreover, the LL subband contains only the low-frequency components of the image and as such is relatively free of noise. The facial expression features are captured in the HL subband, whereas the face pose features are captured in the LH subband (the vertical features of outline). The HH subband is the most unstable of the subbands because it is easily disturbed by noise, expressions, and poses, whereas the LL subband is the most stable [7]. The DWT refers to a set of transforms, each with a different set of wavelet basis functions. The Haar and the Daubechies sets of wavelets are the two most common wavelets. Other forms of wavelets include the Morlet, Coiflet, Biorthogonal, Mexican Hat, and Symlet wavelets. In this study, we adopt the Haar wavelet transform because it is the simplest wavelet transform and can efficiently support the interest of the study. The Haar wavelet applies a pair of low-pass and high-pass filters to the image decomposition, first on the image columns and then on the image rows independently. It is also worth noting that in the transformation process described above, we rely on the convolution theorem proposed by Wei and Li [10], which states that "a modified ordinary convolution in time domain is equivalent to simple multiplication operations for Offset Linear Canonical Transform (OLCT) and Fourier transform."
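As a concrete illustration of the single-level decomposition just described, the short Python sketch below uses the PyWavelets package (an assumption of this sketch; the study's own implementation is not specified here) on a placeholder 200 × 200 image.

import numpy as np
import pywt

img = np.random.rand(200, 200)               # placeholder face image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')    # approximation + detail subbands

# cA is the stable LL subband used for recognition features; cH, cV, cD
# are the detail subbands (the HL/LH/HH of the text; naming conventions
# for the detail subbands vary between sources).
rec = pywt.idwt2((cA, (cH, cV, cD)), 'haar') # the DWT is invertible
assert np.allclose(rec, img)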
According to Asiedu et al. [7], if we consider a vectorized image X_j of dimension N, where N is even, then the single-level Haar transform decomposes X_j into two signals of length N/2. These are the mean coefficient vector U₁, with components u_k = (x_{2k−1} + x_{2k})/√2, and the detail coefficient vector V₁, with components v_k = (x_{2k−1} − x_{2k})/√2. We concatenate U₁ and V₁ into another N-vector f₁, which can be regarded as a linear matrix transformation of X_j. We then filter the transformed vector f₁ with the Gaussian filter, because Gaussian noise is the default noise acquired due to illumination variations. The DWT is invertible, so that the original signal can be completely recovered from its DWT representation [11]. The transformed vector f₁ is inverted to X_j with components x_{2k−1} = (u_k + v_k)/√2 and x_{2k} = (u_k − v_k)/√2. Figure 5 shows the DWT cycle using the Haar wavelet.

2.5. Implementation of the DWT-PCA/SVD Algorithm. The DWT-PCA/SVD algorithm was adopted as the recognition algorithm for this study. We motivate the mathematical foundation of the algorithm as follows. Define the sample X, whose elements are the vectorized forms of the individual images in the study, as X = (X₁, X₂, ⋯, X_n). Let X̄_j be the mean of the jth vectorized image; then, the mean centering of the jth image is given by w_j = X_j − X̄_j. The dispersion matrix C of the vectorized image matrix is given as C = WWᵀ, where W = (w₁, w₂, ⋯, w_n) is the mean-centered matrix. We now perform a singular value decomposition (SVD) of the dispersion matrix C to obtain the eigenvalues and their corresponding eigenvectors. The SVD yields two orthogonal matrices U and V and a diagonal matrix Σ. The eigenfaces are then given by E_j = u_j, where u_j is the jth column vector of the orthogonal matrix U. The principal components extracted from the training set are given as β_j = Uᵀw_j, with βᵀ = [β₁, β₂, ⋯, β_n]. These are stored in memory as created knowledge for recognition. We now consider test images from the two test image databases (left reconstructed face images and right reconstructed face images) described above (Section 2.3). When an unknown face (test image) is passed through the recognition system, its unique features are extracted as β* = Uᵀw*, where w* is the mean-centered test image and β*ᵀ = [β*₁, β*₂, ⋯, β*_n] is the principal component (extracted features) of the test image. The recognition distances are computed as d_j = ‖β* − β_j‖. The minimum Euclidean distance d_ji = min[d], j = 1, 2, ⋯, n and i = 1, 2, is selected as the recognition distance for the closest match.
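A compact Python sketch of the feature extraction and matching steps above is given below. It assumes the images are already DWT-preprocessed and vectorized into the columns of X, and it obtains the eigenfaces through an SVD of the mean-centered matrix W, whose left singular vectors are the eigenvectors of the dispersion matrix C = WWᵀ.

import numpy as np

def train_features(X):
    # X: (n_pixels, n_images) matrix of vectorized, preprocessed images.
    W = X - X.mean(axis=0, keepdims=True)            # per-image mean centering
    U, _, _ = np.linalg.svd(W, full_matrices=False)  # columns of U: eigenfaces
    B = U.T @ W                                      # training features (betas)
    return U, B

def recognize(U, B, x_test):
    w = x_test - x_test.mean()                   # center the test image
    b = U.T @ w                                  # test features
    d = np.linalg.norm(B - b[:, None], axis=0)   # Euclidean recognition distances
    return int(np.argmin(d)), float(d.min())     # closest match and its distance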
Results and Discussion

It is seen from Figure 6 that the study recognition algorithm (DWT-PCA/SVD) correctly recognized all the left reconstructed images from the MIT database. Also, there were two mismatches (wrong matches) when the right reconstructed face images were used as test images for recognition from the MIT database. Similarly, Figure 7 contains the left and right reconstructed face images (captured in test image database 2), the recognition distances, and their corresponding images in the train image database that were selected as the closest match in the recognition exercise. The images in Figure 7 are from the Japanese Female Facial Expressions (JAFFE) database. It is seen from Figure 7 that the study recognition algorithm (DWT-PCA/SVD) recorded one mismatch when the left reconstructed images from the JAFFE database were used as test images in the recognition module. The algorithm recorded three mismatches when the right reconstructed face images were used as test images for recognition from the JAFFE database. Overall (considering both the MIT and JAFFE databases), the DWT-PCA/SVD algorithm recorded two mismatches when the left reconstructed face images were used for recognition and five mismatches when the right reconstructed face images were used as test images for recognition.

Numerical Assessment of the DWT-PCA/SVD Algorithm. The main numerical performance metrics adopted for the assessment of the study algorithm (DWT-PCA/SVD) were the average recognition rate and the computational time (runtime of the algorithm). According to Asiedu et al. [6], the average recognition rate, R_avg, of an algorithm is given as R_avg = (Σ_{i=1}^{t_run} n_cr^i / (t_run × n_tot)) × 100%, where t_run is the number of times the algorithm is executed (number of experimental runs), n_cr^i is the number of correct matches recorded in the ith run of the algorithm, and n_tot is the number of test images in a single run of the algorithm. The average error rate, E_avg = 100% − R_avg, accounts for the proportion of wrong matches (mismatches) when the study algorithm (DWT-PCA/SVD) is adopted for recognition using the specified test image databases. Now, when the left reconstructed face images are used as test images in the recognition module and the number of times the algorithm is executed is t_run = 10, the total number of correct matches is Σ_{i=1}^{10} n_cr^i = 19. Also, the number of test images in a single run of an experiment is n_tot = 2. Therefore, the average recognition rate of the DWT-PCA/SVD algorithm is R_avg = (19/(10 × 2)) × 100% = 95%. Similarly, when the right reconstructed face images are used as test images in the recognition module with t_run = 10, the total number of correct matches is Σ_{i=1}^{10} n_cr^i = 15. Here again, the number of test images in a single experimental run is n_tot = 2. The average recognition rate of the study algorithm (DWT-PCA/SVD) is then calculated as R_avg = (15/(10 × 2)) × 100% = 75%, and the average error rate is E_avg = 25%. The average computational time of the algorithm was about 2 seconds for the recognition of the 20 face images in a test image database.

Statistical Evaluation of the DWT-PCA/SVD Algorithm. Table 1 contains some summary statistics of the recognition distances shown in Figures 6 and 7. From Table 1, the average recognition distance of the study algorithm when the left reconstructed images are used as test images is 482.0342, with a standard error of 70.5521. Also, the average recognition distance of the study algorithm when the right reconstructed images are used as test images is 529.7775, with a standard error of 87.5666. It can be inferred from Table 1 that the study algorithm (DWT-PCA/SVD) performs better when the left reconstructed images are used as test images, because a relatively lower recognition distance is always preferred as it signifies a closer match. This is consistent with the results from the numerical assessment of the study algorithm. Table 2 shows the sample correlation of 0.869 between the recognition distances for the left reconstructed images and the right reconstructed images and its corresponding p value, p ≤ 0.001. This signifies a strong positive linear relationship between the recognition distance for the left reconstructed images and the recognition distance for the right reconstructed images.

Conclusion and Recommendation

The study used the DWT-PCA/SVD face recognition algorithm for recognition on left and right reconstructed face image databases. The reconstruction of the face images becomes necessary in the presence of partial occlusion.
We leveraged the property of bilateral symmetry of the human face to reconstruct the faces from left and right half images. The results of the recognition exercise revealed that the average recognition rates of the study algorithm (DWT-PCA/SVD) are 95% and 75% when the left and right reconstructed face images are used as test images, respectively. It is therefore evident from the numerical assessment that the DWT-PCA/SVD face recognition algorithm performs relatively better when the left reconstructed images are used as test images for recognition. Evidence from the statistical evaluation also shows that the DWT-PCA/SVD algorithm gives a relatively lower average recognition distance (482.0342, with a standard error of 70.5521) when the left reconstructed face images are used as test images. This makes the left reconstruction of the face images preferable to the right reconstruction. The findings of the study are consistent with those of Asiedu et al. [6] and Singh and Nandi [5]. The DWT-PCA/SVD algorithm is recommended as a suitable algorithm for face image recognition under partial occlusion (half face images). The algorithm has a remarkable performance when used for recognition of left reconstructed face images.

Data Availability

The image data supporting this study are from previously reported studies and datasets, which have been cited. The processed data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare that there is no conflict of interest.
4,197.2
2021-04-07T00:00:00.000
[ "Computer Science" ]
Hybrid image steganography method using LZW and genetic algorithm for hiding confidential data

Digital images are commonly used in steganography due to the popularity of digital image transfer and exchange through the Internet. However, the tradeoff between managing a high capacity of secret data and ensuring high security and quality of the stego image is a major challenge. In this paper, a hybrid steganography method based on the Haar Discrete Wavelet Transform (HDWT), the Lempel Ziv Welch (LZW) algorithm, a Genetic Algorithm (GA), and the Optimal Pixel Adjustment Process (OPAP) is proposed. The cover image is divided into non-overlapping blocks of n × n pixels. Then, the HDWT is used to increase the robustness of the stego image against attacks. In order to increase the capacity for, and security of, the hidden image, the LZW algorithm is applied to the secret message. After that, the GA is employed to embed the encoded and compressed secret message into the cover image coefficients; the GA is used to find the optimal mapping function for each block in the image. Lastly, the OPAP is applied to reduce the error, i.e., the difference between the cover image blocks and the stego image blocks. This step is a further improvement to the stego image quality. The proposed method was evaluated using four standard images as covers and three types of secret messages. The results demonstrate higher visual quality of the stego image with a large size of embedded secret data than what is generated by already-known techniques. The experimental results show that the information-hiding capacity of the proposed method reached 50% with a high PSNR (52.83 dB). Thus, the proposed hybrid image steganography method improves the quality of the stego image over those of the state-of-the-art methods.

Introduction

Digital communication is usually vulnerable to eavesdropping and malicious interference. In this context, susceptibility to security and privacy threats in digital communication is commonly treated with either cryptography or steganography. After the application of cryptography to digital communication, the messages look like a meaningless jumble of characters, which may raise some suspicion. In this case, these messages can be observed by an eavesdropper but cannot be understood. On the other hand, after the application of steganography, the digital messages remain normal-looking messages that are secured while still intact. This makes it difficult for observers to discover them [1]. Various techniques are used to perform steganography. They can be broadly classified into spatial-domain techniques and transform-domain techniques.
The spatial-domain techniques have low robustness against attacks and are not very secure. Hence, any attacks on the stego image may destroy the secret data. Besides, the secret data can be detected and extracted easily from the stego image [2][3][4]. The transform-domain techniques, however, are more secure and robust against the different attacks than the spatial-domain techniques, although the size of secret data that can be hidden in a cover image is less than what the spatial-domain techniques allow for [5,6]. In addition, in both domain categories, the larger the size of the secret data, the lower the quality of the stego image. The Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT) are the transforms most commonly used to convert the image into the frequency domain [6][7][8]. In related work, a watermarking scheme based on Singular Value Decomposition (SVD) and DWT was proposed. Scene change analysis was employed to embed the watermark repeatedly in the singular values of high-order tensors computed from the DWT coefficients of selected frames from each scene. However, the scheme employs embedding strategies similar to the SVD-based scheme; consequently, it is prone to the risk of high false positives. In this research, a method is proposed to embed a large volume of secret data at a higher quality and level of security than the state-of-the-art methods by employing the two-dimensional (2-D) Haar Discrete Wavelet Transform (HDWT), the Lempel Ziv Welch (LZW) algorithm as a compression algorithm, and a Genetic Algorithm (GA) as an optimization algorithm.

The Proposed Method

In this work, the researchers propose a hybrid steganography method that applies a technique similar to that reported in [16], with the slight modification of incorporating the GA to achieve an optimal mapping for each block size of interest, ultimately to improve the quality of the stego image. The proposed method consists of two algorithms: an embedding algorithm and an extraction algorithm. In the embedding algorithm, the cover image is divided into non-overlapping blocks of n × n pixels. Then, each block is decomposed into four sub-bands by using the 2-D HDWT. The secret message is then encoded and compressed using the LZW algorithm. After that, the GA is employed to embed k bits of the encoded secret message in the block coefficients of each cover image block. This is purposely done to identify the optimal mapping function. The OPAP is then applied to each block to reduce the error, that is, the difference between the cover image blocks and the stego image blocks. Thereafter, the inverse HDWT is applied to each block to obtain the overall stego image. In the extraction algorithm, the stego image is divided into non-overlapping blocks of n × n pixels. Then, each block is decomposed using the 2-D HDWT. After that, the encoded secret message is extracted from the block coefficients according to the mapping function. Lastly, the LZW decompression algorithm is applied to obtain (i.e., decode) the secret message. This section describes the proposed method in detail.

Discrete Wavelet Transform (DWT)

The 2-D DWT can be implemented using two digital channel filters and a group of downsamplers, as shown in Fig. 1. The digital filters used are a Low-Pass Filter (LPF) and a High-Pass Filter (HPF). Four different sub-images are obtained by applying the first-level (1-L) 2-D DWT to the image. The most commonly used filter is the Haar filter [21]. To apply the HDWT, the cover image of size M × N pixels should be divided into non-overlapping blocks of n × n pixels. Experiments in the present study revealed that using blocks of 8×8 pixels gives the best results. Thereupon, the researchers used the 8×8 pixel size in the present study, as shown in Fig. 2.

Fig. 2 Cover image blocks of 8×8 pixel size

Each block is decomposed using the 1-L 2-D HDWT into four sub-bands. LL1 is the first sub-band, containing the approximation coefficients. LH1 is the second sub-band, containing the horizontal detail coefficients. Meanwhile, HL1 is the third sub-band, containing the vertical detail coefficients. Lastly, the HH1 sub-band contains the diagonal detail coefficients. Each sub-band has a 4×4 pixel size, as illustrated in Fig. 3.

The Lempel Ziv Welch (LZW) Algorithm

There are several techniques for data compression, such as the Huffman, LZW, and JPEG techniques. The LZW method was used in this work because it is a general compression algorithm that works on almost any type of data. It effectively reduces the size of the secret message. It is a simple and lossless compression algorithm. In addition, it does not need a dictionary table during the decompression process, since the table can be reconstructed [12,22]. In the current study, the secret message is encoded and compressed using the LZW algorithm. This process increases the information-hiding capacity.
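The compression step can be illustrated with the textbook dictionary coder sketched below in Python; this is a generic LZW implementation offered for illustration, not the paper's exact code.

def lzw_compress(data: bytes) -> list:
    table = {bytes([i]): i for i in range(256)}   # initial single-byte table
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                     # keep extending the known phrase
        else:
            out.append(table[w])       # emit the code of the known phrase
            table[wc] = len(table)     # register the new phrase
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "output codes for 24 input bytes")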
The Genetic Algorithm (GA)

The encoded and compressed secret message should be embedded in the cover image coefficients with the lowest possible distortion. To do so, k bits of the encoded and compressed secret message are embedded in the coefficients of each n × n block. The Genetic Algorithm (GA) is then employed to effectively enhance the quality of the stego image by determining the optimal mapping, as illustrated in the following sub-sections.

Chromosome Design

The first step to take when applying the GA is designing its chromosome. Here, each n × n block is encoded into a chromosome, which is a vector of 64 parameters (genes) containing a permutation of 1 to 64, as can be seen in Fig. 4.

Fitness Evaluation

Choosing the fitness function is one of the most important steps in the GA [23]. This algorithm aims at finding the optimal mapping for each n × n block that improves the stego image quality. Within this context, the Peak Signal to Noise Ratio (PSNR) is used as the fitness function, owing to the fact that it is a good measure of stego image quality [24]. The evaluation begins with the initial population and is repeated for each new generation. The PSNR is defined by Eq. 1 [25]: PSNR = 10 log₁₀(255² / MSE), with MSE = (1/(M·N)) Σₓ Σ_y (Cxy − Sxy)², where x and y are the image coordinates, M and N are the dimensions of the image, Cxy is the cover image, and Sxy is the stego image.

Parent Selection

After evaluating the initial solutions in view of the fitness function, some of the solutions are selected based on their fitness values and designated as parents. The fittest solutions are usually selected to produce offspring for the new population. Even the weakest solutions have a chance of being selected, in an effort to avoid local minima [17]. In this approach, the roulette wheel method is used for the selection. This method gives all individuals in the population a chance of being selected; therefore, diversity in the population is preserved.
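A minimal Python version of this fitness evaluation is sketched below; within the GA loop, each candidate mapping would be applied to re-embed the block, after which this function scores the result (array names are illustrative).

import numpy as np

def psnr_fitness(cover, stego):
    # Eq. 1: PSNR in dB between 8-bit cover and stego images.
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")            # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)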
Crossover

The crossover operators are the backbone of the GA and determine its performance. In the usual approach, after selecting the two parents, each gene of the parent sequence between the crossover points is swapped to generate offspring. This method is not useful for the proposed GA approach, because each gene in the chromosome can be used only once. To solve this problem, the permutation crossover method is employed: two crossover points are randomly selected in the parent, and the genes lying between the crossover points are inverted to obtain the offspring.

Mutation

In the mutation process, genes are chosen randomly from the current population and modified. In the proposed GA approach, a gene cannot be changed according to the usual traditional mutation. Instead, mutation by permutation, which swaps two genes in each chromosome, is used. After the selection, crossover, and mutation, new solutions are generated. These steps are then repeated until the pre-specified stopping criteria are met.

The Optimal Pixel Adjustment Process (OPAP)

The main objective of the OPAP is to minimize the embedding error between the cover image and the stego image. This leads to an improvement in the quality of the stego image [26]. Let Pi be the pixel value of the ith pixel in the cover image. Ṕi is the pixel value of the ith pixel in the stego image, obtained from direct replacement of the k LSBs of Pi with k secret message bits. P″i is the pixel value of the ith pixel in the refined stego image, obtained after application of the OPAP. And δi = Ṕi − Pi is the embedding error between Pi and Ṕi according to simple Least-Significant Bit (LSB) embedding. Therefore, the embedding error is bounded by −2^k < δi < 2^k. The OPAP adjusts the bits of Ṕi above the k embedded LSBs to bring the refined pixel value P″i as close to Pi as possible, as described in [26].

Embedding Algorithm

In the proposed method, the embedding process can be described as follows. Steps: 1. Read the cover image and divide it into non-overlapping blocks of n × n pixels. 2. Decompose each block into four sub-bands using the 2-D HDWT. 3. Encode and compress the secret message using the LZW algorithm. 4. Embed k bits of the encoded secret message in each block's coefficients according to the optimal mapping function found by the GA. 5. Apply the OPAP to each block to reduce the embedding error. 6. Apply the inverse HDWT to each block to obtain the stego image.

The Extraction Algorithm

The following steps describe the extraction process. Output: Cover image, Secret message. Steps: 1. Read the stego image. 2. Divide the stego image into non-overlapping blocks of n × n pixels. 3. Decompose each block using the 2-D HDWT. 4. Extract the encoded secret message from the block coefficients according to the mapping function. 5. Apply LZW decompression to obtain the secret message. Fig. 6 represents the flowchart of the extraction algorithm.

Experimental Results and Discussion

The proposed algorithm was tested using four standard gray-scale images as cover images. These images are 'Lena', 'Jet', 'Baboon', and 'Boat', all having the size of 512×512 pixels, as illustrated in Fig. 7. In this study, three types of secret messages were used (Fig. 8) to highlight the novelty of the proposed method with three different data types. The first type is a standard gray-scale image with a resolution of 512×256 pixels, while the second type is a text message and the third type is a random message with a size similar to that of the gray-scale image. The sizes of the cover image and the secret message were 2,097,152 and 1,048,576 bits, respectively. Therefore, the hiding capacity was set at 50% in order to hide the maximum size of the secret data, so as to spotlight the novelty of the proposed method while maintaining a high PSNR value. The parameters used to assess the performance of the stego images were the following.

A. PSNR. The PSNR measures the average cumulative squared error, i.e., the difference between the stego image and the cover image, as defined in Eq. 1.

B. Hidden Capacity (HC). The Hidden Capacity (HC) measures the maximum size of data that can be embedded in a cover image [27]. It is defined as HC = Sb/Cb, where Sb is the number of secret message bits that are hidden and Cb is the number of cover image bits.

C. Histogram analysis. Histogram analysis aims at determining the change in the stego image relative to the cover image.
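Since the step list of the OPAP is not reproduced above, the per-pixel adjustment is sketched below following the standard formulation in the OPAP literature [26]; this is a generic rendering, not necessarily the paper's exact pseudocode.

def opap(p, p_embedded, k):
    # p: cover pixel; p_embedded: pixel after k-LSB substitution.
    delta = p_embedded - p                         # |delta| < 2**k
    if delta > 2 ** (k - 1) and p_embedded >= 2 ** k:
        return p_embedded - 2 ** k                 # flip the (k+1)th bit down
    if delta < -(2 ** (k - 1)) and p_embedded < 256 - 2 ** k:
        return p_embedded + 2 ** k                 # flip the (k+1)th bit up
    return p_embedded                              # error already minimal

The adjustment changes only the (k+1)th bit, so the embedded k LSBs, and hence the hidden data, are preserved while the embedding error is reduced.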
For finding the optimal mapping between the blocks of 4×4 and 8×8 pixel sizes, several experiments were conducted; the results are summarized in Table 1. The results disclose that in all experiments the block size of 8×8 pixels yields a higher enhancement, according to the PSNR and histogram analysis, than the block size of 4×4 pixels. The reason behind this improvement is the size of the GA chromosome, which plays a vital role in identifying the optimal solution and saving time during the process. Moreover, the results of using the GA with LZW lead to the conclusion that combining these two techniques has positive effects on the values of the PSNR and HC, as well as on the outcomes of the histogram analysis. The performance of the proposed algorithm was compared with the performance levels of existing algorithms, as shown in Table 2. The Lena and Baboon cover images were used for comparison among eight existing algorithms, while the Jet and Boat cover images were used for comparison among four related methods, based on the results reported in [4,8,9,11,15,16,18,28,29] and on the PSNR and HC as the performance evaluation criteria (Table 2). The comparison (Table 2) shows that the proposed algorithm has comparable performance to the algorithms presented in [4,8,9,16,18,28-31] in terms of the PSNR and HC. The algorithms introduced in [11,15] have high PSNR values, but their HC values are lower than the HC values produced by the proposed hybrid image steganography algorithm; therefore, their higher PSNR values are expected. However, the best model is the one that simultaneously has the highest PSNR and HC values. The proposed algorithm has an HC value similar to that reported in [11]. Meanwhile, the PSNR values obtained in the current study with the 'Lena' and 'Boat' cover images were 52.59 and 51, respectively. Similarly, the proposed algorithm was tested with the same HC as [15]; the PSNR value obtained with the 'Lena' cover image was 53.42. Accordingly, the results of the experiments support that using the LZW and GA leads to tangible improvements in the PSNR and HC values. Histogram analysis was used to show the imperceptibility of the stego image. Figures 9-12 compare the histograms of the cover images and the stego images. The outcomes of this analysis point out small differences between the histograms of the two sets of images, which means that the data embedded in the stego images are highly imperceptible. In conclusion, the results of this study support that the integration of the GA with LZW has highly positive effects on the PSNR and HC values, as well as on the outcomes of the histogram analysis. In addition, the security of the stego image is enhanced, as the results presented in Table 2 show.

Conclusions

In this paper, a hybrid image steganography algorithm has been proposed to improve the quality of the stego image and raise the capacity for secret data. The method is based on the integration of HDWT, LZW, GA, and OPAP. The HDWT increases the security of the stego image because the secret data are distributed among all pixels. After that, the LZW compression is employed to reduce the size of the secret data; this improves the hiding capacity of the proposed algorithm. Then, the encoded secret data are embedded in each cover image block according to a mapping function that is obtained for each block by using the GA, which enhances the quality of the stego image and improves its security.
The OPAP algorithm is applied to each block to reduce the error, that is, the difference between the stego image blocks and the cover image blocks, which further enhances the quality of the stego image. The proposed hybrid image steganography algorithm was implemented and evaluated on four standard images as cover images and three types of secret messages. The evaluation results suggest that this algorithm improves the hiding capacity for secret data as well as the visual quality and security of the stego image.
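For concreteness, here is a small sketch of the permutation crossover and swap mutation used by the GA described above, under the assumption that a chromosome is a permutation of block-to-segment assignments; the names and the example size are illustrative.

```python
import random

def permutation_crossover(parent: list, rng: random.Random) -> list:
    """Pick two crossover points and invert the genes between them,
    yielding an offspring that is still a valid permutation."""
    i, j = sorted(rng.sample(range(len(parent)), 2))
    return parent[:i] + parent[i:j + 1][::-1] + parent[j + 1:]

def swap_mutation(chrom: list, rng: random.Random) -> list:
    """Permutation mutation: swap two randomly chosen genes."""
    child = chrom[:]
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# Example: a mapping of 8 secret-data segments onto 8 image blocks.
rng = random.Random(42)
parent = list(range(8))
print(permutation_crossover(parent, rng))
print(swap_mutation(parent, rng))
```

Both operators preserve the "each gene used exactly once" property that the standard crossover and mutation would violate.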
4,110.4
2020-10-15T00:00:00.000
[ "Computer Science" ]
Biobjective Optimization and Evaluation for Transit Signal Priority Strategies at Bus Stop-to-Stop Segment This paper proposes a new optimization framework for transit signal priority strategies, in terms of green extension, red truncation, and phase insertion, at the stop-to-stop segment of bus lines. The optimization objective is to minimize both the passenger delay and the deviation from the bus schedule simultaneously. The objective functions are defined with respect to the segment between bus stops, which can include the adjacent signalized intersections and downstream bus stops. The transit priority signal timing is optimized by using a biobjective optimization framework considering both the total delay at a segment and the delay deviation from the arrival schedules at bus stops. The proposed framework is evaluated using a VISSIM model calibrated with field traffic volume and traffic signal data of Caochangmen Boulevard in Nanjing, China. The optimized TSP-based phasing plans result in reduced delay and improved reliability compared with the non-TSP scenario under different traffic flow conditions in the morning peak hour. The evaluation results indicate the promising performance of the proposed optimization framework in reducing passenger delay and improving bus schedule adherence for the urban transit system.

Introduction. Traffic congestion has been a challenging problem in urban areas. Public transportation, with its high passenger capacity per vehicle, has long been considered an effective solution for congestion mitigation. However, on urban arterials, the performance of public transportation such as bus transit is largely affected by the signal timing at intersections and the interaction between transit and other general vehicles. Transit signal priority (TSP) control has been found to be a promising and cost-effective solution to improve efficiency and level of service. Smith was among the first to conduct bus preemption experiments to reduce transit travel time [1]. Since then, many studies have proposed different TSP scenarios and reported benefits in the field. In existing studies, intersection-based state variables are used in transit signal priority control with isolated intersections [2,3], coordinated intersections within an arterial corridor [4,5], and arterial networks [6,7]. Recently, TSP optimization strategies considering bus-stop-based performance metrics have received more and more attention. Ma et al. presented a coordinated transit priority control optimization model to provide effective priority control for transit while minimizing the adverse impact on general traffic movements among coordinated intersections between two successive bus stops [8]. Feng analyzed the joint effects of different kinds of factors and improvement strategies on bus travel reliability at the stop-to-stop segment level using data along an urban arterial corridor in Portland, Oregon, USA [9]. Traffic delay is an important variable in objective functions for TSP optimization; such delay terms include transit vehicle delay [10], general vehicle delay [11], and average passenger delay [12]. A number of studies also utilized transit reliability as the other key evaluation parameter for TSP optimization. Transit reliability indexes used in TSP optimization include traveling punctuality (transit schedule adherence) [13] and regularity (headway maintenance) [14].
Furthermore, the impacts of traffic flow characteristics [15], geometry configurations [16,17], and lane-changing behaviors [18] have also been used in the objective function in TSP optimization. The methodologies for solving TSP optimization models include simulation-based methods [19], genetic algorithms [6,20], artificial neural networks [20], heuristic algorithms [21], and multiobjective optimization [22]. The performance of different TSP strategies has been analyzed and evaluated [4,23-25]. The analysis unit in existing studies is primarily the stop-to-stop segment consisting of the segment and intersection between two bus stops. Such an analysis scheme may ignore the interaction between bus stops and nearby upstream and downstream intersections [7]. Moreover, to evaluate the performance of transit systems, excessive delay and poor schedule adherence should also be integrated into the objective functions. The proposed TSP optimization model in this paper is a biobjective model considering both traffic delay and transit service reliability. A VISSIM-based simulation platform is established for analyzing and evaluating the performance of three optimized TSP scenarios: green extension, red truncation, and phase insertion. The simulation is calibrated by field traffic flow, signal, and transit schedule data on Caochangmen Boulevard in Nanjing, China.

The paper is organized as follows. In Section 2, the biobjective optimization framework is proposed, including the analysis unit, the formulation of the objective functions, and the applied TSP scenarios. Section 3 presents the field-data-based analysis and evaluation of the proposed optimization framework and strategies: the three optimal TSP plans are generated, and their performance is analyzed and simulated by use of a VISSIM-based simulation platform. Conclusions and recommendations are given in the last section.

Methodology. The proposed biobjective optimization framework is formulated with respect to the stop-to-stop segment as depicted in Figure 1. [Figure 1: Visualization of the stop-to-stop segment, spanning from intersection #1 to intersection #k.] Such a segment includes the road segment between two consecutive bus stops and the adjacent signalized intersections in between. This choice of the basic analysis unit allows the simultaneous monitoring of both intersection-based and segment-based performance metrics. Two performance metrics are introduced into the objective function of the optimization model: passenger delay and transit schedule adherence. Passenger delay is a more suitable delay metric than vehicle delay since it takes into account the occupancy difference between the auto and transit modes. The total passenger delay in the stop-to-stop segment, including passenger delay at the intersections and at the bus stop, is considered for TSP optimization. Transit schedule adherence, defined as arriving early or late relative to the scheduled stop arrival time, is used as an indicator of the deterioration of system reliability. Three different TSP signal phasing plans are considered: green extension, red truncation, and phase insertion.

Biobjective Optimization Model. The proposed biobjective optimization framework is described in Eq. (1). Both the total passenger delay D of the stop-to-stop segment and the delay deviation ΔD at the bus service stop enter the objective functions for TSP optimization:

min { D(g, p), ΔD(g, p) }   s.t.  x_max ≤ 0.9,  i ∈ I,  r ∈ R,  b ∈ B.   (1)
D(g, p) denotes the total passenger delay at the intersections on each stop-to-stop segment given the assigned green times g and signal phasing plans p. In this paper, the durations of the TSP green times and the signal phasing plans are optimized with fixed cycle lengths. I is the set of all intersections on the stop-to-stop segment, R is the set of all transit routes on the segment, and B is the set of all buses on one transit route. d_{r,b} and the corresponding scheduled value are the actual and scheduled passenger delays for bus b of route r at the bus service stop. λ and x are the green time ratio and the degree of saturation of general vehicles, respectively, for a phase of a cycle at an intersection.

Passenger Delay at Intersection. The total passenger delay at the intersections on the stop-to-stop segment consists of the general vehicle delay, transit vehicle delay, pedestrian delay, and bicycle delay.

(1) General Vehicle Delay. Based on Webster's delay formula, the average delay d for general vehicles in a phase of a cycle at one signalized intersection is

d = C(1 − λ)² / (2(1 − λx)) + x² / (2q(1 − x)),

where C is the cycle length and d, q, λ, and x are the average delay, arrival rate, green time ratio, and degree of saturation of general vehicles for the phase of the cycle at the intersection, respectively. The total general vehicle passenger delay at one intersection is obtained by summing, over the set of all cycles at the intersection during the analysis period and the set of all phases of a cycle, the product of the average delay, the number of arriving vehicles, and the average passenger occupancy of general vehicles for the phase of the cycle.

(2) Transit Vehicle Delay. The average delay of transit vehicles in a cycle at one signalized intersection is formulated analogously, with the average delay, arrival rate, green time ratio, and degree of saturation now referring to transit vehicles in the cycle. The passenger delay of transit vehicles at one intersection is then obtained by weighting with the average passenger occupancy of buses in the cycle.

(3) Pedestrian Delay. Figure 2 illustrates the cycle-by-cycle cumulative flow patterns of pedestrians passing through one approach of the signalized intersection. [Figure 2: Illustrative pedestrian delay patterns at an intersection; the vertical axis is the cumulative number of pedestrians.] The arrival rate and the average departure rate of pedestrians within each signal cycle are denoted by q_p(t) and s_p(t), respectively. The shaded area W_EB, bounded by the arrival rate curve, the departure rate curve, and the time axis, is the total pedestrian delay on the eastbound approach of the signalized intersection in a cycle. The pedestrian delay formula involves the green time and the clearance time for pedestrians to walk through the intersection; the pedestrian clearance time in each cycle can be generated from the intersection geometry and the pedestrian walking speed. The total pedestrian delay W at a signalized intersection per cycle is

W = W_SB + W_NB + W_EB + W_WB,

where W_SB, W_NB, and W_WB are the pedestrian delays on the southbound, northbound, and westbound approaches of the signalized intersection in the cycle, respectively.

(4) Bicycle Delay. At most signalized intersections, bicycles usually utilize the pedestrian green time to pass the intersection due to the lack of a dedicated signal phase for bicycles. Therefore, the bicycle delay on an approach of the signalized intersection in a cycle can be calculated based on the same model.
The total delay of bicycles at a signalized intersection within each cycle is the sum over the four approaches,

W_bike = W_bike,SB + W_bike,NB + W_bike,EB + W_bike,WB,

where W_bike,SB, W_bike,NB, W_bike,EB, and W_bike,WB are the bicycle delays on the SB, NB, EB, and WB approaches of the signalized intersection, respectively. The arrival rate and departure rate of bicycles at an intersection in a cycle are denoted by q_b(t) and s_b(t), respectively.

Passenger Delay at Bus Stop. Figure 3 illustrates the waiting delay patterns of passengers at a bus stop bay with each bus arrival. The arrival rate and boarding rate of passengers for bus b of route r at the bus stop are h_{r,b}(t) and g_{r,b}(t), respectively. The crossing point of h_{r,b}(t) and g_{r,b}(t) marks the completion of passenger boarding for the bus on route r. H_{r,b} denotes the headway between bus b and bus (b − 1) on route r, and the dwell time of bus b at the bus stop on route r is determined from these quantities. The shaded area W_{r,b}, surrounded by the h_{r,b}(t) and g_{r,b}(t) curves and the time axis, is the passenger delay for bus b of route r at the bus stop. The passenger delay for each arriving transit vehicle at the bus stop thus includes the waiting time of the passengers of all buses that are served at the stop bay.

Figure 4 illustrates a sample four-phase ring diagram with the three transit signal priority strategies: green extension, red truncation, and phase insertion. Phase 1 includes a through and right movement for vehicles and a through movement for pedestrians on the major street. Phase 2 includes a left-turn movement for the main street. Phase 3 consists of a through and right movement for vehicles and a through movement for pedestrians on the minor street. Finally, Phase 4 has one left-turn phase on the minor street. The bus movements follow the through movement in the major direction. The original phase plan without TSP is Plan 1. Plan 2 is the TSP-based phase plan with green phase extension, where g_ge is the duration of the green time extension for bus priority. Plan 3 is the TSP phase plan with red phase truncation, where g_rt is the duration of the red time truncation. Plan 4 is the TSP plan with phase insertion, where the Phase 1 green time is split into Δ_pi and the remaining green time g_1^pi. In this study, we assume that the cycle length is constant and the phase plan is predetermined and fixed. Signal phasing plans with the three TSP strategies (green extension, red truncation, and phase insertion) are selected based on their priority rules. The green time durations of the three TSP scenarios for transit vehicles are calculated from the green time extension, the red time truncation, and the lost time in each cycle of the original phasing plan without TSP control. g_j, g_j^ge, g_j^rt, and g_j^pi are the green times of the original phasing plan without TSP control, the TSP plan with green extension, the TSP plan with red truncation, and the TSP plan with phase insertion in phase j, respectively. Δ_pi is the yellow time of the phase-insertion TSP plan for the inserted phase.

Genetic Algorithm Solution. Genetic algorithms (GAs) are heuristic optimization methods based on the mechanisms of natural selection and evolution [26]. A GA cannot guarantee a globally optimal solution but can reach a near-optimal solution within reasonable time. In this paper, a GA is applied to optimize the signal timing plans in the proposed biobjective optimization framework for TSP problems. The biobjective optimization framework is solved by using the Genetic Algorithm Toolbox of MATLAB R2014a.
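To make the objective evaluation concrete, the following is a minimal sketch of how a candidate TSP timing plan might be scored under Eq. (1); the paper uses MATLAB's Genetic Algorithm Toolbox, so this Python version and its weighted-sum scalarization are illustrative assumptions only.

```python
import math

def webster_delay(C, lam, x, q):
    """Average vehicle delay (s) per Webster's steady-state formula:
    d = C(1-lam)^2 / (2(1-lam*x)) + x^2 / (2q(1-x))."""
    return C * (1 - lam) ** 2 / (2 * (1 - lam * x)) + x ** 2 / (2 * q * (1 - x))

def plan_cost(phases, schedule_dev, C=160.0, occupancy=1.8, w=(0.5, 0.5)):
    """phases: list of (q, lam, x) per phase, with q in veh/s. Returns the
    weighted sum of total passenger delay and schedule deviation; the
    weights w are illustrative, not the paper's calibration."""
    if any(x >= 0.9 for _, _, x in phases):   # constraint x_max <= 0.9
        return math.inf
    delay = sum(webster_delay(C, lam, x, q) * q * C * occupancy
                for q, lam, x in phases)      # delay * arrivals * occupancy
    return w[0] * delay + w[1] * schedule_dev

# Example: compare a base plan with a hypothetical green-extension plan.
base = plan_cost([(0.20, 0.45, 0.85), (0.10, 0.30, 0.70)], schedule_dev=40.0)
ge   = plan_cost([(0.20, 0.50, 0.78), (0.10, 0.25, 0.82)], schedule_dev=32.0)
print(base, ge)
```

A GA would then evolve the green-time vector that defines each phase's λ, keeping only candidates that satisfy the saturation constraint.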
Experiment Design. In this study, field data collected from Caochangmen Boulevard in Nanjing, China, are used in the numerical experiments for the proposed TSP models. Caochangmen Boulevard is a major commuting arterial corridor. Figure 5 shows the stop-to-stop segment for TSP optimization. Table 1 summarizes the traffic volumes in 15-minute intervals. The flow data were collected during six days, from May 19 to May 21, 2015, and April 12 to April 14, 2016. Figure 6 illustrates the original signal phase plan (without TSP strategy) for the intersection of Caochangmen Boulevard and Longyuanxi Avenue. The signal cycle at this intersection is 160 seconds. Phase 1 is provided for the EB and WB through movements for vehicles, pedestrians, and bicycles. Phase 2 is the right-turn phase for all approaches. Phases 3 and 5 are, respectively, the left-turn phases for Caochangmen Boulevard and Longyuanxi Avenue. Phase 4 is provided for the NB and SB through movements for all travel modes.

The geometric conditions and traffic volumes for the intersection of Caochangmen Boulevard and Xingjian Street are shown in Figure 7 and Table 2. The westbound approach has two through lanes, one shared through/right-turn lane, and two exclusive left-turn lanes. The eastbound approach has three through lanes, one exclusive right-turn lane, and one exclusive left-turn lane. The northbound approach has one through lane, one exclusive right-turn lane, and one exclusive left-turn lane. The southbound approach is a one-way street allowing only NB traffic. Figure 7 also illustrates the original signal phase plan (without TSP strategy) for this intersection, whose signal cycle is likewise 160 seconds. Phase 1 is provided for the EB and WB through movements. Phase 2 is the right-turn phase for all approaches. Phase 3 is the left-turn phase for the EB and WB approaches. Phase 4 is provided for the northbound and southbound through and left-turn movements of vehicles, pedestrians, and bicycles.

Several preliminary experiments were performed to determine the lane capacity, which is approximately 1700 vehicles per lane per hour [27]; the bus lane capacity is approximately 850 buses per lane per hour [28]. The average passenger occupancy of general vehicles at these two intersections during the rush hour was observed in the field to be 1.8 persons per vehicle. The passenger occupancy of the bus routes is estimated based on automated passenger count (APC) data and the empirical calculation of the Nanjing Transit Agency. Table 3 lists the average passenger occupancy of all transit routes during the test days at the different approaches of the two intersections. The number of boarding passengers (BP) is higher than that of alighting passengers (AP) at the Longjiang bus stop in the morning rush hour for the five bus routes, and the number of BP directly affects the dwell time of buses in the morning rush hour.

Experimental Design. The numerical experiment is conducted with the following three assumptions: (1) The capacities for each approach of the signalized intersections are fixed and not affected by traffic operations. (2) Uniform arrivals of bicycles and pedestrians are assumed during each 15 minutes of the morning rush hour. (3) Uniform arrival rates are also assumed for passengers coming to the Longjiang bus stop during each 15 minutes of the morning rush hour. Several preliminary experiments were performed to determine the best operational parameters for the GA optimizer used in this study.
The results from those experiments led to the selection of the following GA parameters: (i) a population of 300 individuals. The field data collected from Caochangmen Boulevard in Nanjing, China, are used to build the bus operational scenarios. The proposed biobjective optimization framework is solved by the Genetic Algorithm Toolbox of MATLAB R2014a. The optimization variables are the durations of the TSP green times at the two intersections. Table 4 presents the passenger delay at the segment, the delay deviation at the bus stop, and the maximum degree of saturation for the approaches at the two intersections with and without TSP optimization. The optimized phasing plans for the green extension, red truncation, and phase insertion TSP strategies are generated by using the signal phasing allocation methods. The optimized TSP phasing plans for the three strategies at the two intersections of this segment are presented in Figures 10 and 11.

Results. Simulation Analysis. A VISSIM-based simulation platform (Figure 12) is established, and four phasing plans are simulated: non-TSP, TSP with green extension, TSP with red truncation, and TSP with phase insertion. The passenger delay at the intersections and bus stops of this experimental segment is calculated based on the simulation-based average vehicle delay, the observed traffic volume, and the observed occupancy. Table 5 summarizes the passenger delay at the intersection of Caochangmen Boulevard and Longyuanxi Avenue (ID1) and at the intersection of Caochangmen Boulevard and Xingjian Street (ID2), at the Longjiang bus stop (SD), and over the stop-to-stop segment (TD), together with the delay deviation at the bus stop (DD), for the four signal phasing plans. Figure 13 illustrates the passenger delay and deviation reduction ratios of the three TSP strategies compared with the original non-TSP signal phasing plan at the experimental segment in the morning rush hour. The results in Figure 13 demonstrate a significant reduction of over 6% in passenger delay and over 8% in schedule deviation at the test segment. The three TSP scenarios can significantly decrease passenger travel time on the targeted major bus routes and improve the schedule reliability of the transit system. The TSP plans with green extension and red truncation perform similarly in reducing passenger delay and schedule deviation. The green extension and red truncation plans have an advantage over the phase insertion plan in decreasing the total passenger delay at the bus stop-to-stop segment, with over 2% more reduction, while the latter performs better in reducing the delay deviation at the bus stop, with over 5% more reduction. Therefore, the GE and RT plans perform better in reducing the total segment delay, while the PI plan performs better in reducing the delay deviation at the bus stop.

In addition, the performance of the three optimized TSP plans varies under different saturation conditions. The traffic volume on Caochangmen Boulevard (west-east) between the two intersections in the morning rush hour is computed as 1841 pcu according to the data in Tables 1 and 2. The reduction rates differ across the 15-minute intervals of the morning rush hour: the TSP strategies do not perform as well in the most congested intervals as in the first and the last 15-minute periods. Therefore, the ability of the three TSP scenarios to reduce delay and improve the reliability of the transit system is weakened under saturated and oversaturated flow conditions.
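As a small illustration of how the reduction ratios in Figure 13 are formed, the sketch below computes the percentage reduction of each TSP plan relative to the non-TSP baseline; the delay values are placeholders, not the study's results.

```python
# Placeholder values standing in for the simulated delays of Table 5.
non_tsp = {"passenger_delay": 100.0, "schedule_deviation": 50.0}
tsp_plans = {
    "green_extension": {"passenger_delay": 92.0, "schedule_deviation": 45.0},
    "red_truncation":  {"passenger_delay": 92.5, "schedule_deviation": 45.5},
    "phase_insertion": {"passenger_delay": 94.0, "schedule_deviation": 43.0},
}

for name, res in tsp_plans.items():
    for metric, base in non_tsp.items():
        reduction = 100.0 * (base - res[metric]) / base
        print(f"{name} {metric}: {reduction:.1f}% reduction vs non-TSP")
```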
Conclusion. This paper presents a biobjective TSP optimization framework that can provide effective priority control for transit requests while minimizing both the total passenger delay on each stop-to-stop segment (including the adjacent signalized intersections and downstream bus stops) and the schedule deviation at bus stops. The biobjective optimization model is presented to calculate the duration of the allocated transit green time, and the signal phasing allocation method is proposed to generate the TSP phasing plans. A numerical experiment is conducted by simulating bus operations with field volume and phasing data collected at one segment of Caochangmen Boulevard in Nanjing, China. The original non-TSP phasing plan and the optimized TSP phasing plans with the green extension, red truncation, and phase insertion strategies are simulated and evaluated using the VISSIM simulation platform. The case study results validate the effectiveness of the proposed framework, and the performance of the three transit signal priority plans is analyzed and evaluated under different traffic demand patterns during the morning rush hour. Future work includes more extensive numerical experiments and field tests to assess the effectiveness of the proposed model under the interactive effects between intersections and bus stops. Another potential extension is to expand the framework to urban traffic networks for systematic improvement of the efficiency and reliability of the transit system while minimizing the negative impact on general vehicles under complex traffic conditions.
4,856.6
2016-05-29T00:00:00.000
[ "Computer Science" ]
FRET Dyes Significantly Affect SAXS Intensities of Proteins Structural analyses in biophysics aim at revealing a relationship between a molecule’s dynamic structure and its physiological function. Förster resonance energy transfer (FRET) and small-angle X-ray scattering (SAXS) are complementary experimental approaches to this. Their concomitant application in combined studies has recently opened a lively debate on how to interpret FRET measurements in the light of SAXS data with the popular example of the radius of gyration, commonly derived from both FRET and SAXS. There still is a lack of understanding in how to mutually relate and interpret quantities equally obtained from FRET or SAXS, and to what extent FRET dyes affect SAXS intensities in combined applications. In the present work, we examine the interplay of FRET and SAXS from a computational simulation perspective. Molecular simulations are a valuable complement to experimental approaches and supply instructive information on dynamics. As FRET depends not only on the mutual separation but also on the relative orientations, the dynamics, and therefore also the shapes of the dyes, we utilize a novel method for simulating FRET-dye-labeled proteins to investigate these aspects in atomic detail. We perform structure-based simulations of four different proteins with and without dyes in both folded and unfolded conformations. In-silico derived radii of gyration are different with and without dyes and depend on the chosen dye pair. The dyes apparently influence the dynamics of unfolded systems. We find that FRET dyes attached to a protein have a significant impact on theoretical SAXS intensities calculated from simulated structures, especially for small proteins. Radii of gyration from FRET and SAXS deviate systematically, which points to further underlying mechanisms beyond prevalent explanation approaches. Introduction In the past decades, an enormous variety of protein structures has been accumulated experimentally by employing sophisticated high-resolution techniques such as X-ray crystallography or nuclear magnetic resonance spectroscopy (NMR). [1] With cellular function, however, being dictated by the interplay between static structures and dynamic conformational changes, alternative methods have been catching up so as to elucidate the dynamic nature of the structure-function paradigm. Förster resonance energy transfer (FRET) and small-angle X-ray scattering (SAXS) are particularly popular approaches to this and complementary to the aforementioned methods. SAXS can be used to study average structures of various systems and enables even time-resolved analyses of conformational transitions in direct response to altered external conditions. [2] A solution of biomolecules is exposed to X-rays and the integrated scattered intensity is recorded in the small-angle regime, which contains information on structural features of the solute molecules. FRET provides access to time-resolved distance information on, e.g., folding dynamics, [3] intermediate structures, [4,5] and function-related conformational transitions. [6] After labeling specific molecular sites with fluorescent dyes, the distance-dependent energy transfer efficiency between them is measured. Both FRET and SAXS are widely applied for analysis of unfolded and intrinsically disordered proteins (IDPs). 
[7,8] The characteristics of such systems are of great interest due to their relevance to folding and the physiological prevalence of partly and entirely unstructured proteins, as, despite lacking a definite structure, IDPs fulfill important functional roles. Polymer physics is applied to understand the dynamics of unstructured proteins with their high conformational diversity and to further relate their properties to folding and function, and the validity of such approaches has been studied extensively in the context of FRET and SAXS. [8-11] More recently, there has been an ongoing discussion on how to interpret FRET measurements in the light of SAXS data, especially for IDPs and unfolded ensembles. A popular structural quantity equally derived from FRET and SAXS is the radius of gyration R_g, a measure of overall molecular size. Important questions are how to mutually relate and interpret derived values of R_g obtained by either of the methods, and to what extent FRET dyes influence SAXS intensities in concomitant applications. Recent studies find that FRET implies IDPs to be compacted in water in comparison with high denaturant concentrations, while this compaction could not be validated with SAXS, which is known as the so-called SAXS-FRET controversy. [12-14] For globular proteins, theory and simulation predict the dimensions of unfolded conformations to decrease with the denaturant concentration. Whereas the interpretation of single-molecule FRET data supports this prediction, SAXS data point to the opposite. [15] Based on theoretical considerations, simulations, and new experimental data, Thirumalai et al. found that the sizes of unfolded states of globular proteins have to decrease as the denaturant concentration goes down, and stated that compaction of unfolded proteins is universal. [15] In this context, water's critical role as a solvent further comes to the fore. [9,15,16] These findings are in accordance with results by Reddy et al., who studied the SAXS-FRET controversy in coarse-grained simulations including denaturant using the example of Ubiquitin. [11] A possible explanation for these at first glance contradictory observations is a decoupling of size and shape fluctuations, leading to the conclusion that FRET and SAXS do not measure the same quantity but are complementary approaches. [17] Fuertes et al. hypothesize proteins to be subject to a sequence-specific decoupling of the end-to-end distance R_e measured by FRET and the radius of gyration R_g deduced from SAXS; as heteropolymers, proteins may exhibit diverse R_g-R_e relationships. [18] Other studies assume the analysis methods to be the primary source of the apparent discrepancies. [19] Based on combined FRET and SAXS studies of unfolded proteins and IDPs, Borgia et al. suggest SAXS measurements to be basically model-free, whereas the interpretation of FRET data always relies on a model such as a Gaussian or excluded-volume chain to relate R_g and R_e. [20] Zheng et al. performed explicit-solvent MD simulations of a 79-residue IDP, revealing potential discrepancies between FRET and SAXS for this particular system. [21] However, it remains unclear whether and, if so, to what extent FRET dyes influence SAXS measurements, and how distinct calculation methods for R_g differ with respect to their results. Molecular simulations are the ideal tool to clarify these issues. They can be applied to study the influence of FRET dyes on scattering patterns and give access to all different variants of R_g.
Here, we illuminate the interplay of combined FRET and SAXS from a computational simulation point of view. Molecular simulations are a valuable complement to experiments and, depending on their complexity, provide insightful information up to the atomistic dynamics of a system. FRET does not directly access quantitative information about molecular distances, but measures the energy transfer efficiency between the dyes. This efficiency depends not only on the separation distance of the dyes but also on their relative orientation and dynamics, which can be observed best within molecular simulations. We consider a novel method for simulating FRET-dye-labeled proteins using native structure-based models (SBMs) on the atomistic level. [22,23] Based on energy landscape theory and the principle of minimal frustration, [24-27] SBMs probe dynamics arising from the system's native geometry. [28] By this means, force field complexity is drastically decreased without loss of substantial information on the system's characteristics, resulting in improved sampling and high computational efficiency. In particular, such models enable thorough sampling of large conformational ensembles such as intrinsically disordered or unfolded systems. Using the simulation protocol by Reinartz et al., [23] we calculate theoretical SAXS curves from molecular simulations of four different proteins with and without dyes. By comparing these intensities, we investigate the influence of FRET dyes on scattering curves from SAXS for both folded and unfolded ensembles. Furthermore, we derive and compare different variants of the radius of gyration as a particularly popular quantity accessible in both FRET and SAXS. In doing so, we hope to make an important contribution to elucidating the relationship and interplay between the experimental methods of FRET and SAXS.

Förster Resonance Energy Transfer. Förster resonance energy transfer (FRET) [30] is a mechanism describing non-radiative energy transfer between two light-sensitive molecules. An electronically excited donor may transfer energy to an acceptor via non-radiative dipole-dipole coupling. The efficiency of this energy transfer depends on the sixth power of the distance between donor and acceptor. FRET consequently is extremely sensitive to small distance changes in the nanometer range and is also referred to as a "spectroscopic ruler". [31] By labeling specific protein residues with suitable dyes as illustrated in Figure 1, different conformations become distinguishable and conformational changes can be observed directly through changes in the spatial dye separation. Experimentally, the FRET efficiency E is measured, which depends on the inter-dye distance R_DA as [32]

E = 1 / (1 + (R_DA / R_0)⁶).

The Förster radius R_0 is given by the donor-acceptor distance at which E equals 0.5. It depends on the relative orientations of donor and acceptor, represented by the dipole orientation factor κ², as R_0⁶ ∝ κ². [32] Rotational dye diffusion is usually assumed to be fast with respect to the lifetime of the excited state, yielding a constant value of κ² = 2/3 in the "isotropic averaging regime". [32] In contrast to this, dye molecules are modeled explicitly at atomistic resolution in the structure-based protocol for the simulation of dye-labeled proteins by Reinartz et al. [23] applied here. Thus, κ² can be calculated directly from such simulations without further approximations.
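The two FRET observables discussed above are straightforward to evaluate from simulation snapshots; the following is a minimal sketch, with illustrative function names, of the transfer efficiency E(R_DA) and of κ² computed from explicit donor and acceptor transition dipole vectors.

```python
import numpy as np

def fret_efficiency(r_da: float, r0: float) -> float:
    """FRET efficiency E = 1 / (1 + (R_DA / R_0)^6)."""
    return 1.0 / (1.0 + (r_da / r0) ** 6)

def kappa_squared(mu_d: np.ndarray, mu_a: np.ndarray, r: np.ndarray) -> float:
    """Orientation factor kappa^2 = (cos(theta_T) - 3 cos(theta_D) cos(theta_A))^2
    for donor/acceptor transition dipoles mu_d, mu_a and separation vector r."""
    mu_d = mu_d / np.linalg.norm(mu_d)
    mu_a = mu_a / np.linalg.norm(mu_a)
    r_hat = r / np.linalg.norm(r)
    cos_t = mu_d @ mu_a                 # angle between the two dipoles
    cos_d, cos_a = mu_d @ r_hat, mu_a @ r_hat
    return (cos_t - 3.0 * cos_d * cos_a) ** 2

# Example: collinear dipoles along the separation vector give kappa^2 = 4,
# the maximum; isotropic averaging would instead yield 2/3.
print(kappa_squared(np.array([1.0, 0, 0]), np.array([1.0, 0, 0]),
                    np.array([1.0, 0, 0])))
print(fret_efficiency(r_da=50.0, r0=50.0))   # 0.5 at R_DA = R_0
```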
Small-angle X-ray Scattering. Small-angle X-ray scattering (SAXS) is an efficient tool for low-resolution structural characterization of dissolved biomolecules. [2,33] A solution of proteins is exposed to X-rays with wavelength λ. The integrated intensity from elastic scattering is measured in the small-angle regime as a function of the momentum transfer q = 4π sin(θ)/λ, where 2θ is the scattering angle. SAXS records the scattering intensity averaged over the entire conformational ensemble and all possible orientations of the solute molecules. Ideally, this isotropic intensity distribution is proportional to the spatially averaged scattering from a single particle. The net solute scattering, in turn, is related to the electron density difference between solute and solvent. The spherically averaged scattering intensity I of a molecule modeled as a collection of elementary scatterers, e.g., atoms or amino acids, can be calculated via the Debye equation: [34]

I(q) = Σ_i Σ_j f_i(q) f_j(q) sin(q r_ij) / (q r_ij),

where r_ij is the distance between two scatterers i and j, and f_i and f_j are the corresponding form factors. Different parts of such an intensity pattern provide information about different structural features. However, it is important to note that the signal-to-noise ratio of experimentally measured intensities decreases rapidly with increasing momentum transfer q. For small q, the intensity can be described by the Guinier approximation: [35]

I(q) ≈ I(0) exp(−q² R_g² / 3).

Accordingly, R_g can be extracted from the slope of the curve in a Guinier plot. Note that the Guinier approximation is only valid for qR_g < 1.3 for globular proteins [2] and in an even smaller range for elongated structures.
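A minimal sketch of the Debye equation and a Guinier fit is given below, assuming point scatterers with constant form factors f_i = 1; the study itself uses CRYSOL, which additionally models atomic form factors and the hydration shell.

```python
import numpy as np

def debye_intensity(coords: np.ndarray, q: np.ndarray) -> np.ndarray:
    """I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij), with f_i = 1.
    coords: (N, 3) scatterer positions; q: (M,) momentum transfer values."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=-1)               # pairwise distances r_ij
    qr = q[:, None, None] * r[None, :, :]
    # sin(qr)/(qr) with the limit value 1 at qr = 0 (self terms).
    sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
    return sinc.sum(axis=(1, 2))

def guinier_rg(q: np.ndarray, intensity: np.ndarray, qmax: float) -> float:
    """Extract R_g from the slope of ln I versus q^2 in the Guinier region
    (valid roughly for q * R_g < 1.3 for globular particles)."""
    mask = (q > 0) & (q <= qmax)
    slope, _ = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
    return float(np.sqrt(-3.0 * slope))             # slope = -R_g^2 / 3
```

Applied to a simulated ensemble, the intensities of individual snapshots would be averaged before the Guinier fit, mirroring the ensemble averaging inherent to the experiment.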
Structure-based Models. Gō-type or structure-based models (SBMs) provide a minimalistic description of biomolecular dynamics arising from the native geometry. Giving access to biologically relevant timescales, computationally efficient SBMs provide rich information on the system's characteristics. Successful applications cover a wide range of protein dynamics such as folding pathways [36-41] and kinetics. [42] SBMs are also employed for structure prediction, [43-46] integrative structural modeling of experimental data from, e.g., SAXS [47] or cryo-EM, [48] and investigation of transition state ensembles. [49,50] Founded on energy landscape theory and the principle of minimal frustration, protein dynamics are modeled based on the assumption that native interactions are generally stabilizing, whereas non-native interactions are only included to preserve excluded volume. [24-28] The essential part lies in the so-called contact potential. Each native contact, defined by a pair interaction between atoms spatially close in the native state, is assigned an attractive potential, whereas a purely repulsive excluded-volume term is included for all atom pairs. As a result, an overall energetic drive to the native structure overtops kinetic traps which would originate from non-native interactions. We use an all-atom SBM taking into account all heavy atoms of the protein [22] as implemented in eSBMTools. [38] With native bond lengths r_0, bond angles θ_0, and proper and improper dihedral angles φ_0 and χ_0, the simplified potential takes the standard all-atom SBM form

V_SBM = Σ_bonds K_r (r − r_0)² + Σ_angles K_θ (θ − θ_0)² + Σ_impropers K_χ (χ − χ_0)² + Σ_dihedrals K_φ F_D(φ − φ_0) + Σ_contacts C_G(r_ij) + Σ_non-contacts K_NC (σ_NC / r_ij)¹²,   (4)

with the dihedral function F_D(φ) = [1 − cos φ] + ½[1 − cos 3φ]. Numerical values of the energetic weights K, the excluded volume for Pauli repulsion, and the functional form of the Gaussian contact potential C_G can be found in Supplementary Information S2.1 (see also Refs. [23] and [52]).

[Figure 1: Tenth type III domain of fibronectin (10FNIII, PDB code: 1TTG [29]) with AF 546 (AF546, blue) and AF 647 (AF647, red) dyes attached at residues 11 and 86, respectively. The Cα atoms of these residues are shown as blue and red spheres. The inter-dye distance R_DA and the Cα distance R_Cα are marked.]

Simulation of Dye-labeled Proteins. To simulate protein systems with dye pairs attached, we apply a novel structure-based simulation protocol developed by Reinartz et al. [23] In this method, quantum-chemical calculations are initially carried out to obtain three-dimensional dye structures from available chemical structures. Subsequently, linkers are added to bind the dyes to the protein. The dyes are parametrized for inclusion into the SBM, where the only interaction considered is excluded-volume repulsion. [23] In a last step, they are attached, preferably orthogonally, to the protein surface. Simulations are run in GROMACS v4.5.4 [53] using the structure-based potential introduced in Eq. (4) and molecular dynamics parameters as described in Ref. [23] (see also Supplementary Information S2).

Proteins. As a first test system, we use the 94-residue tenth type III module of fibronectin (10FNIII, PDB code: 1TTG [29]) depicted in Figure 1. Fibronectin is a homodimeric glycoprotein of the extracellular matrix. It plays a major role in cell adhesion, growth, migration, and differentiation, and is important for wound healing and embryonic development. [54] Altered expression, degradation, and organization of this protein have been associated with several pathologies, including cancer and fibrosis. [55] Chymotrypsin inhibitor 2 (CI-2, PDB code: 2CI2 [56]) is a widely studied and well-understood 83-residue serine proteinase inhibitor from barley seeds. It was among the first proteins to have its folding/unfolding transition state extensively characterized by the protein engineering method. [57,58] Its denatured state and folding were subsequently characterized by NMR and hydrogen exchange. [59-61] We study the globular 66-residue cold shock protein from Thermotoga maritima (CspTm, PDB code: 1G6P [62]) as a third system. Upon a rapid temperature decrease, many bacteria produce small cold shock proteins. During cold shock, the efficiency of transcription and translation is reduced due to the stabilization of nucleic acid secondary structure. Cold shock proteins are thought to counteract this as nucleic acid chaperones by preventing the formation of messenger RNA secondary structure at low temperature. Cytolysin A (ClyA) of Escherichia coli is a pore-forming hemolytic toxin. This protein exists as a monomer of 303 residues (PDB code: 1QOY [63]) and undergoes a conformational change to the protomer before assembling into a dodecameric pore (PDB code: 2WCD). [64]

Dyes. We use two pairs of the Alexa Fluor (AF) family of fluorescent dyes, [65] which are frequently applied as cell and tissue labels in fluorescence microscopy. The excitation and emission spectra of the AF series cover the visible spectrum and extend into the infrared. Individual members are numbered according to their approximate excitation maxima (in nm). We use the AF 488 dye with C5 linker (AF488) and the AF 594 dye with C5 linker (AF594), as well as the AF 546 dye with C5 linker (AF546) and the AF 647 dye with C2 linker (AF647). Additionally, we use the Biotium dye CF680R (B680) for simulations with three dyes. [66] Figures 1 and 2 show examples of the studied systems. A detailed list and depictions of all composite systems can be found in Supplementary Information S1.
For structures and parameters of the dyes, see Ref. [23].

Calculation of SAXS Profiles from Structural Models. From a computational simulation perspective, the theoretical calculation of accurate scattering patterns from atomic positions is a key factor for the successful analysis and interpretation of SAXS data. Existing methods can be divided into implicit- and explicit-solvent approaches. One drawback of the computationally more efficient and widely used implicit-solvent methods is their dependence on several non-trivial free parameters, the most prominent example being the excess density of the solvation shell. Given experimental data, the latter can be determined by a least-squares fit of the forwardly calculated curve, at the risk of overfitting. Otherwise, it is set to 10% to 15% of the bulk water electron density. [67] At this point, it is important to note that it may have different values for folded and unstructured proteins depending on their specific solvation properties. [68] We apply the popular implicit-solvent method CRYSOL, which uses a multipole expansion to evaluate spherically averaged scattering patterns from biomolecular structures. [67] To simulate the primary hydration layer, the solvation shell is approximated by a border layer of 3 Å effective thickness and excess density δρ with respect to the average density of free bulk water, ρ_0 = 0.334 e Å⁻³. [67] According to Henriques et al., δρ can substantially influence SAXS curves forwardly calculated from structural models, especially for unfolded proteins, and small variations in δρ can change computed radii of gyration by 5% to 10%. [68] They report that the CRYSOL default value of 0.03 e Å⁻³ yields suboptimal results and generally suggest lower solvation shell contrasts between 0.01 e Å⁻³ and 0.02 e Å⁻³. While a value of 0.0125 e Å⁻³ is recommended for folded proteins, specifying a single density contrast is not valid for disordered proteins. [68] To assess the influence of δρ on R_g for the systems studied here, we conduct a sensitivity analysis and compare derived values of R_g for different values of δρ in the range of 0.00 e Å⁻³ to 0.03 e Å⁻³. Results can be found in Supplementary Information S6. As expected, different values slightly affect the SAXS-derived R_g, which generally increases with δρ. With the exception of δρ = 0.00 e Å⁻³, which neglects the solvation shell completely, we find the overall trend discussed in Section 3 to be preserved among the different values of δρ.

Radius of Gyration. A popular structural feature derived from both FRET and SAXS is the radius of gyration R_g, a measure of a molecule's spatial extent. GROMACS calculates it as [53]

R_g = sqrt( Σ_i m_i r_i² / Σ_i m_i ),   (5)

where r_i is the distance of atom i to the molecular center of mass and m_i is the atomic mass deposited in a GROMACS parameter file.

Determination of Different R_g Variants. To analyze different R_g variants in the context of FRET and SAXS, we first calculate a "true" reference value R_g,gmx from the molecular model of the protein without dyes using GROMACS (see Eq. (5)). Second, we consider the corresponding value R_g,gmx(+dyes) computed from the molecular model with dyes. Analysis of the Guinier region in SAXS provides two additional values, R_g,saxs and R_g,saxs(+dyes). Due to the occasionally narrow Guinier region, these may contain errors, in particular for large elongated systems such as the unfolded monomer and protomer (see also Supplementary Information S5).
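The mass-weighted radius of gyration of Eq. (5) is straightforward to reproduce for plain coordinate and mass arrays; this sketch mirrors the GROMACS definition.

```python
import numpy as np

def radius_of_gyration(coords: np.ndarray, masses: np.ndarray) -> float:
    """R_g = sqrt( sum_i m_i |r_i - r_com|^2 / sum_i m_i ), where r_com is
    the mass-weighted center of mass; coords is (N, 3), masses is (N,)."""
    com = np.average(coords, axis=0, weights=masses)
    sq_dist = np.sum((coords - com) ** 2, axis=1)
    return float(np.sqrt(np.average(sq_dist, weights=masses)))

# Comparing this "true" R_g with the Guinier-derived value for the same
# ensemble is exactly the comparison of R_g variants discussed below.
```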
For unfolded proteins, we calculate R_g as done previously in experimental work. [20] The proteins are assumed to behave like excluded-volume chains [23,69] (see also Supplementary Information S3). To mimic dyes attached to the chain termini, we analyze truncated systems: based on simulations of the respective full systems, we only consider atoms of residues between the dye positions, neglecting the remaining residues in all calculations. We then extract the end-to-end Cα distance R_e (corresponding to the Cα distance between the dye-labeled residues) to calculate the apparent radius of gyration for excluded-volume chains. [20] FRET is often assumed to measure the distance between the dye-labeled Cα atoms, thus ignoring contributions of the dyes' linkers. Modeling the linkers as chain extensions of a certain additional sequence length L, the inter-dye distance R_DA can be rescaled to a corresponding Cα distance via a factor f [20] that depends on the number of considered residues between the dye-labeled sites, N_inter-dye, the scaling exponent ν, and the additional sequence length L for the dye pairs from [23]. With this factor, we obtain the apparent FRET-derived radius of gyration R_g,RDA(app).

Influence of FRET Dyes on SAXS Measurements. To start with, we examine the direct impact of FRET dyes on SAXS measurements and derived quantities. Recall that experimental SAXS data always reflect an average over all possible solute conformations and orientations. To mimic this ensemble average, we compute a representative intensity curve for each system by considering 5000 structures equidistantly distributed over one simulation. Having calculated the individual intensity of each structure with CRYSOL, [67] we determine the mean and standard deviation of the resulting array of intensities to obtain an ensemble-averaged curve and to assess the degree of agreement of SAXS curves from different conformations. We proceed accordingly for simulations of all folded and unfolded systems, each with and without dyes. From these representative intensity curves, SAXS-derived radii of gyration are computed and compared to a "true" reference given by the average R_g,gmx of each simulated ensemble, where single-frame values are calculated according to Eq. (5). To further investigate how the dyes influence the systems, we additionally study the R_g,gmx distributions, discussed in detail in Supplementary Information S4.

As shown in Figure 3a, the resulting intensity curves of 10FNIII with and without dyes exhibit considerable differences for the folded ensemble. Because dyes change both the size and the shape of a system, this is to be expected. Although rather small, a difference is still observable for the unfolded ensemble due to the increased chain size with dyes attached. Similar curve shapes indicate that the dyes have a minor but still visible influence in this case. For the larger system of the monomer, the differences in Figure 3b are almost insignificant, in accordance with our expectations.

[Figure 3: Average intensities (solid lines) versus momentum transfer q, along with each intensity-curve distribution's standard deviation over the corresponding single-frame intensities directly calculated from individual simulation snapshots (shaded area). This standard deviation is to be interpreted as the width of the intensity distribution at a particular q point rather than an actual "error" in the sense of statistical uncertainties or systematic deviations as they would occur in experimental data. In accordance with the R_g distributions (see Supplementary Information S4), the standard deviation may be considered a measure of conformational heterogeneity in the underlying simulated ensemble, which also shows in the fact that the standard deviations for unfolded systems are consistently larger than those for folded systems. Curves are depicted for both systems in the folded states without (green) and with dyes (red) and in the unfolded states without (blue) and with dyes (orange). More details on the systems can be found in Supplementary Information S1.]

All derived plots for 10FNIII are shown in Figure 4. The Guinier approximation in Figure 4a is only valid in a certain region where ln(I(q)/I(0)) versus q² can legitimately be approximated by a straight line in a linear fit. [Figure 4: (a) Transfer of the intensity curves from Figure 3 to the Guinier representation; the R_g errors resulting from the linear regression in the Guinier analysis are on the order of 10⁻² to 10⁻⁴ R_g and can be found in Supplementary Information S5. (b) The dimensionless Kratky plot gives information about the protein's conformations. Both plots are shown for 10FNIII in the folded states (green, red) and unfolded states (blue, orange), without and with dyes, respectively.] The slopes, from which the respective radii of gyration are extracted, are different for the folded and unfolded states as well as for the system with and without dyes. The Kratky plot in Figure 4b exhibits a distinct peak for the folded ensemble and a plateau for the unfolded ensemble, thus giving a perfect example of how this kind of analysis can be used to study molecular folding. In the folded case, the broader peak for the system with dyes suggests a less compact structure compared to the purely proteinic system.

Comparison of R_g Variants. Ratios of all R_g variants to the "true" R_g,gmx are presented in Figure 5 for the folded systems. R_g,gmx naturally depends on the protein only. With dyes attached, the systems appear to be larger, manifesting in a greater R_g,gmx(+dyes) with respect to R_g,gmx. As expected, the smaller the system, the more significant is this effect. SAXS-derived R_g values show a similar shift, i.e., R_g,saxs(+dyes) relative to R_g,saxs. Exempt from this is the ClyA protomer with dyes at positions 56/252, where R_g,gmx(+dyes) is actually lower than R_g,gmx. This can be explained by a center-of-mass shift due to the dyes in favor of a reduced radius of gyration (see Figure 2). As evident from CspTm and the ClyA monomer and protomer, different dye positions affect R_g only marginally. In contrast, dye types seem to have a more pronounced effect, as shown for CI-2. We find that R_g variants derived from SAXS apparently overestimate R_g for small systems, while underestimating it for larger systems in some cases. For the smaller proteins 10FNIII, CI-2, and CspTm, SAXS-derived R_g values are consistently larger than those calculated with the mass-weighted formula in Eq. (5). This is true for systems with and without dyes as well as for folded and unfolded conformations, in full accordance with the expectations. This overestimation could be triggered by the CRYSOL method for calculating SAXS profiles from structural models, which takes into account the hydration shell, or arise from neglecting hydrogen atoms in the molecular model. The only exception from this typical behavior is ClyA in its elongated monomer and protomer configurations.
Here, all values are located in a very narrow range, and the SAXS-derived R_g are similar to or slightly smaller than the corresponding R_g,gmx. We assume this counter-intuitive behavior to be caused by a rather narrow Guinier region, which likely results in a greater error in the linear regression.

Analogous results for the unfolded systems are depicted in Figure 6. As apparent from CI-2 and CspTm, R_g,gmx is affected by both dye types and positions here, suggesting a subtle but perceivable influence of the dyes on the chain dynamics. Just as for the folded systems, R_g,gmx(+dyes) is consistently larger than R_g,gmx. This effect is related to the dyes' separation in the protein sequence and can be illustrated using the examples of the monomer and protomer. With dyes attached to the termini affecting the occupied volume to a greater extent than if attached in the middle, the observed shift increases with the dye separation. The more peripheral the dye-labeling positions in a protein sequence, the more the dyes with their linkers increase the dimensions of a system as reflected by R_g, in particular for completely elongated unfolded conformations. For the smaller systems CI-2, CspTm, and 10FNIII, R_g,saxs and R_g,saxs(+dyes) show the expected tendency, just as for the folded case. For the larger ClyA monomer and protomer, we find the SAXS-derived values of R_g,saxs and R_g,saxs(+dyes) to be almost identical to the respective references R_g,gmx and R_g,gmx(+dyes).

[Figure 5: R_g variants for the folded systems with respect to R_g,gmx (green line), given at the bottom in Å. We study 10FNIII, CI-2 with two different dye pairs (AF546/AF647 and AF488/AF594), CspTm with AF488/AF594 at three different labeling positions, and the ClyA monomer, protomer, and dodecamer with AF488/AF594/(B680) at different labeling sites. More details on the systems can be found in Supplementary Information S1. R_g values calculated from atomic structures with dyes (R_g,gmx(+dyes), red) and those derived from SAXS curves without (R_g,saxs, blue) and with dyes (R_g,saxs(+dyes), orange) are depicted. R_g errors derived from the Guinier linear regression are listed in Supplementary Information S5.]

[Figure 6: R_g variants for the unfolded systems with respect to R_g,gmx (green line), given at the bottom in Å. The systems studied are 10FNIII, CI-2 with two different dye pairs (AF546/AF647 and AF488/AF594), CspTm with AF488/AF594 at three different labeling positions, and the ClyA monomer and protomer with AF488/AF594/(B680) at different labeling sites. More details on the systems can be found in Supplementary Information S1. R_g values calculated from atomic structures with dyes (R_g,gmx(+dyes), red) and those derived from SAXS curves without (R_g,saxs, blue) and with dyes (R_g,saxs(+dyes), orange) are depicted. R_g errors derived from the Guinier linear regression are listed in Supplementary Information S5.]

Finally, we analyze R_g variants obtained from end-to-end distances, presented in Figure 7. Here, we only consider the residues between the dye positions to mimic labeling at the termini. The ratios of R_g,gmx(+dyes) and R_g,saxs(+dyes) with respect to R_g,gmx are in good agreement and both shifted to higher values as before (see Figure 6). R_g,saxs, R_g,Ca(app), and R_g,RDA(app) are all very similar to R_g,gmx. R_g,RDA(app) is consistently larger than R_g,saxs, pointing to a small systematic difference in the quantities accessible to FRET and SAXS. Note that investigating IDPs in varying denaturant concentrations, as done experimentally [20] and in explicit-solvent MD simulations, [21] is not yet possible within the structure-based simulation protocol.

[Figure 7: R_g variants for different truncated systems in the unfolded states with respect to R_g,gmx (green line), given at the bottom in Å. We study 10FNIII, CI-2 with two different dye pairs (AF546/AF647 and AF488/AF594), CspTm with AF488/AF594 at three different labeling positions, and the ClyA monomer and protomer with AF488/AF594 at different labeling sites. More details on the systems can be found in Supplementary Information S1. R_g values calculated from atomic structures with dyes (R_g,gmx(+dyes), red), those derived from SAXS curves without (R_g,saxs, blue) and with dyes (R_g,saxs(+dyes), orange), and apparent values calculated from the Cα end-to-end distance (R_g,Ca(app), brown) and the inter-dye distance (R_g,RDA(app), purple) are shown. R_g errors derived from the Guinier linear regression are listed in Supplementary Information S5.]

Conclusion. We find that FRET dyes attached to a protein significantly affect SAXS measurements on that system, as the dyes change both its size and shape. This effect is particularly pronounced for small proteins. In the case of unfolded ensembles, the difference is small but observable, while it is almost insignificant for larger systems. Systems appear to be larger with dyes than without, manifesting in a larger radius of gyration. In line with our expectations, the smaller the protein, the more significant is this effect. Dye types also show an effect on R_g. For unfolded ensembles, dye positions further affect the derived values, and our findings suggest a subtle but observable influence of FRET dyes on the chain dynamics. This means that, when performing both FRET and SAXS measurements on the same system, the respective effects have to be taken into account in the applied data analysis methods. We find that R_g variants derived from SAXS apparently overestimate R_g for small systems, while underestimating it slightly for some of the larger systems. As expected, SAXS-derived variants shift to higher values for systems with dyes attached. All R_g values derived by FRET and SAXS are in good agreement, consistent with prior work suggesting that the analysis methods are the primary source of the observed discrepancies. [19,20] However, we find the FRET-derived R_g variant to be consistently larger than the SAXS-derived value, pointing to a small systematic difference in the quantities accessible to FRET and SAXS.
7,519.4
2020-07-01T00:00:00.000
[ "Chemistry", "Materials Science", "Physics" ]
Thermal Spraying of Oxide Ceramic and Ceramic Metallic Coatings

Introduction

Thermal Spraying denotes a group of processes by means of which thin ceramic and ceramic-metallic (cermet) coatings can be applied to a vast variety of materials, the so-called substrates. The goal is to obtain considerably different characteristics on the surface of the component regarding the resistance against abrasion and corrosion, the electrical conductivity and many more. This chapter intends to give an overview of the different processes, the processable feedstock materials, the different areas of application and new developments in the field of Thermal Spraying.

Thermal spray processes and coatings' microstructure

All thermal spray processes make use of heat and kinetic energy to heat and propel feedstock material to build up a coating on the substrate. Often the goal is to melt the feedstock thoroughly in order to reach a dense microstructure, but in some cases the feedstock impinges in the solid state and is deformed by the kinetic energy, as the particles reach supersonic velocity before impact. Depending on the source of energy, distinctly different process characteristics and therefore visibly diverse microstructures and properties of the coatings are obtained. In the standard DIN EN 657 "Thermal Spraying" the different processes are distinguished by means of the energy source. The processes widely in operation are based on the energy sources flames and electric or gas discharges. Although laser-assisted spraying techniques are coming more and more into operation, they cover only a small segment compared to the conventional techniques. By means of molten-bath and "cold" or, in other terms, kinetic spraying, only metallic feedstock can be used; therefore both processes are not covered in this chapter. In fact, the focus of this chapter lies on work done in the fields of thermal spraying by means of atmospheric plasma spraying (APS) as well as high velocity oxyfuel spraying (HVOF, see markings in Figure 1 on the following page).
Achieving near net shape coatings

Besides metallic feedstock, used for its electrical and tribological properties as well as for repair purposes, many ceramic materials can be sprayed. The commonly used feedstock can be divided into oxide ceramics and covalently bound materials like carbides and borides embedded in metallic binder phases (so-called cermets, derived from "ceramic metals"). Besides the different hardness of the hard phase and the two-phase nature of cermet coatings, the feedstock itself is manufactured by totally different production routes. For both the molten and crushed oxide ceramics and the usually agglomerated and sintered cermet powders, there is a trend to use finer grain sizes, on the one hand to reach denser and better coating microstructures (Gell, M., et al., 2001; Tilmann et al., 2008a). On the other hand, with fine feedstock powders near net shape coatings can be sprayed, showing a comparably low surface roughness, both allowing a reduction of the costs of finishing operations (Matthäus, G., Wolf, J. & Ackermann, D., 2010; Tilmann et al., 2008b).

Fig. 1. Classification of Thermal Spray processes with regard to the source of energy (after DIN EN 657 "Thermal Spraying")

In the following, the differences in the performance of abrasion and corrosion resistant coatings regarding deposition efficiency, surface roughness, hardness, porosity, wear behavior and corrosion resistance will be discussed with respect to the employed feedstock grain size and the resulting coating microstructure. Several feedstock materials typically used for the named fields of operation (WC-CoCr 86/10/4, Cr3C2-Ni20Cr 75/25 and Cr2O3) were considered for developing near net shape coatings. In contrast to the grain sizes commonly used in thermal spray processes of up to approx. 50 µm, the grain sizes of all examined powders were specified with a maximum of 25 µm (-15+5 µm, -20+5 µm and -25+5 µm). Different types of conventional powder feeders and one specialized powder feeder were investigated regarding their ability of continuous feeding. For the coating experiments, the kerosene-fuelled HVOF gun K2 (GTV GmbH, Luckenbach, Germany) was used to apply the carbide-based feedstock materials (WC-CoCr and Cr3C2-NiCr), whereas the conventional APS gun F4 (Sulzer Metco AG, Wohlen, Switzerland) was used to apply the Cr2O3 coatings. Compared to coatings sprayed using conventionally fractionated feedstock, the coatings based on fine feedstock showed better results concerning their key characteristics.

Comparison of microstructures, phase contents and deposition rates

The micrographs on the following page show the microstructures obtained by spraying fine feedstock with particle sizes < 25 µm (left side) and conventionally fractionated feedstock (-45+5 µm in case of chromia and -45+25 µm for the cermets, right-hand side).
The parameter settings for the spraying experiments were investigated using methods of designed experiments (for a detailed discussion see chapter 3). After conducting tests regarding continuous feeding of the different feedstock powders, preliminary test series were conducted to evaluate the effects of the main process parameters: the feedstock grain size, the amperage in case of APS and the air-fuel ratio in case of HVOF, the spraying distance and the powder feed rate. The results of these experiments were evaluated regarding the coating criteria named at the beginning. For finding optimal parameter sets, the economically relevant criteria deposition efficiency and surface roughness were given the highest priority, together with reaching a sufficiently high indentation hardness at the same time.

The microstructures obtained with the optimum parameter sets for the fine-grained feedstock, compared to coatings sprayed with conventionally fractionated feedstock, are shown in Figure 2. The metallographic cross sections of the coatings showed that the porosity of the coatings can be decreased by processing fine powders. Measurements by means of image analysis revealed that the porosity of the near net shape coatings is only approximately one quarter to one third of that of the conventional coating systems, reaching values of 0.1 % in case of the WC-CoCr coating. At the same time, the roughness of the top layers, described by the profile parameters roughness average (Ra) and roughness height (Rz), is also considerably lower. For all fine feedstock powders, Ra values in the range of 2.5 to 2.7 ± 0.1 µm of the as-sprayed coatings could be reached, whereas for the usually applied powders the values were significantly higher, with 4.5 ± 0.3 µm in case of chromia and 6.7 ± 0.4 µm for the cermet coatings. Furthermore, the uniformity of the coatings is significantly better when spraying the fine feedstock, supporting the goal of applying near net shape coatings. But these gains are accompanied by considerably lower deposition rates, caused by lower mass throughputs and by the difficult heat transfer to the relatively high-melting NiCr matrix in case of HVOF spraying of the Cr3C2-NiCr feedstock. On the other hand, this disadvantage can be offset by aiming at coatings of lower thickness, resulting in comparable spraying times for both the fine and the coarse fractionated feedstock.

Then again, when spraying the finer powders there is also a higher risk of overheating the small spray particles. In particular, the composition of the carbide-based coatings can be changed by decarburization and oxidation effects. The examination of the metallographic cross sections under this aspect showed that especially the coatings based on fine Cr3C2-NiCr powder exhibited strong oxidation (see the dark-gray phases in Figure 2b, left-hand side). In order to obtain more information about these phase changes, the carbide-based samples were analyzed by X-ray diffraction. The obtained X-ray diffraction patterns are shown in Figure 3.
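As an illustration of the image-analysis porosity measurement mentioned above, the following minimal sketch estimates porosity as the dark-pixel area fraction of a cross-section micrograph; the synthetic image and the grey-value threshold are assumptions for demonstration.

```python
# A minimal sketch of porosity determination by image analysis: pores appear
# dark in the metallographic cross section, so porosity is estimated as the
# area fraction of pixels below a grey-value threshold. The image is synthetic.
import numpy as np

def porosity_percent(gray_image, threshold=60):
    """Area fraction (in %) of pixels darker than the threshold (8-bit scale)."""
    pores = gray_image < threshold
    return 100.0 * pores.sum() / pores.size

# Synthetic 8-bit micrograph: bright coating with roughly 0.1 % dark pores
rng = np.random.default_rng(1)
img = np.full((500, 500), 200, dtype=np.uint8)
pore_idx = rng.integers(0, img.size, size=250)  # 250 of 250000 pixels
img.flat[pore_idx] = 20
print(f"porosity = {porosity_percent(img):.2f} %")  # ~ 0.1 %
```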
The pattern of the Cr3C2-NiCr sample sprayed with the -15+5 µm feedstock (see the lower pattern in Fig. 3a) shows noticeable Cr2O3 peaks, indicating that a strong oxidation of the spray particles took place during the spray process. Furthermore, decarburization effects were also stronger when using the fine powder. In the sample sprayed with the standard feedstock, the dominating carbide phase was Cr3C2, whereas the other coating was dominated by the lower carbide phase Cr23C6. For the WC-CoCr samples, the effect of decarburization was examined by determining the intensity ratio of the strongest WC peak in relation to the W2C peak (I_W2C(100)/I_WC(100), see Fig. 3b). Values of 0.22 in case of the fine and 0.17 for the coarser fractionated feedstock were obtained, indicating a somewhat stronger decarburization when spraying the fine feedstock. Compared to the Cr3C2-NiCr samples, these phase changes were quite small.

Indentation hardness

One characteristic criterion determining the wear resistance of thermally sprayed coatings is the hardness, which is usually measured by indentation techniques. The Vickers hardness indentation test is well established both in the quality management of job shops and in the characterization of coatings reported in the literature. Another technique is superficial Rockwell hardness testing, by means of which the coatings can be analysed without metallographic preparation. To investigate the suitability of both methods and the influences on the measurement results, a cause-and-effect diagram was established for the indentation testing of thermal spray coatings (see Figure 4). The goal of this work was the reduction of the variability of the measuring results to enhance comparability.

Fig. 4. Cause-and-effect diagram of the indentation hardness measurement of coatings

A large number of predominantly oxide ceramic and cermet coating systems were investigated concerning the different sources of variation depicted in Figure 4. In the following, especially the influence of the microstructure, the loading force, the employed type of hardness tester and the necessary number of measurement repetitions on one sample are discussed with regard to increasing the repeatability. The mean values were derived from 10 measurements for each sample and measurement technique. The results of the measurements of the different experimental series were investigated regarding their distribution and the appearance of outliers using the span of the standard deviation and Grubbs' test. In most cases the values are not normally distributed, but hardly any outliers can be detected. Therefore the goal chosen was to reduce the standard deviation of the measurements, as the repeatability between different operators, hardness testing devices etc. is expected to increase with decreasing standard deviation. For first evidence, the standard deviations of the first 5 measurements and of 10 measurements were compared to get information about the necessary number of measurements to obtain robust results.
The results of the two different types of Rockwell hardness testers (one manual Wilson device and a digital STRUERS DuraJet with closed-loop control of the applied force) do not differ very much. The standard deviation is lower than approximately 4 % of the mean measured value and is often higher when calculated from ten values instead of the first five. This might be due to influences of the microstructure on the results, like unmelted particles in the case of chromia and the bimodal hardness distribution of the cermet-type coatings. Furthermore, the derived mean indentation hardness value is comparable for both the fine and the coarse fractionated feedstock. In case of the Vickers testing the same effect was established. As the Vickers measurements were performed by a less experienced operator, the tests were repeated by another, more experienced person. For the sample C1, a significantly lower value of 1124 HV0.3 with comparable standard deviation values of 68 and 79, respectively, was derived. In a further series on the same sample, the standard deviation could be reduced significantly to 27 for both 5 and 10 measurements by excluding non-uniformly shaped indentation pits showing different lengths of the two diagonals. The mean value of 1152 HV0.3 seems to be the most reliable one. When increasing the loading force to 0.5 kp, the standard deviation is comparably low with 24 to 27, but the calculated mean value of 1327 HV0.5 is considerably higher than all values derived with 0.3 kp loading force. Nevertheless, the comparison with other testing series showing less dense microstructures led to the conclusion that the standard deviation of Vickers measurements is lowest with 0.3 kp loading force. When applying 0.1 kp, the indentation pits are too small to be analysed correctly, and the standard deviation rises again. With the higher force of 0.5 kp, increased cracking occurs due to the non-optimal cohesion of the coatings. Therefore the best solution is to choose 0.3 kp loading force to obtain results of high reproducibility.

To investigate the necessary number of measurement repetitions in correlation to the porosity, which weakens the cohesion of the coatings, samples with extraordinarily high and relatively low porosity were measured 50 times with all techniques. The relative uncertainty of the derived mean value is plotted over the number of repetitions (see Figure 5). It is calculated as follows:

Calculation of standard deviation: $s = \sqrt{\tfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$ (1)

Calculation of mean standard deviation: $s_{\mathrm{mean}} = s/\sqrt{n}$ (2)

Calculation of relative uncertainty: $u_{\mathrm{rel}} = s_{\mathrm{mean}}/\bar{x}$ (3)

As expected, the relative uncertainty of the derived mean hardness value is significantly higher for the samples with high porosity compared to the denser coatings. The values tend to remain static when more than approximately 20 repetitions are made, whereas this plateau is reached after about half that number of measurements when testing the denser coatings. Furthermore, the same degree of certainty cannot be reached when testing coatings of high porosity. In further work this tool will be developed into a measuring concept to classify the reliability of indentation hardness testing of thermally sprayed coatings.
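As a quick illustration of Eqs. (1)-(3), the following sketch computes the running relative uncertainty of the mean for a series of hardness readings; the readings themselves are invented for demonstration.

```python
# A minimal sketch of Eqs. (1)-(3): running relative uncertainty of the mean
# hardness value as a function of the number of measurement repetitions n.
import numpy as np

readings = np.array([1152, 1139, 1170, 1148, 1161, 1124, 1157, 1143,
                     1166, 1131, 1150, 1158, 1145, 1162, 1137, 1154,
                     1149, 1168, 1141, 1156])  # HV0.3, hypothetical values

for n in range(5, len(readings) + 1, 5):
    x = readings[:n]
    s = x.std(ddof=1)              # Eq. (1): sample standard deviation
    s_mean = s / np.sqrt(n)        # Eq. (2): standard deviation of the mean
    u_rel = s_mean / x.mean()      # Eq. (3): relative uncertainty
    print(f"n = {n:2d}: mean = {x.mean():7.1f} HV0.3, u_rel = {100 * u_rel:.2f} %")
```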
Corrosion and wear behaviour

The corrosion resistance of the coatings was determined with salt spray tests according to DIN EN ISO standard 9227. For this purpose, mild and stainless steel substrates were coated and exposed for 240 hours to a corroding atmosphere produced by spraying a sodium chloride solution. The appearance of corrosion products was evaluated every 24 hours. In addition, the samples were weighed before and after the test period to determine the mass increase caused by the formation of corrosion products. During the testing period the mass of the samples increased because of the formation of corrosion products (Table 2). In the case of the carbide-based coatings, the use of the finely fractionated feedstock led to a considerable improvement in terms of corrosion resistance; the samples sprayed with fine powders showed significantly smaller mass increases than the standard fractionated samples. The Cr2O3 coatings showed quite the contrary behaviour. This is due to the fact that the chromia coatings received no sealing treatment, so that the salt medium could reach the substrate through the thin coating more easily compared to the thicker conventional sample. The coatings on stainless steel substrates showed the same behaviour as the coatings on mild steel substrates, but of course the actual values were lower due to the higher corrosion resistance of stainless steel.

The wear resistance of the coatings was evaluated by ball-on-disk wear tests according to ASTM standard G99. The ball-on-disk test is a model test for determining friction and wear of two solid surfaces in sliding contact (ball against coated disk). A sintered WC-6Co ball (10 mm in diameter), fixed in a stationary ball holder, was pressed against the coated and polished sample disk (105 mm in diameter) with a normal load of 40 N. The disk rotated for 2500 cycles at a linear speed of 0.1 m/s. After the experiments, the wear track was examined by microscopic analysis in order to determine the wear volume loss. The results of the wear tests after 2500 cycles differed for each spray feedstock material. The Cr2O3 coatings, regardless of whether they were based on fine or standard powder fractions, showed almost no volume loss. According to optical micrographs of the wear scars, a tribofilm was formed consisting of plastically deformed debris and splats. This tribofilm was smoother than the original surface and lay slightly above the mean line of the unworn surface, protecting the surface from further wear (see Figure 6a).

Conclusion

Fine Cr2O3, Cr3C2-NiCr, and WC-CoCr feedstock with grain sizes below 25 µm was processed in order to investigate the spraying of near net shape coatings. The characteristics of the coatings based on fine powders were analysed and compared to standard coatings based on -45+5/20 µm powder fractions. Compared to standard coatings, it was possible to significantly improve the key coating characteristics porosity, surface roughness and corrosion resistance. Other coating properties like hardness or wear resistance showed behaviour comparable to that of the standard samples. In case of spraying cermet feedstock, especially Cr3C2-NiCr, optimized parameter sets are necessary to control decarburization and oxidation.
Design and optimization

Thermal Spraying is an indirect process in which only the basic conditions can be controlled by altering the process parameters. A deterministic control of the transfer of heat and kinetic energy to the feedstock particles is not possible. Due to the vast variety of process parameters, sometimes said to number more than one hundred (Lugscheider & Bach, 2002), sophisticated approaches of designed experiments are a good tool both to understand the complex interdependencies between the parameters and to optimize coating properties according to the demands. In the following, the basic considerations and the proof of suitability of statistical design of experiments for controlling and optimizing thermal spraying processes are given.

Basic considerations

The goal of conducting experiments is to obtain information about the functional relation between the process conditions and the resulting coating properties, which determine both the economical effectiveness of the coating process and the coating's behaviour under operational conditions. For example, the deposition efficiency (DE) of the feedstock material in plasma spraying of oxide ceramics, i.e. the percentage of the employed feedstock contributing to the coating buildup, depends on the chosen feed rate as well as on the achievable heat transfer from the plasma to the feedstock particles. Therefore it can be assumed that there is a functional correlation between the powder feed rate, the amperage applied to the plasma, and the chosen plasma and secondary gas mixture (species, total flow and ratio), which control the specific heat and therefore the heat-transfer capacity of the plasma. Two further parameters defined by the employed feedstock are its heat of fusion and its median grain size, as the heat is transferred from the particle surface into its volume. The spraying distance is the parameter controlling the time of flight of the particles in the plasma and therefore the time of exposure to heat, but there is a strong interdependency with the applied amperage: the higher the amperage, the higher the temperature and heat capacity of the plasma, but also its velocity, so the time of flight of the particles decreases with rising amperage. Altogether, the functional dependency of the DE can be stated as DE = f(amperage, plasma gases, spraying distance, particle size, …) or, in generic terms, y = f(x1, x2, …, xn). One approach to derive information about the correlation of the coating criterion DE with the parameters is to vary the process parameters one by one in every single spraying experiment, which entails two complications: the number of experiments is large, and the interdependencies between distinct parameters cannot be estimated. Therefore the use of statistically designed experiments is a good alternative, as both goals can be realized utilizing these tools (for an example see Heimann, 2008). The experiments are arranged in matrices with a deterministic alteration of the factors (i.e. the parameters to be investigated) on distinct levels. Afterwards, the coating criteria are measured and the results are analysed regarding the factorial effects, i.e. the correlation with the parameters.
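To make the contrast with one-by-one parameter variation concrete, the following minimal sketch arranges a two-level full-factorial design and fits a linear main-effects model for DE; the factor names, levels and DE responses are hypothetical and only illustrate the bookkeeping.

```python
# A minimal sketch of a two-level full-factorial screening design for
# DE = f(amperage, gas flow, spray distance, feed rate). All values invented.
import itertools
import numpy as np

factors = {
    "amperage_A":      (400, 500),
    "gas_flow_slpm":   (40, 60),
    "distance_mm":     (90, 130),
    "feed_rate_g_min": (20, 40),
}
design = list(itertools.product(*factors.values()))  # 2^4 = 16 runs

# DE in % measured for each run, entered after the spraying experiments
de = np.array([52, 48, 44, 41, 57, 54, 49, 46,
               50, 47, 42, 40, 55, 52, 47, 44], dtype=float)

# Code the levels as -1/+1 and fit a linear main-effects model by least squares
X = np.array([[-1 if v == lo else 1 for v, (lo, hi) in zip(run, factors.values())]
              for run in design], dtype=float)
X = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X, de, rcond=None)
for name, effect in zip(["intercept"] + list(factors), coef):
    print(f"{name:>16s}: {effect:+.2f}")
```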
The ways to obtain the correlation can be divided into factorial analysis by means of multiple regression on the one hand, and analysis of the variance of the measured results according to the variation of the process parameters (ANOVA) on the other. The usability of the second approach in the field of thermal spraying is shown in the following, using the examples of the experimental series described in chapter 2. For a comprehensive overview of the methods, including model testing etc., see (Dean & Voss, 1999; Mason, 2003; National Institute of Standards and Technology [NIST], 2011).

Robust quality control based on ANOVA techniques

The variability of thermal spray processes regarding coating characteristics and quality is a well-known problem in application. In the design and development of feedstock and coating systems, designed experiments are sophisticated tools to achieve sufficient coating qualities within specified tolerance regions. Besides gathering the relevant know-how regarding the spraying of a certain feedstock etc., the processes must be insensitive to deviations over longer periods of time to reach this goal. For example, in Figure 7 a quadratic functional correlation between the deposition efficiency of the feedstock and the applied amperage in the APS process is assumed. The basic tools of the method are the so-called orthogonal arrays. Like the conventional matrices of factorial experimental designs, the levels of the parameters to be investigated are arranged according to given plans. But unlike classical DoE methods, the functional correlation between the factors and the measured results is expressed in terms of a signal-to-noise ratio. The goal of the method is not to optimize one response regardless of other coating criteria, but to achieve results that are robust against the effect of noise factors, e.g. the wear of parts like the electrodes of the plasma gun. The signal factors also show effects on the results but are normally kept constant, e.g. the traverse speed of the gun relative to the substrate. In the following, the results of applying the method are discussed.

Applying orthogonal arrays for optimizing coatings

Taguchi techniques were utilized in order to reduce the number of experiments and to evaluate and adjust the main process variables. The effectiveness of these techniques could be verified by successfully spraying validation samples. A Taguchi experimental design was used to reduce the number of coating experiments. Four main process variables, or factors, among them the spray distance and the powder carrier gas flow, were identified and varied on three levels in an L9 orthogonal array. This matrix dictated the combination of levels at which the factors should be set for each experiment (Table 3: matrix used for the Taguchi experimental designs). Furthermore, the process output variables, or responses, to be optimized were defined. For near net shape coatings, a surface roughness as low as possible is requested. So the aim of the experiments was to obtain a set of spray parameters for each material which allows the spraying of coatings with low surface roughness under consideration of cost-effective deposition rates. It was also attempted to improve coating properties like hardness and porosity. The results were analysed by means of ANOVA to determine the relative contributions of the various main factors and the interactions among them. This allowed the prediction of an optimal parameter set for each investigated spraying feedstock; a sketch of the procedure is given below.
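```python
# A minimal sketch of the L9(3^4) orthogonal array and a "smaller-the-better"
# signal-to-noise evaluation for the surface roughness Ra. The L9 layout is
# the standard Taguchi array; the Ra responses and factor order are invented.
import numpy as np

# L9(3^4): four factors on three levels (1, 2, 3) in nine runs
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

ra = np.array([3.1, 2.8, 3.4, 2.6, 2.9, 3.0, 2.7, 3.3, 2.5])  # µm, hypothetical

# Smaller-the-better S/N ratio per run: S/N = -10 log10(mean(y^2))
sn = -10 * np.log10(ra ** 2)

# Mean S/N per factor level; the level with the highest S/N is preferred
for f in range(L9.shape[1]):
    means = [sn[L9[:, f] == level].mean() for level in (1, 2, 3)]
    print(f"factor {f + 1}: S/N per level = {np.round(means, 2)}")
```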
For validation, samples were coated using the predicted optimum spray parameters shown in Table 4; the results are shown in Table 5. The predicted and the actually measured values proved to be quite consistent. It can be concluded that the validation experiments confirmed and reproduced the predicted values. Comparing validation and standard samples, the latter showed higher deposition rates. Of course, this has to be ascribed mainly to the fact that coarser spraying feedstock was used to spray the standard samples. The validation samples showed significantly lower surface roughness. Especially the carbide-based coatings showed low Ra values (about 2.7 µm) compared to the Ra values of the standard samples (near 7 µm). The hardness of the coatings did not vary much, regardless of which powder fraction was processed. It can be summarized that, by applying the method of signal-to-noise ratios derived from the evaluation of orthogonal arrays, the work could be reduced to nine experiments while investigating the effects of four quantitative parameters on three levels. The results show that, by applying this technique, reproducible forecasts regarding the optimization of thermal spray coatings can be derived.

Developments of new applications

As stated at the beginning, the goal is to make use of finer feedstock grain sizes to reach denser coatings showing higher cohesion and adherence to the substrate. The lower limit for feeding powders into the process is in the one-digit micrometer range. By dispersing the feedstock in a liquid outer phase, or by formulating the feedstock directly in a suspension by chemical methods, use can be made of nanometer-sized feedstock. In the following, these efforts are illustrated by new results regarding coatings which could not be realized by means of thermal spraying before.

Suspension plasma spraying of triboactive coatings

Up to now, no coating systems are marketable in the field of metal forming, such as the direct hot extrusion process, which provide both surface protection of the parts in contact with the billet (i.e. container and die) and a significant reduction of the frictional losses induced by the billet passing along the container walls. To dispense with the use of lubricants and to enhance the usable forming capacity of the process, different oxide ceramics were combined in one suspension and plasma sprayed. The aim is to reach a mixing of the feedstock to obtain deterministic solid solutions of the oxide phases which show a reduction of their coefficient of friction under dry sliding conditions. To reach this goal, the high surface-to-volume ratio of feedstock with primary particle sizes below 100 nm was used. By means of X-ray diffraction it could be proven that the desired phases could be synthesized. The coatings showed a considerable lowering of their friction coefficient in tribological testing against steel 100Cr6 in the range of operating temperatures for the hot extrusion of aluminium alloys. Besides the experimental work, the fundamentals of the mixing process of different oxides regarding crystallographic aspects are discussed.
Thermally sprayed coatings are not commonly used in the field of massive forming due to the high demands concerning the cohesion and adhesion of tool coatings. The cause is adhesive wear induced by the elevated operating temperatures and high relative velocities between the work piece and the tooling, resulting in high tensile and shear stresses. Nevertheless, there is the challenge to establish coatings that reduce both the wear of tools and the frictional losses in the processes. For example, in the case of direct hot extrusion, up to 60% of the forming force has to be applied to counterbalance frictional losses. To counteract these losses, different lubricants and material-separating agents are used, but with the disadvantages of a higher degree of reworking of the semifinished extruded product and a limited thermal stability of the substances. To overcome these disadvantages, the usability of specific titania-based oxide ceramic phases was tested, which show a reduction of their friction coefficient under tribological operation at elevated temperatures. The desired phases should be synthesized in the suspension plasma spraying process by mixing different oxide feedstock with titania in one suspension.

Crystallographic aspects

In the system titanium-oxygen, different non-stoichiometric phases are known which show the ability to deform under mechanical stress due to a shearing of crystal lattice planes. These phases show a reduction of the friction coefficient under dry sliding conditions at elevated temperatures of several hundred degrees Celsius. The beneficial effect was linked to the temperature-induced shearing processes (Gardos, 1988); the fundamental mechanism of the shearing processes is discussed elsewhere (Anderson, S. and Tilley, R. J. D., 1972). As these phases are expected not to be thermodynamically stable (for a discussion of redistribution effects of titanium and oxygen see Wood, G. J. et al., 1982), another approach was pursued in this work. By addition of a second cation besides Ti4+, phases can be obtained which are homologous to the non-stoichiometric titanium oxides. These so-called Andersson phases were first described for the system Ti-Cr-O (Andersson, S., Sundholm, A. & Magnéli, A., 1959), showing a composition of Ti(n-2)Cr2O(2n-1). As chromium exhibits a high vapour pressure with rising temperature and therefore may tend to evaporate out of the lattice, the homovalent substitution of the Ti4+ cation in the rutile base lattice was aspired to. Several cations were chosen based on the rules for substitution processes stated by V. M. Goldschmidt (Goldschmidt, V. M., 1926): besides Cr3+, primarily Ni3+, Co3+ and Zr4+, considering the ionic radii and coordination given in (Shannon, R. D., 1976). The goal is to reach phases with a composition similar to the Andersson-type phases on the one hand and a sufficient stability in temperature ranges up to 800 °C on the other, as commonly used for the hot extrusion of aluminum and copper based alloys.
The assumption that the applicability of substitution processes may lead to the formation of solid solutions of the desired stoichiometry can be checked by means of the Inorganic Crystal Structure Database. In Figure 9, for example, the structures of cubic Co(II) oxide and of tetragonal rutile (i.e. Ti(IV) oxide) are shown on top, where oxygen is represented by the larger balls. From the structures it can be inferred that both cations have similar radii, which is, besides the valence and the coordination by the surrounding ions, the key requirement for the dissolution of the oxides. When both oxides are mixed, a structure of lower symmetry (orthorhombic) is formed with a composition of Co2Ti4O10. The difference compared to the aspired composition of Co2Ti4O11 for n = 6 is due to the fact that the divalent cobalt is incorporated in the structure instead of the trivalent ion. Like most structures that are crystallographically possible solid solutions of rutile with the named oxides, the cobalt titanium oxide with trivalent Co ions has not been refined yet. Without the feasibility to refine the structures, a full quantitative Rietveld analysis of the sprayed coatings by means of X-ray diffraction is not possible.

Phase analysis

Three different mixtures of titania (rutile) with the named oxides of trivalent cobalt, nickel and chromium were sprayed on structural steel S235JR. X-ray diffraction analyses were performed on the coating systems using copper radiation; the diffraction patterns are plotted in Figure 10 with an offset of 500 counts between the samples. The patterns were checked regarding the presence of unmelted or recrystallized feedstock, the possible solid solutions, as well as reduced oxides and reaction products of the feedstock with the flux melting agent. Because of the marginal coating thickness of some tens of micrometres, the influence of the substrate is recorded in the patterns. As the reference intensity ratio (RIR) of ferrite is considerably higher than that of the other phases present in the coatings, its peaks are of highest intensity (see the peaks at approximately 45 and 75° 2θ). Since no structure data are available for the aspired solid solutions, the reference intensity ratios stated in the ICDD PDF4 database entries were used to perform a semi-quantitative analysis. As no RIRs are given for the solid solutions in the powder diffraction files, values of phases with nearly identical stoichiometry were assumed. The fractions of ferrite were deducted, and the adjusted phase contents of the coatings are given in Table 6 (phase contents of the three coating systems). In the case of the Ni- and Co-containing coatings, significant amounts of Ti(IV) oxides were measured, of which approximately one third is anatase. As stated in (Bolelli, G., et al., 2009), in case of rutile feedstock the anatase content, especially in suspension sprayed coatings, can be explained by slow cooling due to re-solidification of molten droplets in the process, compared to the formation of rutile by rapid quenching on the substrate. Considering this explanation, another assumption might be the influence of elevated substrate temperatures in the SPS process, leading to a slower cooling of molten titania particles after impinging on the substrate. To distinguish both possible mechanisms, further investigations will be conducted considering the thermodynamics of the phase changes of both titania species. In the case that the anatase content correlates well with the content of re-solidified particles in
the coating, the anatase-to-rutile ratio can be used to optimize the injection and spraying parameters.

For the coatings containing nickel, about 7% of Ni(II) oxide was found, whereas in the titania-cobalt-oxide system no remains of the Co feedstock were detected. The employed trivalent oxides of both cations decompose towards the divalent oxide at temperatures above approximately 600 °C in the case of the Ni oxide and 1910 °C for the Co2O3. On the other hand, the content of borates formed by reactions of the boron oxide with the feedstock oxides is three times higher for the Co-based system compared to the titania-nickel-oxide coating. As the absolute value of the enthalpy of formation of the cobalt borate is higher than that of the Ni borate (Hawk, D. and Müller, F.; 1980; Paul, A., 1975), the Co oxide feedstock is dissolved in the boron oxide to a much higher extent compared to the Ni-containing system, and no remaining Co2O3 is embedded in the coating. In contrast, the contents of Ni borates are small in the titania-Ni-oxide coating, and remains of the Ni(II) oxide are recorded. The phase contents of the aspired solid solutions are below 30% for both coating systems.

Compared to the Ni- and Co-containing coatings, the mixing of titania with chromia leads to different phase compositions. Due to the marginal miscibility of chromia with boron oxide (Tombs, N. C.; Croft, W. J. & Mattraw, H. C.; 1963), no borates and only small amounts of the feedstock powders are found. The Andersson phases with the mentioned stoichiometry of Ti(n-2)Cr2O(2n-1) amount to three quarters of the total coating composition. Therefore it can be concluded that the degree of mixing of the feedstock is significantly higher for the titania-chromia system. Whether the melted boron oxide phase supports the mixing process of the two oxide ceramics without further reaction cannot be clarified. Possibly the heat of the process is better transferred to the coarser feedstock of approximately 100 nm median crystallite size, compared to the 30 to 60 nm of the feedstock of the Ni- and Co-containing coatings. As the heat transfer decreases drastically when the agglomerate size of the feedstock particles falls below a critical limit (the so-called Knudsen effect, Fauchais, P. et al., 2008), this might be a plausible explanation for the higher degree of feedstock mixing in the case of the titania-chromia system.

In addition, significant amounts of chromium of approximately 10% are present in the coatings, formed by reduction of the chromia feedstock. This effect is only detected when spraying the suspension with the Triplex-II and not when using the DELTA-Gun, and furthermore only when titania is present in the suspension besides chromia. This result is probably due to the large gap between the absolute values of the Gibbs free energy of the two oxides; hence the chromia is reduced in the presence of titania. By means of visible spectroscopy, protons were found, supposedly originating from the vaporization of the water of the suspension, but no oxygen ions were detected. Together with the laminar flow of the plasma jet of the Triplex, resulting in marginal entrainment of surrounding air, apparently the conditions are given for the reduction of the chromia towards chromium.
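The semi-quantitative RIR evaluation described above reduces to scaling each phase's strongest-peak intensity by its reference intensity ratio and normalizing; a minimal sketch with invented intensities and RIR values follows.

```python
# A minimal sketch of the reference-intensity-ratio (RIR) method for
# semi-quantitative phase analysis: w_i is proportional to I_i / RIR_i,
# normalized over all phases. Intensities and RIR values are invented.
def rir_phase_fractions(intensities, rirs):
    """Return weight fractions from strongest-peak intensities and RIR values."""
    scaled = {phase: i / rirs[phase] for phase, i in intensities.items()}
    total = sum(scaled.values())
    return {phase: s / total for phase, s in scaled.items()}

# Hypothetical strongest-peak intensities (counts) and RIRs for one coating
intensities = {"rutile": 1200, "anatase": 400, "solid_solution": 900, "NiO": 250}
rirs        = {"rutile": 3.4,  "anatase": 5.0, "solid_solution": 3.0, "NiO": 4.3}

for phase, w in rir_phase_fractions(intensities, rirs).items():
    print(f"{phase:>15s}: {100 * w:.1f} wt.-%")
```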
Tribological testing of Andersson-type coatings

Since the content of solid solutions of titania with another oxide was highest for the titania-chromia system, tribological tests recording the coefficient of friction as a function of the operating temperature were conducted on coatings of these Andersson-type phases using a ball-on-disk configuration. The coatings rotated against a ball of 100Cr6 (1.3505, 5 mm in diameter) at 0.1 m/s; the loading force was 5 N. The coefficient of friction was recorded in three runs on different samples at room temperature, 600 °C and 800 °C (see Figure 11). The friction pairing shows a COF of more than 0.6 when running at room temperature. When raising the temperature to 600 °C, the ratio of frictional force to loading force drops considerably to below 0.1. On the one hand this effect is surely due to the softening of the ball (see the debris of the ball on the coating in the second picture from the top on the left-hand side of Figure 12), but this effect is desired, as the billet in the extrusion process shows a comparable behavior. For this reason, the testing of the coatings in tribometer experiments is not directly comparable to the hot extrusion process, as the soft consistency of the flowing billet above yield stress cannot be reproduced: the ball would be abraded promptly and its holder would scratch the coating. But compared to the values of more than 0.2 given for operating unlubricated containers (Bauser, M.; Sauer, G. and Siegert, K., 2006), a significantly lower frictional force was measured. Besides its tribological activity, the coating shows good material-separating properties against 100Cr6. When raising the temperature to 800 °C, the COF rises again to values of nearly 0.2. An explanation is the formation of black iron oxide (presumably magnetite) instead of the red oxide (presumably haematite), showing higher hardness and unfavourable tribological properties (Barbezat, G., 2006).

Conclusion

By means of X-ray diffraction analysis it could be proven that the mixing of titania and other oxide feedstock in the SPS process could be realized. For example, the achieved Andersson-type coating system, sprayed from titania- and chromia-containing suspensions, showed a temperature-induced lowering of its coefficient of friction when rotated against 100Cr6. Further experiments will be conducted to better understand the parameters controlling the mixing process of the feedstock on the one hand, and regarding tribological experiments using aluminium- and copper-based extrusion alloys on the other.

Comparison of multielectrode plasma guns for development of new coatings

When high throughput is intended, three-cathode guns are a plausible solution. Due to their stationary plasma jet and elevated power characteristics, higher feed rates concurrent with sufficient deposition efficiencies can be realized compared to one-cathode plasma guns. In contrast to those well-known systems, a newly marketable system makes use of three anodes to combine high power input into the plasma with stable process conditions. Besides a narrower nozzle outlet diameter compared to multi-cathode designs, hydrogen can be used as secondary plasma gas, both resulting in higher plasma velocities and net powers. The conceptual designs of the two guns are discussed, as well as their suitability for suspension and shrouded plasma spraying. The efforts in achieving new plasma sprayed coating systems are presented.
Design of marketable multielectrode plasma guns

To overcome the disadvantages of conventional plasma guns, especially regarding the discontinuity of the free jet due to plasma arc root rotation, multielectrode guns were developed. For more than ten years, guns based on the three-cathode design have guaranteed high plasma net powers combined with stable feedstock injection conditions. Until now these guns have had two disadvantages: the use of expensive helium as secondary gas, accompanied by low plasma arc voltages, on the one hand, and the restriction of the minimal nozzle outlet diameter on the other. For example, three single plasma fingers originate from the single cathodes and are passed through a cascaded neutrode in the second generation of the Triplex design (Sulzer Metco AG, Wohlen/Switzerland). Hence a minimal nozzle outlet diameter of the anode of 9 mm can be realized because of the thermal design of the gun. Another approach is the inverted design of a plasmatron, where one arc originates from a single cathode and is divided among three anodes after passing the cascade. Therefore, for the DELTA-Gun (GTV GmbH, Luckenbach/Germany) a minimal nozzle outlet diameter of 7 mm can be achieved, resulting in higher plasma velocities at the nozzle outlet. Furthermore, hydrogen can be used as secondary gas, and high gross plasma powers of 80 kW can be applied to the torch.

Experimental

The work concentrated on investigating to what extent both gun concepts are appropriate for inert and reactive shrouded plasma spraying as well as for the processing of nanoscaled suspensions. Feedstock not commonly applied in plasma spraying was used to identify the potential of plasma spraying for possibly new applications. For demonstration purposes, coating systems of titanium and chromium as well as their nitrides, and of electrically conductive indium tin oxide (ITO), were chosen. For the chromium coatings, feedstock obtained from GTV GmbH with two different particle size distributions (-25+5 µm and -45+5 µm) was investigated. As titanium feedstock, a powder of -45+10 µm came into operation, which is manufactured and distributed by TLS Technik Spezialpulver GmbH (Bitterfeld/Germany). For SPS, suspensions containing 5 wt.-% ITO (ANM PH 15695, Evonik Degussa GmbH, Marl/Germany) and Al2O3 (Saint Gobain, Weilerswist/Germany), with primary crystallite sizes of some tens of nanometers for the former and approximately 150 nm for the latter, were used.

Shrouded plasma spraying

For both guns, modules have been designed and machined to apply shroud gases around the free plasma jet (for details see Figure 14 on the following page). The attachments consist of water-cooled bodies in which the shroud gas is injected helically to ensure a sufficient shielding against the surrounding air after exiting the shroud. The feedstock injection is realized via middle sections between the exit of the gun nozzle and the shroud gas inlet to avoid interference with the shroud gas flow. The body housings of the shrouds are integrated into the cooling circuit of the spraying equipment.
The spraying experiments were conducted applying gross plasma powers of approximately 25 to 50 kW (see Table 7 for spraying parameters). When operating the Triplex, high helium flows of 20 SLPM were used to guarantee a sufficient heat transfer to the feedstock, but for the DELTA-Gun no secondary gas was applied, due to the formation of black soot-like depositions when hydrogen was used. To guarantee an adequate shielding effect in the case of argon shroud gas on the one hand, and an effectual entrainment of nitrogen for reactive spraying of the feedstock on the other, high shroud gas flows of 90 SLPM were applied for spraying with both guns. The obtained coating microstructures are illustrated in Figure 15. The micrographs on the left-hand side show coatings sprayed in an inert atmosphere of argon, the ones on the right-hand side the results of reactive spraying using nitrogen. The coatings of a and b were sprayed using the Triplex gun, whereas for the coatings c to f the DELTA-Gun was used. When spraying in inert atmosphere, the coatings show a uniform and homogeneous microstructure with a level of porosity comparable to conventional plasma sprayed coatings. When nitrogen is supplied, by contrast, coatings with high levels of open cavities and microstructures comparable to metallic sponges are built. This is supposedly due to a turbulent entrainment of the nitrogen shielding gas when the feedstock reacts towards the nitride. To prove the existence of the aspired nitride phases, semi-quantitative measurements by means of energy dispersive X-ray analysis (EDX) were performed on coatings sprayed with the Triplex. The results revealed contents between approximately 13 and 17 at.-% nitrogen. Furthermore, the nitrogen contents of the coatings were investigated using an N/O/H analyser (LECO Instruments Corp., St. Joseph/USA). Unfortunately the nitrogen content of the titanium coatings could not be measured, due to the high melting point of the titanium nitride, but for the chromium coating (section f) the nitrogen content was determined to be approximately 10 at.-%. The existence of the nitride hard phases was also verified by indentation hardness measurements. When spraying the titanium with argon as shroud gas, mean hardness values of 360 HV 0.1 in case of the Triplex and 270 HV 0.1 for the DELTA-Gun were measured. With the employment of nitrogen, in contrast, significantly higher maximum values of more than 1200 and 1000 HV 0.1 were detected. To characterize the adhesion of the coating systems, tensile adhesion tests according to DIN EN 582 on grit-blasted 1.4301 substrates were conducted. Again, slightly better values were achieved for the coatings sprayed with the Triplex-II gun, as mean values of more than 50 MPa were measured for the titanium coatings, compared to approximately 35 MPa for both the titanium and the coarsely grained chromium feedstock when spraying was performed with the DELTA-Gun. This might be due to the problems of injecting the feedstock in the case of the DELTA, as the gun uses a gas flow supporting the cooling of the anodes. Together with the plasma and the shroud gas, the gas throughputs through the shroud module are high, and a proper feedstock injection is not easily achieved. Therefore, further optimization potential exists for shrouded spraying with the DELTA-Gun.
When using the fine fractionated chromium feedstock, on the other hand, the tensile adhesion of the coatings reaches nearly 50 MPa, comparable to the coatings sprayed with the titanium feedstock with the Triplex gun. When spraying the titanium on polished substrates instead of the grit-blasted samples, even higher tensile adhesive strengths of nearly 60 MPa were measured. This unexpected result is probably due to diffusion of the titanium into the austenitic substrate. The effect was not recorded when using ferritic steels. In Figure 16, the backscattered electron micrograph (left-hand side) and an EDX line scan analysis (right-hand side) of the interface section of a titanium coating on 1.4301 steel substrate are shown. The EDX analysis confirms a zone of some micrometers depth into which the titanium has diffused. It can be stated as a remarkable result that, despite the limited heat transfer to the substrate, enough potential is given for the diffusion process. This is due to the high diffusion coefficients of both titanium and chromium in 1.4301 austenitic steel (Kale, G.; 1998). To investigate the alteration of the feedstock in the suspension plasma spraying process, indium tin oxide (assumed composition of 9:1) was suspension plasma sprayed. ITO is used to coat glass with electrically conductive coatings that are transparent in the visible spectrum. The coatings are commonly deposited by sol-gel methods and are used for touchscreen purposes. The goal was to reach thin, optically transparent ITO coatings showing electrical conductance. When the feedstock is overheated, it tends to build coatings with a yellowish color, whereas the coating system shows no conductance when it is not uniformly deposited. To find optimal conditions, the relevant parameters (solid content of feedstock, species of the outer phase, injection conditions, applied amperage and spraying distance) were varied. The melting behavior of the feedstock was tested with wipe tests (see the SEM images at the top of Figure 17). To determine the optical transparency of the coatings, four samples were measured using a VIS spectrometer and the results were compared to uncoated and grit-blasted glass (see the transmission spectra in Figure 18). As light source, the tungsten lamp of the calibration module of a Tecnar DPV-2000 was used, delivering a stable spectrum covering the whole visible range. The coated samples show a high degree of transparency over the whole visible spectrum. For example, in the red range below 700 nm (see marking), the relative intensity measured is at most 1 to 3 counts lower than that of the uncoated glass. This corresponds to a transparency of 95 to 98%. It can be stated that both requirements, the electrical conductance as well as the optical transparency of the coating systems, were fulfilled. These findings show that, by suspension plasma spraying, new coating systems can be realized in fields of operation where up until now coating deposition processes like CVD and PVD are used.
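The transparency estimate described above amounts to dividing the coated spectrum by the uncoated-glass reference; a minimal sketch with invented spectra follows.

```python
# A minimal sketch of the transparency estimate: the coated spectrum is divided
# by the uncoated-glass reference to obtain the relative transmittance in the
# visible range. The spectra below are synthetic stand-ins for measured counts.
import numpy as np

wavelength = np.linspace(400, 750, 100)                       # nm, visible range
i_uncoated = 100 - 0.01 * (wavelength - 550) ** 2 / 100       # counts, reference
i_coated = i_uncoated - np.random.uniform(1, 3, wavelength.size)  # 1-3 counts lower

transmittance = i_coated / i_uncoated
print(f"transparency: {100 * transmittance.min():.1f} "
      f"to {100 * transmittance.max():.1f} %")  # ~ 95-98 %
```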
Summary

With the adaptation of shroud gas modules to the multielectrode plasma guns Triplex-II and DELTA-Gun it could be proven that spraying feedstock susceptible to chemical reactions can be sprayed both in inert atmospheres using argon and reactively by applying nitrogen as shroud gas. The inert conditions led to the formation of coatings showing a homogeneous microstructure comparable to conventionally APS sprayed metallic coatings. In the case of nitrogen, no dense coatings could be achieved, but the presence of nitride phases in the coatings could be proven. Furthermore, by means of suspension plasma spraying, glass was coated with electrically conductive coatings of optical-grade transparency. These efforts show that by means of plasma spraying new coating systems can be achieved.

Fig. 5. Relative uncertainty of indentation hardness values for chromia (a) and Cr3C2-NiCr (b) coatings in relation to the measurement technique, the number of measurement repetitions and the coating porosity given in brackets.
Fig. 6. Optical micrographs of wear scars after 2500 cycles in ball-on-disk tests.
Fig. 7. Assumed correlation between applied amperage and relative deposition efficiency. The sketch shows that the same magnitude of deviation of the applied amperage from the chosen control value results in two different deviation spans of the resulting DE (R_y1 and R_y2). Another point is the influence of noise factors, which can also disturb the known relation between process parameters and the expected result. Following the approach after G. Taguchi, the effects of the control factors (i.e. process parameters) are extended by the effects of noise and signal factors (see Figure 8).
Fig. 8. Scheme of the effects of control, noise and signal factors on the coating process (after Phadke, 1989).
Fig. 9. Structures of Co and Ti oxide (top) and of the "mixed" solid-solution oxide (bottom).
Fig. 10. Diffraction patterns of three suspension plasma sprayed coating systems.
Fig. 12. Top views of the wear scar tracks (left) and corresponding friction surfaces of the counterparts, from room temperature (top) to 800 °C (bottom).
Fig. 15. Micrographs of shrouded plasma sprayed titanium (fields a-d) and chromium (fields e and f) feedstock using argon (left-hand side) and nitrogen (right-hand side) as shroud gas.
Fig. 16. BSE image of the interface of a titanium coating on 1.4301 austenitic steel (left) and corresponding EDX line scan (right-hand side).
Fig. 17. SEM images of wipe tests of suspension plasma sprayed ITO feedstock and top views of ITO coatings (Triplex-II on the left, DELTA-Gun on the right-hand side). With optimized parameter sets, the coatings were sprayed on slides of borosilicate glass with both plasma guns. The coatings were uniformly deposited (see the top-view SEM images in Figure 17), showing homogeneous structures. The coatings were measured by a project partner regarding their thickness and electrical conductance. It could be proven that coatings with a thickness of approximately 400 nm and a sheet resistance of 850 Ω could be achieved.
Fig. 18. Transmission spectra of four ITO coated glass slides compared to uncoated and grit-blasted glass.
Table 2. Results of the corrosion tests: mass increase of coated samples exposed for 240 h to salt spray fog.
Table 4. Predicted optimum spray parameters (the numbers in brackets show the corresponding parameter level).
Table 5. Predicted and measured results obtained from validation and standard samples.
11,730.6
2012-02-24T00:00:00.000
[ "Engineering", "Materials Science" ]
Challenges of Superdense Coding with Accelerated Fermions

Two particles, even when far from each other, exhibit quantum correlations as a result of the entanglement between them. Therefore, information can be shared by entangled particles located in separate places. Superdense coding is one of the quantum protocols that rely on entanglement. In this paper, we review superdense coding with a non-inertial observer beyond the single mode approximation and investigate the probability of success for superdense coding. We analyze the mutual information to capture the effects of acceleration on the quantum and classical correlations of the state. The behavior of entanglement is studied using an entanglement measure, the so-called concurrence. Comparing the mutual information and the concurrence with the success probability of superdense coding, it is shown that these quantities behave differently, particularly when the beyond single mode approximation plays a powerful role.

Introduction

Entanglement is of central importance in quantum information theory. Our current understanding of the universe reveals that it is best described by relativistic physics, and so many implementations of quantum information tasks require a relativistic treatment. In this sense, relativistic effects in quantum information have been explored in a vast domain of research. It has been shown that the entanglement between spinor modes in the single mode approximation is non-zero even in the limit of infinite acceleration [1]. Recently, the behavior of entanglement and its applications in quantum information processing have been investigated for non-inertial observers in the single mode approximation [2,3]. In order to understand how entanglement plays a role in the presence of a Rindler horizon, the results have been extended beyond the single mode approximation [4,5,6]. Superdense coding by a non-inertial particle has been studied beyond the single mode approximation. The main purpose of this research is to study how the quantum and classical correlations, and particularly the entanglement of the state, can be useful for superdense coding by a non-inertial observer beyond the single mode approximation. We compare the probability of success in superdense coding with the quantum and classical correlations, the mutual information, and a common entanglement measure, the concurrence [7,8]. It is shown that these quantities have different behaviours beyond the single mode approximation.

Superdense Coding

The superdense coding process begins with a pair of entangled two-level particles shared between Alice, as sender, and Bob, as receiver [9,10]. An EPR pair in the two-dimensional Hilbert spaces of the two particles, i.e. a Bell state as a maximally entangled state, is used [11]. Alice and Bob share the Bell state

$|\beta_{00}\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_A|0\rangle_B + |1\rangle_A|1\rangle_B\right), \qquad (1)$

where the subscripts A and B denote Alice's qubit and Bob's qubit, respectively. Alice wants to send a two-bit message, 00, 01, 10, or 11, to Bob. She applies one of the four unitary operators $\{I, \sigma_x, \sigma_z, i\sigma_y\}$ to her qubit. By this operation the initial Bell state, Eq. (1), transforms into one of the four orthonormal Bell states. Then she transmits her manipulated qubit to Bob. Bob performs a measurement in the Bell basis, yielding one of four distinct results. Therefore, based on the outcome, the initial two-bit message is distinguishable.
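As a minimal numerical illustration of the encoding step (assuming the standard operator set named above), the following sketch verifies that the four encoded states are mutually orthogonal and hence distinguishable by a Bell-basis measurement:

```python
# A minimal sketch: applying I, sigma_x, sigma_z and i*sigma_y to Alice's half
# of |beta_00> yields the four orthonormal Bell states, so Bob's Bell-basis
# measurement recovers the two classical bits.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

bell_00 = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

encodings = {"00": I2, "01": sx, "10": sz, "11": 1j * sx @ sz}
states = {bits: np.kron(U, I2) @ bell_00 for bits, U in encodings.items()}

# The four encoded states are mutually orthogonal, hence distinguishable
for b1, s1 in states.items():
    for b2, s2 in states.items():
        overlap = abs(np.vdot(s1, s2))
        assert np.isclose(overlap, 1.0 if b1 == b2 else 0.0)
print("four orthonormal Bell states -> two classical bits per qubit sent")
```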
Indeed, under the superdense coding process, a classical two-bit message is encoded in one of the four Bell states by the sender, and the receiver decodes this quantum state by a suitable measurement and recovers the original information.

Superdense Coding in a Non-Inertial Frame

Alice and Bob, as two inertial observers, start the process by sharing a maximally entangled two-qubit state, an EPR pair, such as Eq. (1). Consider that Alice remains at rest and Bob starts to accelerate uniformly; he is now named Rob. As seen in Figure 1, Rob's trajectory in Minkowski coordinates is the hyperbola

$t = \frac{c}{a}\sinh\frac{a\tau}{c}, \qquad z = \frac{c^{2}}{a}\cosh\frac{a\tau}{c},$   (2)

where τ is Rob's proper time and a is his proper acceleration (an arbitrary reference acceleration is used to define the Rindler coordinates). The horizons h± are the lines at 45 degrees, representing proper times τ = ±∞. The right and left halves of the Minkowski plane are the Rindler wedges I and II, in which Rob and the fictitious observer, anti-Rob, are respectively constrained to move. No information can propagate between these regions because they are causally disconnected. The Minkowski vacuum and one-particle modes, from Rob's point of view, are expanded in terms of the corresponding Rindler vacuum and one-particle modes in regions I and II [1,4] as follows:

$|0\rangle_M = \cos r\,|0\rangle_I |0\rangle_{II} + \sin r\,|1\rangle_I |1\rangle_{II},$   (3)

$|1\rangle_M = q_R\,|1\rangle_I |0\rangle_{II} + q_L\,|0\rangle_I |1\rangle_{II},$   (4)

where $r = \tan^{-1}\!\big(\exp(-\pi\Omega)\big)$ is the parameter encoding the acceleration, with $\Omega \equiv \omega/(a/c)$ the ratio of the frequency ω observed by the observers to the naturally occurring frequency a/c of the problem. $q_R$ and $q_L$, which set the presence probabilities of the particle in the right and left Rindler regions, respectively, are complex numbers satisfying $|q_R|^2 + |q_L|^2 = 1$, with $q_R \in [0,1]$. For simplicity, we only consider cases in which $q_R$ and $q_L$ are real. Therefore, the initially shared state, Eq. (1), is rewritten in terms of the left and right Rindler regions as

$|\phi_{00}\rangle = \frac{1}{\sqrt{2}}\left(\cos r\,|000\rangle + \sin r\,|011\rangle + q_R\,|110\rangle + q_L\,|101\rangle\right),$   (5)

where $|ijk\rangle = |i\rangle_A |j\rangle_I |k\rangle_{II}$. Alice applies one of the operators $\{I, \sigma_x, i\sigma_y, \sigma_z\}$ to her qubit. Bob performs a Bell-basis measurement to decode the classical information and obtains the results summarized in Table 1. Table 1 shows that the probability of success for superdense coding is

$P = \frac{1}{4}\left(q_R + \cos r\right)^{2}.$

Thus Bob's measurement on the density matrix, after tracing out region II, depends on the acceleration parameter r and on the presence probability of the particle in the left or right Rindler region. The single-mode approximation is recovered for $q_R = 1$. The probability of success for superdense coding, as a function of the acceleration parameter r and the presence probability $q_R$ of the particle in the right Rindler region, is represented in Figure 2. This function is decreasing with respect to r and increasing with respect to $q_R$.
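As a quick numerical check of the success probability $P = \frac{1}{4}(q_R + \cos r)^2$ given above, the following short snippet (ours, not part of the paper) evaluates P on a grid and reproduces the limits quoted in the text: P = 1 at r = 0, q_R = 1, and P decreasing in r while increasing in q_R.

```python
import numpy as np

def success_probability(r, q_R):
    """Probability of success for superdense coding with an accelerated
    receiver: P = (q_R + cos r)^2 / 4."""
    return 0.25 * (q_R + np.cos(r)) ** 2

r = np.linspace(0.0, np.pi / 4, 5)     # acceleration parameter, r in [0, pi/4]
q_R = np.linspace(0.5, 1.0, 3)         # presence probability in region I
R, Q = np.meshgrid(r, q_R, indexing="ij")
P = success_probability(R, Q)

print(P[0, -1])    # r = 0,    q_R = 1  ->  1.0 (inertial superdense coding)
print(P[-1, -1])   # r = pi/4, q_R = 1  ->  0.25*(1 + 1/sqrt(2))**2 ~ 0.729
```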
We calculate the mutual information to quantify the effects of acceleration on the quantum and classical correlations [7]. The mutual information is given by

$I(\rho_{A,I}) = S(\rho_A) + S(\rho_I) - S(\rho_{A,I}),$

where $S(\rho) = -\mathrm{Tr}\,(\rho \log_2 \rho)$ denotes the von Neumann entropy of ρ, and $\rho_A$ and $\rho_I$ are the reduced density matrices for subsystems A and I, respectively. The mutual information can be calculated analytically; since the corresponding expressions are quite long and not very enlightening, we give the plots of $I(\rho_{A,I})$ as a function of the acceleration r and of the presence probability $q_R$ of the particle in the right Rindler region in Figure 3. When the acceleration is zero, the correlations decrease with respect to the presence probability of the particle in the left Rindler region. In the single-mode approximation, $q_R = 1$, the mutual information decreases with respect to the acceleration: from its maximum value of 2 in the absence of acceleration, it approaches the value of 1 in the limit of infinite acceleration. Thus the classical and quantum correlations become degraded as the acceleration grows. Beyond the single-mode approximation, the situation is more complicated. The mutual information remains a decreasing function of the acceleration as long as the presence probability of the particle in the right Rindler region is larger than in the left one, but it becomes an increasing function when the presence probability in the left Rindler region dominates. It seems that the correlations increase when the presence probability of the particle in the left Rindler region is larger than in the right one. However, in the limit of infinite acceleration, the mutual information approaches the value of 1 for all values of $q_R$.

We employ the concurrence to provide further insight into the entanglement of the state $\rho_{00}$ of Eq. (5) [8]. The concurrence is defined by

$C(\rho) = \max\{0,\ \sqrt{\lambda_1} - \sqrt{\lambda_2} - \sqrt{\lambda_3} - \sqrt{\lambda_4}\},$

where the $\lambda_i \ge \lambda_{i+1}$ are the eigenvalues of the matrix $\rho\,(\sigma_2 \otimes \sigma_2)\,\rho^{*}\,(\sigma_2 \otimes \sigma_2)$, $\sigma_2$ is the second Pauli matrix, and the asterisk denotes complex conjugation. The concurrence for the state of Eq. (5) is plotted in Figure 4 as a function of the acceleration parameter r and the presence probability $q_R$ of the particle in the right Rindler region. In the single-mode approximation, $q_R = 1$, the concurrence is a decreasing function of the acceleration r; the entanglement varies from 1 to $1/\sqrt{2}$ over the interval $r \in [0, \pi/4]$. We saw that the mutual information also decreases with respect to the acceleration, but it varies from 2 to 1 over the interval $r \in [0, \pi/4]$. Thus the classical correlations also become degraded with increasing acceleration. Beyond the single-mode approximation, as long as the particle is mostly in the right Rindler region, i.e. it has a small probability of being in the left Rindler region, and for small values of the acceleration, the concurrence behaves similarly to the probability of success with respect to the acceleration and with respect to the presence probability of the particle in the right Rindler region: it is clear that the concurrence decreases with increasing r and with decreasing $q_R$. However, if the particle has a larger probability of being in the left Rindler region and the acceleration is large, the concurrence behaves differently from the probability of success: the concurrence becomes an increasing function of r and a decreasing function of $q_R$.

The analysis of the three functions above in the limit of infinite acceleration shows that, with decreasing $q_R$, the probability of success is always a decreasing function, while the mutual information is constant with the value of 1, and the concurrence first decreases and then increases. Although, in this limit, the entanglement with respect to $q_R$ first shows a decreasing and then an increasing behaviour, the total of the quantum and classical correlations shows no variation. However, the probability of success always remains a decreasing function of $q_R$. Therefore, these correlations are not invariably suitable indicators for superdense coding.

Results and Discussion

In the present work, we investigated superdense coding by an accelerated observer beyond the single-mode approximation. We observed that the probability of success depends on the acceleration of the reference frames. In the single-mode approximation, $q_R = 1$, and for r = 0, the probability of success is maximal, i.e. P = 1, which recovers the original superdense coding [9].
In this situation the mutual information and the concurrence are also maximal, i.e., I = 2 and C = 1. With increasing acceleration, the probability of success, the mutual information and the concurrence are all decreasing functions of the acceleration, as expected. Beyond the single-mode approximation, the probability of success always decreases with increasing acceleration and also with decreasing presence probability of the particle in the right Rindler region. The mutual information and the concurrence behave in the same way for small values of the acceleration and large values of the presence probability of the particle in the right Rindler region. Nevertheless, when the particle has a larger probability of being in the left Rindler region and the acceleration is large, the mutual information and the concurrence behave differently from the probability of success with respect to the acceleration and with respect to the presence probability of the particle in the right Rindler region. Therefore, the mutual information and the concurrence are not reliable measures in these ranges of the acceleration and of the presence probability of the particle in the right Rindler region. In fact, the concurrence may not serve as a suitable entanglement measure for superdense coding, at least for these particular ranges of the acceleration and the presence probability of the particle in the right Rindler region. Recently, it has been shown that the relativistic effects entering the description of the dynamics, such as frame dependence, time dilation, and Doppler shift, already present in inertial motion, can compete with or even overwhelm the effect of uniform acceleration on a quantum field [12]. In fact, the relativistic effects (frame dependence, time dilation, and Doppler shift), the environmental influences (quantum decoherence, entanglement dynamics) and other issues discussed in [12] could be taken into account in superdense coding in a fully relativistic system, in the interaction region of localized objects and quantum fields. Thus, the results of this paper could be checked when estimating the efficiency of superdense coding in relativistic quantum systems under environmental influences. These issues would be interesting but lie beyond the scope of the present work. Also, the transmission of both classical and quantum information between two arbitrary observers has been investigated in globally hyperbolic spacetimes using a quantum field as a communication channel [13]. In this situation, a study of superdense coding is desirable. We will return to the issue raised by this paper with a detailed discussion of such a situation, and address the extent to which superdense coding can be affected by a globally hyperbolic spacetime.
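To complement the discussion of the concurrence above, here is a minimal numpy sketch (our addition, not from the paper) of the Wootters concurrence for a generic two-qubit density matrix; the Bell state of Eq. (1) is used only as a sanity check, not as the paper's traced-out state ρ₀₀.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
SYSY = np.kron(SY, SY)

def concurrence(rho):
    """Wootters concurrence C = max(0, sqrt(l1)-sqrt(l2)-sqrt(l3)-sqrt(l4)),
    with l_i the decreasing eigenvalues of rho (s2 x s2) rho* (s2 x s2)."""
    rho_tilde = SYSY @ rho.conj() @ SYSY
    lam = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(lam.real)))[::-1]   # clip tiny negatives
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Sanity check on the inertial Bell state of Eq. (1): C should be 1.
phi00 = np.zeros(4)
phi00[0] = phi00[3] = 1 / np.sqrt(2)
rho_bell = np.outer(phi00, phi00)
print(concurrence(rho_bell))   # -> 1.0
```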
2,986.2
2017-01-01T00:00:00.000
[ "Physics", "Computer Science" ]
Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks

Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks (Sabour et al., 2017): each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline model on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 languages (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled with at most one argument. Whereas enforcing this approximate constraint is not useful with a modern SRL system, our iterative procedure corrects mistakes by capturing this intuition in a flexible and context-sensitive way.

Introduction

The task of semantic role labeling (SRL) involves the prediction of predicate-argument structure, i.e., both the identification of arguments and their labeling with underlying semantic roles. Shallow semantic structures have been shown to be beneficial in many natural language processing (NLP) applications, including information extraction (Christensen et al., 2011), question answering (Eckert and Neves, 2018) and machine translation (Marcheggiani et al., 2018).¹ In this work, we focus on the dependency version of the SRL task (Surdeanu et al., 2008). An example of a dependency semantic-role structure is shown in Figure 1. Edges in the graph are marked with semantic roles, whereas predicates (typically verbs or nominalized verbs) are annotated with their senses. Intuitively, there are many restrictions on potential predicate-argument structures. Consider the 'role uniqueness' constraint: each role is typically, but not always, realized at most once. For example, predicates have at most one agent. Similarly, depending on a verb class, only certain subcategorization patterns are licensed. Nevertheless, rather than modeling the interaction between argument labeling decisions, state-of-the-art semantic role labeling models (Li et al., 2019b) rely on powerful sentence encoders (e.g., multi-layer Bi-LSTMs (Zhou and Xu, 2015; Qian et al., 2017)). This contrasts with earlier work on SRL, which hinged on global declarative constraints on the labeling decisions (FitzGerald et al., 2015; Das et al., 2012). Modern SRL systems are much more accurate, and hence enforcing such hard and often approximate constraints is unlikely to be as beneficial (see our experiments in Section 7.2). Instead of using hard constraints, we propose a simple iterative structure-refinement procedure. It starts with independent predictions and refines them in every subsequent iteration. When refining a role prediction for a given candidate argument, information about the assignment of roles to other arguments gets integrated.

¹ Code: https://github.com/DalstonChen/CapNetSRL.
Our intuition is that modeling interactions through the output space, rather than hoping that the encoder somehow learns to capture them, provides a useful inductive bias to the model. In other words, capturing this interaction explicitly should mitigate overfitting. This may be especially useful in a lower-resource setting but, as we will see in the experiments, it appears beneficial even in a high-resource setting. We think of semantic role labeling as extracting propositions, i.e. predicates and argument-role tuples. In our example, the proposition is sign.01(Arg0:signer = Hiti, Arg1:document = contract). Across the iterations, we maintain and refine not only predictions about propositions but also their embeddings. Each 'slot' (e.g., the one corresponding to the role Arg0:signer) is represented with a vector, encoding information about the arguments assumed to be filling this slot. For example, if Hiti is predicted to fill slot Arg0 in the first iteration, then the slot representation will be computed based on the contextualized representation of that word and hence reflect this information. The combination of all slot embeddings (one per semantic role) constitutes the embedding of the proposition. The proposition embedding, along with the prediction about the semantic role structure, gets refined in every iteration of the algorithm. Note that, in practice, the predictions in every iteration are soft, and hence proposition embeddings will encode current beliefs about the predicate-argument structure. The distributed representation of propositions provides an alternative ("dense-embedding") view of the currently considered semantic structure, i.e. the information extracted from the sentence. Intuitively, this representation can be readily tested to see how well current predictions satisfy selection restrictions (e.g., contract is a very natural filler for Arg1:document) and to check whether the arguments are compatible. To get an intuition for how the refinement mechanism may work, imagine that both Hiti and injury are labeled as Arg0 for sign.01 in the first iteration. Hence, the representation of the slot Arg0 will encode information about both predicted arguments. At the second iteration, the word injury will be aware that there is a much better candidate for filling the signer role, and the probability of assigning injury to this role will drop. As we will see in our experiments, whereas enforcing the hard uniqueness constraint is not useful, our iterative procedure corrects the mistakes of the base model by capturing interrelations between arguments in a flexible and context-sensitive way. In order to operationalize the above idea, we take inspiration from Capsule Networks (CNs) (Sabour et al., 2017). Note that we are not simply replacing Bi-LSTMs with generic CNs. Instead, we use CNs to encode the structure of the refinement procedure sketched above. Each slot embedding is now a capsule, and each proposition embedding is a tuple of capsules, one capsule per role. In every iteration (i.e. CN network layer), the capsules interact with each other and with representations of words in the sentence. We experiment with our model on standard benchmarks for 7 languages from CoNLL-2009. Compared with the non-refinement baseline model, we observe substantial improvements from using the iterative procedure. The model achieves state-of-the-art performance in 5 languages, including English.
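To make the notion of a proposition concrete, the following small sketch (ours, not from the paper or its released code) represents the running example sign.01(Arg0: Hiti, Arg1: contract) as a predicate sense plus a role-to-argument mapping; the class and field names are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    """A predicate sense together with its typed arguments."""
    predicate: str                                  # surface form of the predicate
    sense: str                                      # e.g. a PropBank sense label
    arguments: dict = field(default_factory=dict)   # role -> filler word

prop = Proposition(
    predicate="signed",
    sense="sign.01",
    arguments={"Arg0": "Hiti", "Arg1": "contract"},
)
print(prop)
```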
Base Dependency SRL Model

In dependency SRL, for each predicate p of a given sentence $x = \{x_1, x_2, \cdots, x_n\}$ with n words, the model needs to predict roles $y = \{y_1, y_2, \cdots, y_n\}$ for every word. The role can be none, signifying that the corresponding word is not an argument of the predicate. We start by describing the factorized baseline model, which is similar to that of previous work. It consists of three components: (1) an embedding layer; (2) an encoding layer; and (3) an inference layer.

Embedding Layer

The first step is to map the symbolic sentence x and predicate p into a continuous embedding space, yielding word embeddings $e_i \in \mathbb{R}^{d_e}$ and a predicate embedding $p \in \mathbb{R}^{d_e}$.

Encoding Layer

The encoding layer extracts features from the input sentence x and predicate p. We extract features from the input sentence x using stacked bidirectional LSTMs (Hochreiter and Schmidhuber, 1997), producing contextualized representations $x_i \in \mathbb{R}^{d_l}$. Then we compute the logit of each role for each word by a bilinear operation with the target predicate:

$b_{j|i} = x_i^{\top} W_j\, p,$   (4)

where $W_j \in \mathbb{R}^{d_l \times d_e}$ is a trainable parameter and $b_{j|i} \in \mathbb{R}$ is a scalar representing the logit of role j for word $x_i$.

Inference Layer

The probability P(y|x, p) is then computed as

$P(y|x,p) = \prod_{i=1}^{n} P(y_i|x,i,p), \qquad P(y_i|x,i,p) = \mathrm{softmax}(b_{\cdot|i})_{y_i},$   (5)

where $b_{\cdot|i} = \{b_{1|i}, b_{2|i}, \cdots, b_{|T||i}\}$ and |T| denotes the number of role types.

Dependency-based Semantic Role Labeling using Capsule Networks

Inspired by capsule networks, we use a capsule structure for each role state to maintain information across iterations, and employ the dynamic routing mechanism to derive the role logits $b_{j|i}$ iteratively. Figure 2 illustrates the architecture of the proposed model.

Capsule Structure

We start by introducing two capsule layers: (1) the word capsule layer and (2) the role capsule layer.

Word Capsule Layer

The word capsule layer is comprised of capsules representing the roles of each word. Given the sentence representation $x_i$ and the predicate embedding p, the word capsule layer is derived as

$u^{(k)}_{j|i} = x_i^{\top} W^{(k)}_j\, p,$   (7)

where $W^{(k)}_j \in \mathbb{R}^{d_l \times d_e}$ and $u_{j|i} \in \mathbb{R}^{K}$ is the capsule vector for role j of word $x_i$; K denotes the capsule size. Intuitively, the capsule encodes the argument-specific information relevant to deciding whether role j is suitable for the word. These capsules do not get iteratively updated.

Role Capsule Layer

As discussed in the introduction, the role capsule layer can be viewed as an embedding of a proposition. The capsule network generates the capsules in this layer using "routing-by-agreement". This process can be regarded as a pooling operation. Capsules in the role capsule layer at the t-th iteration are derived as

$v^{(t)}_j = \mathrm{Squash}\big(s^{(t)}_j\big),$   (8)

where $s^{(t)}_j$ is generated as a linear combination of the capsules in the word capsule layer with weights $c^{(t)}_{ij}$:

$s^{(t)}_j = \sum_i c^{(t)}_{ij}\, u_{j|i}.$   (9)

Here, the $c_{ij}$ are coupling coefficients, calculated by a softmax over the role logits $b_{j|i}$, and can be interpreted as the probability that word $x_i$ is assigned role j:

$c^{(t)}_{ij} = \frac{\exp\big(b^{(t)}_{j|i}\big)}{\sum_{j'} \exp\big(b^{(t)}_{j'|i}\big)}.$   (10)

The role logits $b^{(t)}_{j|i}$ are decided by the iterative dynamic routing process. The Squash operation deactivates capsules receiving small input $s^{(t)}_j$ (i.e. roles not predicted in the sentence) by pushing them further toward the 0 vector.

Dynamic Routing

The dynamic routing process involves T iterations. The role logits $b_{j|i}$ before the first iteration are all set to zero: $b^{(0)}_{j|i} = 0$. Then, the dynamic routing process updates the role logits $b_{j|i}$ by modeling the agreement between the capsules in the two layers:

$b^{(t+1)}_{j|i} = b^{(t)}_{j|i} + u_{j|i}^{\top}\, W\, v^{(t)}_j,$   (11)

where $W \in \mathbb{R}^{K \times K}$. The whole dynamic routing process is shown in Algorithm 1 (where $l_w$ and $l_r$ denote the word capsule layer and role capsule layer, respectively).
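Since Algorithm 1 itself is not reproduced above, the following numpy sketch is our reading of the routing loop in Eqs (8)-(11); the squash function is taken from Sabour et al. (2017), and the shapes and random inputs are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

def squash(s):
    """Squash of Sabour et al. (2017): shrinks short vectors toward 0."""
    norm2 = np.sum(s * s, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_routing(u, W, T=2):
    """u: word capsules, shape (n_words, n_roles, K); W: (K, K) agreement map.
    Returns the coupling coefficients c, i.e. per-word role distributions."""
    n, R, K = u.shape
    b = np.zeros((n, R))                       # role logits, b(0) = 0
    for _ in range(T):
        c = softmax(b, axis=1)                 # Eq (10): distribution over roles
        s = np.einsum("ij,ijk->jk", c, u)      # Eq (9): weighted sum into role j
        v = squash(s)                          # Eq (8): role capsules
        b = b + np.einsum("ijk,kl,jl->ij", u, W, v)  # Eq (11): agreement update
    return softmax(b, axis=1)

rng = np.random.default_rng(0)
u = rng.normal(size=(6, 4, 16))                # 6 words, 4 roles, capsule size 16
c = dynamic_routing(u, rng.normal(size=(16, 16)) * 0.1)
print(c.shape, c.sum(axis=1))                  # (6, 4); each row sums to 1
```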
The dynamic routing process can be regarded as a role refinement procedure (see Section 5 for details).

Incorporating Global Information

When computing the j-th role capsule representation (Eq (9)), the information originating from the i-th word (i.e. $u_{j|i}$) is weighted by the probability of assigning role j to word $x_i$ (i.e. $c_{ij}$). In other words, the role capsule receives messages only from words assigned to its role. This implies that the only interaction the capsule network can model is competition between arguments for a role.² Note, though, that this is different from imposing the hard role-uniqueness constraint, as the network does this dynamically in a specific context. Still, this is a strong restriction. In order to make the model more expressive, we further introduce a global node $g^{(t)}$ to incorporate global information about all arguments at the current iteration. The global node is a compressed representation of the entire proposition and is used in the prediction of all arguments, thus permitting arbitrary interaction across arguments. The global node $g^{(t)}$ at the t-th iteration is derived from $[v^{(t)}_1; \cdots; v^{(t)}_{|T|}] \in \mathbb{R}^{K\cdot|T|}$, the concatenation of all capsules in the role capsule layer. We append an additional term to the role logits update of Eq (11):

$b^{(t+1)}_{j|i} = b^{(t)}_{j|i} + u_{j|i}^{\top}\, W\, v^{(t)}_j + u_{j|i}^{\top}\, W_g\, g^{(t)},$   (13)

where $W \in \mathbb{R}^{K \times K}$ and $W_g \in \mathbb{R}^{K \times K}$.

² In principle, it can model the opposite, i.e. collaboration/attraction, but this is unlikely to be useful in SRL.

Refinement

The dynamic routing process can be seen as iterative role refinement. Concretely, the coupling coefficients $c^{(t)}_i$ in Eq (10) can be interpreted as the predicted distribution over the semantic roles for word $x_i$ of Eq (5) at the t-th iteration:

$P^{(t)}(y_i|x,i,p) = c^{(t)}_i.$   (14)

Since dynamic routing is an iterative process, the semantic role distribution $c^{(t)}_i$ at iteration t affects the semantic role distribution $c^{(t+1)}_i$ at the next iteration t + 1:

$c^{(t+1)}_i = f\big(c^{(t)}_i\big),$   (15)

where f(·) denotes the refinement function defined by the operations in each dynamic routing iteration.

Training

We minimize the following loss function L(θ):

$L(\theta) = -\sum_{i=1}^{n} \log P^{(T)}(y_i \mid x, i, p;\, \theta) + \lambda\,\|\theta\|^{2},$   (16)

where λ is a hyper-parameter for the regularization term and $P^{(T)}(y_i|x,i,p;\theta) = c^{(T)}_i$. Unlike standard refinement methods (Belanger et al., 2017), which sum losses across all refinement iterations, our loss is based only on the prediction made at the last iteration. This encourages the model to rely on the refinement process rather than doing it in one shot. Our baseline model is trained analogously, but using the cross-entropy for the independent classifiers (Eq (5)).
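A minimal sketch of this last-iteration training objective follows (our reading of Eq (16); the squared-L2 form of the regularizer is our assumption, and the default λ matches the value reported in the experiments below). Only the distributions from the final routing iteration are penalized.

```python
import numpy as np

def last_iteration_loss(c_T, gold_roles, params, lam=4e-4):
    """c_T: (n_words, n_roles) role distributions from the final routing
    iteration; gold_roles: (n_words,) integer role ids; params: list of
    weight arrays. Only the last iteration is penalized, encouraging the
    model to rely on the refinement process."""
    nll = -np.log(c_T[np.arange(len(gold_roles)), gold_roles] + 1e-12).sum()
    l2 = lam * sum(np.sum(p * p) for p in params)   # regularization term
    return nll + l2

c_T = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(last_iteration_loss(c_T, np.array([0, 1]), [np.ones((2, 2))]))
```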
Uniqueness Constraint Assumption

As we discussed earlier, for a given target predicate, each semantic role will typically appear at most once. To encode this intuition, we propose another loss term $L_u(\theta)$, computed from $b^{(T)}_{j|\cdot}$, the semantic role logits at the T-th iteration. Thus, the final loss $L^{*}(\theta)$ is the combination of the two losses:

$L^{*}(\theta) = L(\theta) + \eta\, L_u(\theta),$   (18)

where η is a discount coefficient.

The same hyper-parameters are used for all other languages on both the baseline model and the proposed CapsuleNet. The LSTM state dimension $d_l$ is 500. The capsule size K is 16. The batch size is 32. The coefficient for the regularization term λ is 0.0004. We employ Adam (Kingma and Ba, 2015) as the optimizer, and the initial learning rate α is set to 0.0001. Syntactic information is not utilized in our model.

Table 1 shows the performance of our model trained with loss $L^{*}(\theta)$ for different values of the discount coefficient η on the English development set. The model achieves the best performance when η equals 0, implying that adding the uniqueness constraint to the loss actually hurts performance. Thus, we use the loss $L^{*}(\theta)$ with η equal to 0 in the rest of the experiments, which is equivalent to the loss L(θ) in Eq (16). We also observe that the model with 2 refinement iterations performs best (89.92% F1).

Table 2: Comparison with previous state-of-the-art systems, including Lei et al. (2015), Foland and Martin (2015), Roth and Lapata (2016), Swayamdipta et al. (2016) and Mulcaire et al. (2018): F1 on the English in-domain and out-of-domain test sets.

Overall Results

Table 2 compares our model with previous state-of-the-art SRL systems on English. Some of the systems (Lei et al., 2015) only use local features, whereas others (Swayamdipta et al., 2016) incorporate global information at the expense of greedy decoding. Additionally, a number of systems exploit syntactic information (Roth and Lapata, 2016). Some of the results are obtained with ensemble systems (FitzGerald et al., 2015; Roth and Lapata, 2016). As we observe, the baseline model (see Section 2) is quite strong, and outperforms the previous state-of-the-art systems on both the in-domain and out-of-domain sets for English. The proposed CapsuleNet outperforms the baseline model on English (e.g. obtaining 91.06% F1 on the English test set), which shows the effectiveness of the capsule network framework. The improvement on the out-of-domain set implies that our model is robust to domain variations.

Ablation

Table 3 gives the performance of models with some key components ablated, which shows the contribution of each component of our model.

• The model without the global node is described in Section 3.

• The model that further removes the role capsule layer takes the mean of the capsules $u_{j|i}$ in Eq (7) of the word capsule layer as the semantic role logits $b_{j|i}$:

$b_{j|i} = \frac{1}{K}\sum_{k=1}^{K} u^{(k)}_{j|i},$   (19)

where K denotes the capsule size.

• The model that additionally removes the word capsule layer is exactly equivalent to the baseline model described in Section 2.

As we observe, CapsuleNet with all components performs best on both the development and test sets for English. The model without the global node performs well too: it obtains 91.05% F1 on the English test set, almost the same performance as the full CapsuleNet. But on the English out-of-domain set, without the global node, the performance drops from 82.72% F1 to 82.36% F1, implying that the global node helps model generalization. Further, once the role capsule layer is removed, the performance drops sharply. Note that the model without the role capsule layer does not use refinements and hence does not model argument interaction. It takes the mean of the capsules $u_{j|i}$ in the word capsule layer as the semantic role logits $b_{j|i}$ (see Eq (19)), and hence could be viewed as an enhanced ('ensembled') version of the baseline model. Note that we only introduced a very limited number of parameters for the dynamic routing mechanism (see Eqs (11)-(13)). This suggests that the dynamic routing mechanism genuinely captures interactions between argument labeling decisions and that the performance benefits from the refinement process.

Error Analysis

The performance of our model while varying the sentence length and the number of arguments per proposition is shown in Figure 3. The statistics of the subsets are given in Table 4. Our model consistently outperforms the baseline model, except on sentences of between 50 and 59 words. Note that this subset is small (only 391 sentences), so the effect may be random. The model with the global node loses precision as the number of iterations grows. This model is less transparent, so it is harder to see why it chooses this refinement strategy.
As expected, the F1 scores for both models peak at the second iteration, the iteration number used in the training phase. The trend for the exact match score is consistent with the F1 score.

Duplicate Arguments

We measure the degree to which the role uniqueness constraint is violated by both models. We plot the number of violations as a function of the number of iterations (Figure 5). Recall that the violations do not imply errors in predictions, as even the gold-standard data does not always satisfy the constraint (see the orange line in the figure). As expected, CapsuleNet without the global node captures competition and focuses on enforcing this constraint. Interestingly, it converges to keeping the same fraction of violations as observed in the gold-standard data. In contrast, the full model increasingly ignores the constraint in later iterations. This is consistent with the overgeneration trend evident from the precision and recall plots discussed above.

Figure 6 illustrates how many roles get changed between consecutive iterations of CapsuleNet, with and without the global node. Green indicates how many correct changes have been made, while red shows how many errors have been introduced. Since the number of changes is very large, we represent a non-zero number q in log space: $\tilde{q} = \mathrm{sign}(q)\log(|q|)$.

Figure 6: Changes in labeling between consecutive iterations on the English development set, for (a) the Capsule Network w/o the global node and (b) the Capsule Network with the global node. Only argument types appearing more than 50 times are listed; the "None" type denotes "NOT an argument". Green and red nodes denote the numbers of correct and wrong role transitions, respectively; the numbers are transformed into log space.

As we expected, for both models the majority of changes are correct, leading to better overall performance. We can again see the same trends: CapsuleNet without the global node tends to filter out arguments by changing them to "None", while the reverse is true for the full model.

Table 5 gives the results of the proposed CapsuleNet SRL (with the global node) on the in-domain test sets of all languages from CoNLL-2009. (Previous best results in Table 5 are from Roth and Lapata (2016); Czech (Cz) is from Henderson et al. (2013); Chinese (Zh) is from , and English (En) is from Li et al. (2019b).) As shown in Table 5, the proposed model consistently outperforms the non-refinement baseline model and achieves state-of-the-art performance on Catalan (Ca), Czech (Cz), English (En), Japanese (Jp) and Spanish (Es). Interestingly, the effectiveness of the refinement method does not seem to depend on the dataset size: the improvements on the smallest (Japanese) and the largest (English) datasets are among the largest.

Additional Related Work

Capsule networks have recently been applied to a number of NLP tasks (Xiao et al., 2018; Gong et al., 2018). In particular, Yang et al. (2018) represent text classification labels by a layer of capsules and take the capsule activations as the classification probability. Using a similar method, Xia et al. (2018) perform intent detection with capsule networks. Li et al. (2019a) use capsule networks to capture rich features for machine translation. Closer to our work, Zhang et al. (2019) adopt capsule networks for relation extraction. These previous models apply capsule networks to problems that have a fixed number of components in the output; their approach cannot be directly applied to our setting.
Conclusions & Future Work

State-of-the-art dependency SRL methods do not account for interactions between role labeling decisions. We propose an iterative approach to SRL. In each iteration, we refine both the predictions of the semantic structure (i.e. a discrete structure) and a distributed representation of the proposition (i.e. the predicate and its predicted arguments). The iterative refinement process lets the model capture interactions between the decisions. We rely on capsule networks to operationalize this intuition. We demonstrate that our model is effective, resulting in improvements over a strong factorized baseline and state-of-the-art results on standard benchmarks for 5 languages (Catalan, Czech, English, Japanese and Spanish) from CoNLL-2009. In future work, we would like to extend the approach to modeling the interaction between multiple predicate-argument structures in a sentence, as well as to other semantic formalisms (e.g., abstract meaning representations (Banarescu et al., 2013)).
4,965.8
2019-10-07T00:00:00.000
[ "Computer Science" ]
Decentralized green energy transition promotes peace

Department of Economics, Université de Lausanne, Lausanne, Switzerland; Laboratory of Cryospheric Sciences, Swiss Federal Institute of Technology Lausanne, Lausanne, Switzerland; WSL Institute for Snow and Avalanche Research, Davos, Switzerland; Center for Climate Impact and Action, Swiss Federal Institute of Technology Lausanne, University of Lausanne, Lausanne, Switzerland; Institute of Geosciences and Sustainability, Université de Lausanne, Lausanne, Switzerland; Renewable energy systems group, University of Geneva, Geneva, Switzerland

Introduction

Two of the biggest challenges faced by humanity, climate change and major episodes of political violence, have one thing in common: they are fuelled by the oil and gas on which the world has become increasingly dependent over the last century. The upside of this dilemma is that if the green energy transition succeeds, it can "kill two birds with one stone", as there is an environmental and geopolitical double dividend to avoiding fossil fuels. The tragedy in Ukraine currently unfolding in front of our eyes has increased the political urgency of reducing fossil-fuel dependency. Fossil fuel prices that have been relatively high in recent months compared to recent years, especially for fuel types particularly affected by the war, such as natural gas in Europe (Federal Reserve Bank of St. Louis, 2020; Garicano et al., 2022), together with ever-cheaper renewable energy technologies, provide powerful economic incentives to finally invest in green energy. Lessons learnt from the COVID-19 pandemic can inform incentives for behaviour change too. Put differently, the current moment is a unique window of opportunity to engage in a radical transition towards green energy. In what follows, this piece will first highlight the mechanisms through which fossil fuels threaten sustainability and peace, and subsequently outline in detail how the green energy transition can concretely be achieved, stressing both the key factors of reducing energy demand and boosting green energy supply. Several promising green energy policies can be implemented at a local, decentralized scale, helping to avoid the fatal concentration of resource rents and political power that has led to oil and gas hollowing out democracy, fuelling corruption and triggering civil and interstate wars.

Fossil-fuelled climate change

Rapid climate change is a generally accepted reality, and it is unequivocal that it is a direct consequence of human-led fossil fuel burning and poor land management. The atmosphere, ocean and land have already warmed an average of 1.1 °C compared to preindustrial times. Today, greenhouse gases (GHGs) from coal, gas and oil continue to accumulate and increase global average temperatures at an alarming rate. What does it mean? Human-induced climate change is already affecting many weather and climate extremes in every region across the globe. Sea levels are rising, and heatwaves, floods, droughts, and tropical cyclones are expected to increase in severity and frequency as temperatures keep rising. Is it too late? It will never be "too late" to avoid some level of impacts and destruction by reducing emissions to zero, but right now we can still act to avoid the vast bulk of impacts by limiting the global average temperature increase to 1.5 °C. To do so, we need to halve our GHG emissions by 2030 and reach global net zero emissions by 2050. How can we do it?
The first and most obvious step to reach this goal is to commit, starting today, to stop extracting, trading, transforming and using fossil fuels (IPCC et al., 2022).

Bad politics

Decades of research in political science and economics have assembled a long list of detrimental effects of fossil fuel and mineral extraction. Petrostates are characterised by lower levels of democracy, more corruption, and more economic instability (Ross, 2012). One of the underlying mechanisms at work is that the appeal of grabbing the windfalls of mother nature's riches attracts a type of rent-seeking politician and leads to lower state accountability and a rentier state (as much of the fiscal budget is financed by royalties on natural resources rather than by income taxes, the regime in place has lower incentives to content citizens and to invest in public services, infrastructure, education and sectors beyond the extractive one). The lucrative fossil-fuel rents also represent an attractive "prize" to be appropriated by rebel leaders aiming to get their hands on the precious resources. This has led to a strong statistical relationship between oil or mineral discoveries or price spikes and the risk of civil wars (Ross, 2012), (Watts, 2004), (Heinberg, 2005), (Berman et al., 2017). In many circumstances multinationals also contribute to the institutionalised theft of a country's riches by kleptocrats. In particular, mineral extraction has an especially strong detrimental impact on peace when mines are owned by companies with low corporate social responsibility and when the sector escapes traceability and transparency initiatives (Berman et al., 2017). Further, dictatorships built on petrodollars also have a greater tendency to commit mass atrocities against their own citizens. As shown in the game-theoretic setting and empirical analysis of Esteban et al. (2015), when a cynical dictator draws riches mainly from lucrative oil contracts that do not require much local labour, her/his incentives to physically eliminate opposition groups are larger than when the economy hinges on complex, human-capital-intensive production outside the commodity sector. Finally, there is also strong statistical evidence that interstate wars are fuelled by the "black gold" (Caselli et al., 2015). First of all, petrostates disproportionately give birth to dictatorships, which are more likely to start wars against democracies and autocracies alike. Indeed, as shown by the famous "democratic peace" result, democracies are extremely unlikely to attack other democracies militarily. Put differently, a decisive way in which fossil fuels push our world towards Armageddon is by increasing the share of non-democracies in the international system. Beyond this mechanism, fossil fuels are drivers of interstate wars by providing countries with incentives to try to capture a neighbouring country's resources (Caselli et al., 2015).

More with less: Curbing energy demand

One argument against abandoning (or, more actively, banning) fossil fuels is the outdated assumption that more energy is equivalent to higher wellbeing: in this view, growth in energy use is tautological with human progress. This assumption is widespread, but it does not hold up to empirical scrutiny. Several important facts have emerged in recent research. First, at any given point in time, the international energy use per capita associated with human development exhibits saturation behaviour.
Beyond that point, diminishing or no returns are observed (Martinez and Ebenhack, 2008), (Steinberger and Timmons Roberts, 2010). Second, the level of international energy use per capita associated with high levels of human development has been decreasing drastically over time (Steinberger and Timmons Roberts, 2010). Third, growth in primary energy use can statistically account for only one-quarter of the improvement in life expectancy observed internationally since the 1970s (Steinberger et al., 2020). In contrast, residential electricity can account for almost two-thirds (Steinberger et al., 2020). This means that it is not the quantity of energy which matters so much as its quality and the purpose of its use. This is especially important for fossil fuels, since up to two-thirds of fossil energy is lost between extraction and use (in transport and electricity generation especially). Even in space heating, where conversion could be expected to be more efficient, technological alternatives such as heat pumps exist which are energy positive (they supply more than they consume thanks to the use of temperature differentials in the environment). For the vast majority of uses, electrification and renewable supply would be far, far more efficient. Fourth, even within developing countries, households with low energy footprints and high levels of wellbeing can be observed. Wellbeing for these households depends far more on access to clean and modern energy vectors (especially electricity) and proximity to public services and infrastructure (markets, transport, health, etc.) than on total energy use (Baltruszewicz et al., 2021). Fifth, the importance of the socio-economic context of energy provision and use can be observed at the international level. Several factors have been identified as beneficial to achieving high levels of human need satisfaction at lower energy use: high-quality public services and infrastructure, democratic governance, electricity access, and economic equality (Vogel et al., 2021). At the same time, extractivism (the dependence of an economy on resource extraction, such as fossil fuels) is identified as a highly negative factor in achieving human needs at lower energy use (Vogel et al., 2021). Sixth, new research directions include modelling based on "decent living energy" (a concept pioneered by Narasimha Rao of Yale), with several regional and two global models indicating that universal decent living standards (with no under- or overconsumption) could be achieved at an annual final energy demand level less than half of what we currently use, despite forecasted population growth (Kikstra et al., 2021). Moreover, the infrastructure build-out required for enabling low-energy decent living standards would be equivalent, globally, to less than 1 year of current energy use. Such models are clearly idealized, but at the same time the lack of investment in mass deployment of demand-oriented solutions means that we are still in the infancy of many areas of technological and social learning. It is quite probable that even more could be achieved with even less. Together, these results form part of the reason that the 3rd working group of the IPCC's 6th assessment report concluded that "demand-side measures and new ways of end-use service provision can reduce global GHG emissions in end use sectors by 40%-70% by 2050 compared to baseline scenarios" (IPCC et al., 2022).
The possibility of universal wellbeing while significantly reducing energy use is another reason why it is not only necessary, but beneficial, to fully abandon fossil fuels.

Boosting green energy supply

Climate mitigation for 1.5 °C requires high-quality energy, in particular electricity (Davis et al., 2018). The sustainable and low-impact generation of electricity is therefore a major challenge to be addressed. Renewable resources include wind, solar, geothermal, modern biomass and hydropower plants. Wind energy installations have seen major technological advancement and cost reductions in the last two decades, and this is even more true for solar photovoltaics (PV). The urgency of the required transition makes it mandatory that we deploy existing technologies, which are quite advanced in the field of wind energy generation and solar photovoltaics. While solar energy is well accepted for deployment on existing infrastructure (e.g., rooftops), larger-scale solar and wind installations face acceptance problems in many countries (Cousse, 2021), where aspects of landscape protection are prominent or installations are costly because of high labour costs. Most renewable generation is decentralized in nature, and hence can "kill two birds with one stone": first, local to regional communities profit from installations and therefore have a motivation to move installations forward; second, the concentration of power, control and revenue, which has led to environmental, geopolitical and security problems, is avoided. The political task is therefore to facilitate the build-up of such local to regional units. PV and wind have a particularly high potential in that context, especially if they are combined (Dujardin et al., 2021). Once produced, renewable electricity can be used for other energy services too, through electrification of transport and heating, and the production of renewable fuels for other sectors. Multiple studies have shown that 100% renewable electricity systems are feasible and economically viable, even at global or national scales (Brown et al., 2018). When combined with demand reduction and energy efficiency improvements, fully renewable whole energy systems could be designed in the longer term too (Grubler et al., 2018). Additional bridging technologies that do not use fossil fuels are in principle available (Davis et al., 2018) and need firm policies to ensure market uptake.

Escaping Mad Max: Conclusion

As discussed above, fossil fuels jeopardise not only the environment but also prospects for peace. Thankfully, there are various promising avenues to free humanity from this perilous addiction. As detailed above, these include curbing energy demand by focusing on less, but better-suited and more efficient, energy types, and boosting green energy supply through innovation and incentives for adoption. A first key challenge is political willingness, but the geopolitical implications of the current war in Ukraine create a window of opportunity to finally seriously engage in the green transition. Times of (energy) crises are also times for building back better: if not now, then when? A second critical challenge is that several low-carbon technologies require metals and minerals, such as lithium, cobalt, copper, and rare earths, some of which are concentrated in geopolitically vulnerable countries (Berman et al., 2017).
The transition from fossil fuels to green energy supply should aim to avoid a new geopolitical trap by decentralising and diversifying mining sources and locations and improving traceability schemes (Sovacool et al., 2020). In a nutshell, in any energy transition we believe that the power of control should remain with local society; this is a very important component for the longevity of democracy, which is under pressure in many places now. We find that decentralised renewable energy offers great opportunities in this respect.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

Open access funding by University of Lausanne.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
3,185
2023-03-03T00:00:00.000
[ "Environmental Science", "Economics" ]
Research on Reference Indicators for Sustainable Pavement Maintenance Cost Control through Data Mining

Maintenance management has become increasingly important in the development of highways and government investment, but the shortage of funds is still a serious problem. When the administrative department reviews expenses, the existing evaluation methodology cannot be applied to the current national conditions, and its calculation process is too complicated. Therefore, in order to improve this situation, this paper analyses the various factors affecting maintenance costs and obtains the quantitative relationships between the six main influencing factors: traffic volume, using time, location, the number of lanes, overlays, and major rehabilitation. Based on regression analysis, an accuracy-based and cost-oriented control methodology is proposed, which can be dynamically updated according to market conditions. The method is built on data from 18 typical highways in Guangdong Province, China. The control reference indicators consist of a set of models and confidence intervals, and the actual cost needs to meet the corresponding requirements. In addition, the expenditure characteristics of rehabilitation and reconstruction in China are summarized. Experiments showed that this methodology can be used to guide cost planning and capital allocation in sustainable maintenance; it achieved good results in application, making it worthwhile to promote in other areas.

Introduction

With significant traffic growth and limited resources, highway maintenance has become increasingly complicated and valued. Limited funds are the main problem that legislators, budget planners, and superior managers face today [1,2]. A highway routine maintenance plan is generally established by subordinate or private maintenance companies and then audited by management, so intuitive auditing standards are needed. The highway networks of developed areas are mature and market-oriented. The main research in these areas follows these steps: establishing a maintenance estimation model based on a PMS (Pavement Management System) or LCCA [2,3], optimizing the allocation of maintenance cost rationally, and finally determining a preventive maintenance plan based on a standard, which is usually a model. In fact, most existing PMSs are developed with the purpose of achieving maintenance at minimum cost [4,5]. In the last decade, this has been the true goal of pavement maintenance, a goal that the Federal Highway Administration (FHWA) [6], in partnership with states, industry organizations, and other interested stakeholders, has been committed to achieving [1].

Currently, in the US, the FHWA signs fee contracts with private maintenance companies [7,8]. Increased expenses resulting from uncontrollable factors are borne by the private companies. This method is not conducive to the promotion of new processes and new materials. In other countries, this responsibility is undertaken by the management department. To control fees better, the price list is updated annually by the relevant agencies [9], leading to complexity and diversity due to regional differences, while a large amount of historical maintenance cost data generated during the maintenance process is stored but not utilized properly.
Highway maintenance in China faces the same dilemma. By the end of 2017, the highway mileage had reached 4.77 million kilometers. Maintenance has become the focus of highway development and government investment [10]. In China, it is expected that the demand for maintenance funds will exceed 1.5 trillion CNY during the 13th Five-Year Plan period (2016 to 2020). However, the maintenance management market in China is still immature. There are still problems such as shortage of funds, imperfect mechanisms, irregular management of costs, and a closed management market. A large body of work addressing these problems has been conducted on practical projects over the last twenty years, but most of it determines maintenance through the Pavement Index (PI) and the Pavement Condition Index (PCI) [11] rather than through data mining of historical maintenance records. Models based on these studies are still being adopted by the Chinese government (the British asphalt pavement management system around the 1990s, the model established by Tongji University in the 1990s, the World Bank HDM-III model, and the model established by Chang'an University [11]). Additionally, predictive analysis methods for data mining can usually be divided into two categories: time-series dependency methods and indicator-based causal relationship methods, mainly including regression analysis, the moving average method, the exponential smoothing method, periodic variation analysis, and random variation analysis [12]. However, these models and data mining methods lack timeliness, generalization ability, sample capacity [13], accuracy, and research on intermediate maintenance and major rehabilitation. At the same time, the process of data acquisition and modelling is complex [14] because, in engineering practice, many factors affect long-term pavement preservation as well as maintenance cost, such as traffic diversification, regional differences, pavement types, and inflation and discount rate fluctuations. These factors inevitably generate potential sources of uncertainty. The factors are not independent of each other, and their combined effects need to be investigated critically [10]. In addition, due to market uncertainty and a lack of standards, routine maintenance cost cannot be controlled through list quotation. As such, a dynamic, multifactor cost control methodology has become a significant research topic for fund allocation in the current market.

The main purpose of this paper is to propose a cost-oriented and experience-based control methodology which, integrated with data mining and market performance, can address the dynamic nature of control indexes. In this methodology, two reference indicators, predicted values and confidence intervals, are used as trigger values to judge the reasonableness of actual cost values, determine maintenance treatment planning, and guide capital allocation during the pavement service life span. The influencing factors associated with maintenance cost are analyzed comprehensively. The relevancy degree between maintenance costs and the primary influencing factors, i.e.
traffic volumes, using time, the number of lanes, location, major rehabilitation, and overlays, is calculated in order to investigate the combined effects of the different variables and derive regional maintenance habits. Next, multiple regression analysis and the retail price index are used to establish the prediction model of routine maintenance cost, and the normal distribution is used to calculate the cost interval.

The remainder of this paper analyses the characteristics of major rehabilitation and reconstruction in China in Section 3. Section 4 discusses and calculates the actual case of 18 typical freeways in Guangdong Province, the area with the most developed economy and the most complicated road network in China. All data in this paper come from research conducted over the past seven years. The reliability of the methodology is fully discussed in that section. Section 5 concludes the paper with the main findings. The ultimate goals are to make maintenance management scientific, legalized, standardized, and informatized, and to indirectly alleviate the shortage of provincial highway maintenance funds in ordinary counties.

Control Model of Routine Maintenance Cost

In China, routine maintenance is more regular and has intuitive influencing factors, so a calculation model can be used to control its cost. By contrast, intermediate maintenance and major rehabilitation, which are determined by road conditions, cannot be controlled from historical data alone. To ensure that the model meets the cost control requirements and provides scientific, effective, and practical reference and guidance for cost management, the selected forecasting method should follow these principles:

1. The principle of simplicity. The selected indicators should satisfy a simple relationship between the independent variables and the dependent variable, with no cross relationships or collinearity problems among these indicators. The index data should be easy to obtain. The calculation should be simple and intuitive.

2. The principle of flexibility. The model can be flexibly changed according to the forecast inflation rate and material prices in the current year, then promoted to all provinces in the country, and dynamically adjusted according to the local economic level.

3. The principle of reliability. Considering the worst condition, the forecast traffic volume is used as the basis for establishing the model. The forecast cost curve should be reliable enough to basically cover the actual cost curve, so as to achieve the purpose of controlling the cost.

Evaluation of Factors Affecting Routine Maintenance

In the integrated environment of people, vehicles and roads, there are plenty of complex factors affecting highway maintenance cost. Therefore, to manage the cost reasonably, it is necessary to clarify the characteristics of the influencing factors of maintenance cost.
The existing methods for predicting maintenance cost are as follows: structural relationship estimation, causal relationship estimation, time-series relationship estimation, and estimation methods based on uncertainty mathematical theory [12]. According to the above principles, in this paper causal relationship estimation is selected as the modelling method, combined with qualitative analysis to summarize the influencing factors and establish the functional relationships among them. This estimation method has a fast calculation speed and high precision. In this paper, these factors [15-17] are divided into four categories: human factors, natural factors, road factors, and policy factors, as shown in Figure 1.
The existing methods for predicting maintenance cost are structural relationship estimation, causal relationship estimation, time-series relationship estimation, and estimation based on uncertainty mathematical theory [12]. According to the above principles, causal relationship estimation is selected as the modelling method in this paper, combined with qualitative analysis to summarize the influencing factors and establish the functional relationship among them. This estimation method has fast calculation speed and high precision. In this paper, the factors [15-17] are divided into four categories: human factors, natural factors, road factors, and policy factors, as shown in Figure 1. These factors are numerous and partly correlated with one another. In order to reduce the dimensionality of the variables, the important relevant factors serving as variables should be screened out. In this paper, the questionnaire method is adopted, combined with previous studies [18]. All questionnaires are aimed at professionals in road maintenance and are used to select the most important influencing factors, with at least one factor selected from each category. Through this research, the following five factors are selected: traffic volumes, geographic location, the number of lanes, using time, and maintenance.

Based on the results of the qualitative analysis, the factors differ in how they correlate with maintenance cost. Using time, traffic volumes, and the number of lanes are positively correlated with the increasing trend of routine maintenance cost. After intermediate maintenance and major rehabilitation, pavement performance is restored and the routine maintenance cost is reduced. Besides, differences in economic structure and social status due to geographical location lead to differences in the allocation of maintenance funds.

Model Boundary Condition

Because some data have a long history and were not clearly recorded in the pricing lists, their reliability is doubtful. National inspection (comprehensive measurement every five years in China) and geological disasters also have great impacts on maintenance cost. Therefore, the model should satisfy the following boundary conditions.

First, the proportional relationships in the model should be consistent with the qualitative analysis results.

Second, in order to simplify the model, improve its accuracy, and avoid interference from other factors, only one factor is varied at a time when quantitatively analyzing the proportional relationship between the actual routine maintenance cost and a single factor.

Third, the cost should show a gradual decrease within three years after major rehabilitation or within two years after a pavement overlay.

Data that do not meet the above boundary conditions are doubtful. If the cost increases abnormally, or fluctuates greatly without apparent reason, such data will cause large errors as well as poor stability and fit in the resulting model, and should therefore be verified and excluded.

Model Establishment

In order to construct a precise and reliable model, the modelling process is divided into three steps.

Independent Variable Selection

To simplify the calculation, multiple linear regression and nonlinear regression are combined, and the routine maintenance cost is used as the dependent variable Y to establish the following nonlinear regression model:

Y = K × (a × X1 + b × X2 + c × X3 + d) × e^(X4) × f^(X5)
where X1 represents traffic volumes. The age of some highways has not yet reached the design working life, but the actual traffic volumes are far greater than the predicted volumes owing to the rapid development of the economy in China; therefore, actual traffic volumes are used in this paper. X2 represents using time. X3 represents the number of lanes. X4 represents the major rehabilitation coefficient; this coefficient is considered within three years, and highway expansion and construction are treated as major rehabilitation. X5 represents the overlay coefficient, considered within two years. K represents the regional coefficient, which is related to the location and function of the highway.

Data Processing Flow

The data are processed in the following steps.

First, filter the data according to the model boundary conditions.

Second, convert the maintenance cost to 2016 prices based on the existing retail price index (RPI) [11]. The fixed asset investment price index has a great effect on the cost [14,19]. In this paper, the base year is 2016.

Third, group all data by using time. The highway maintenance period and the national inspection every five years have significant influence, and the quantity of sample data is large. Therefore, the data are divided into four groups: five years or less, five to 10 years, 10 to 15 years, and more than 15 years.

Fourth, use sensitivity analysis to study and establish the functional relationship between the influencing factors and the cost.

Finally, determine the regional coefficients of the different highways on the grounds of the region division and the traffic density distribution.

Parameter Calculation

The parameter values are determined by regression analysis in SPSS. The coefficient of determination R² is used to assess the fit: the closer R² is to one, the better the model fits.

Model Verification and Application

It is doubtful whether this model can be applied to other regions, because it was established on historical data from a few specific roads. Therefore, the maintenance cost of another highway is substituted into the model for verification. The predicted results must be greater than or equal to the actual results; otherwise the relevant coefficients need to be adjusted.

Maintenance Cost Control Interval

The above estimation model for the routine maintenance cost is based on the analysis of a large amount of data, but in practice the maintenance cost is often increased by certain unpredictable factors. Therefore, it is necessary to control the cost according to a control interval. In this paper, the W-test method (Shapiro-Wilk test) [20], which has high sensitivity, is used to perform a normality test on the routine maintenance cost and the intermediate maintenance cost; the test requires the samples to follow a normal distribution, which further improves the sampling efficiency. The test statistic is

W = [Σk a_k (x_(n−k+1) − x_(k))]² / Σi (x_i − x̄)²,

where x_(1) ≤ … ≤ x_(n) are the ordered sample values. If the observed value of W calculated for a sample of any distribution with n ≤ 50 satisfies W_α < W < 1, the sample obeys the normal distribution law x ~ N(µ, δ²), and the value range of x is

(µ − µ_(α/2) × δ, µ + µ_(α/2) × δ),

where µ_(α/2) represents the guarantee rate coefficient (when the freeway confidence level is 95%, µ_(α/2) = 1.96); µ represents the average of the samples; n represents the number of samples; δ represents the standard deviation of the samples; and a_k and W_α can be found in Shapiro-Wilk test tables.
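As a concrete illustration of this screening step, the following minimal sketch runs the Shapiro-Wilk test on a small sample of per-kilometre costs and, if normality is not rejected, computes the control interval in the form reconstructed above (µ ± 1.96δ). It assumes SciPy, which reports a p-value rather than the W_α table lookup described here, and the cost figures are invented placeholders rather than survey data.

```python
# Minimal sketch: W-test (Shapiro-Wilk) screening and control interval.
# Assumes SciPy; the cost figures are invented, not survey data.
import numpy as np
from scipy import stats

costs = np.array([9.1, 10.4, 9.8, 11.2, 10.0, 9.5, 10.9, 10.3])  # 0.01 million CNY/km

w_stat, p_value = stats.shapiro(costs)        # Shapiro-Wilk W statistic
if p_value > 0.05:                            # normality not rejected at alpha = 0.05
    mu = costs.mean()
    delta = costs.std(ddof=1)                 # sample standard deviation
    z = 1.96                                  # guarantee rate coefficient at 95%
    lower, upper = mu - z * delta, mu + z * delta
    print(f"W = {w_stat:.3f}, control interval = ({lower:.2f}, {upper:.2f})")
else:
    print("Sample fails the normality test; the control interval does not apply.")
```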
After the above test, if the maintenance cost meets the requirements of the normal distribution, the control interval is estimated as (µ − µ_(α/2) × δ, µ + µ_(α/2) × δ), and the actual cost should fall within this range [21].

Major Rehabilitation Cost Analysis

In China, asphalt pavement structures are usually applied to freeways. The design period of an asphalt pavement structure is generally 15 years, and that of a cement concrete pavement is 30 years [22,23]. However, the service life of the road surface often does not reach the design life because of overload transportation, reflection cracks, surface quality problems, etc. As traffic volumes increase sharply, road capacity becomes saturated and cannot meet demand. The cost of relieving traffic jams is high, which seriously restricts regional economic development. In general, the cost of major rehabilitation, which can improve road performance and extend the service life of the freeway, is high.

The cost of major rehabilitation is mainly composed of construction and installation fees, equipment and tool purchase expenditure, and other construction costs, covering subgrade engineering, pavement engineering, bridge and culvert works, etc. In this paper, only the relationship between the major rehabilitation cost of typical highways and the maintenance investment, together with the average cost level, is analyzed, because maintenance cost investment depends on highway operation status and road characteristics, and its regularity is poor.

Analysis of Maintenance Cost in Guangdong Province

By the end of 2017, the total mileage of freeways opened to traffic in Guangdong Province was 8338 km, and the density was 4.64 km/100 km². The maintenance mode "a company is responsible for the construction and maintenance of a highway" is widely used. The administration needs to invest a large amount of funds, and the investment in maintenance shows an increasing trend every year. According to research data, the financing gap of trunk highways in Guangdong Province reached 2.47 billion CNY. The average cost level in 2011-2015 was 801,700 CNY/km per year.

Guangdong Province, which has about 11 major outbound corridors, is divided into four major regions: the Pearl River Delta, West, East, and North. Among the four regions, the traffic volumes of freeways in the Pearl River Delta are significantly higher than those of the other regions. Therefore, the difficulty, cost investment, and per-kilometre cost of maintenance work of the same scale are significantly larger than in the other regions of Guangdong.

Routine Maintenance Cost Control Model Establishment

Routine maintenance in Guangdong Province includes daily cleaning and minor repairs. Daily cleaning refers to regular cleaning and daily inspection work. Minor repair projects refer to the treatment of various minor defects and supporting facilities, based mainly on labour, with a small amount of material consumption.

The values of the RPI in Guangdong Province are shown in Table 1. The maintenance cost of the 18 freeways is shown in Table 2 after screening and conversion. The specific parameters are shown in Table 3, and the traffic volumes over the years are shown in Table 4. Traffic growth rates over the last seven years range from 30.2% to 66.5%, so it is necessary to rule out the impact of road network changes on costs.
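To illustrate the second step of the data-processing flow, converting historical costs to the 2016 base year with the RPI, a minimal sketch is given below. It assumes that the RPI is reported as year-on-year indices that must be chained; the index values are placeholders, not the Table 1 figures.

```python
# Minimal sketch: converting historical maintenance costs to the 2016 base year.
# Assumes year-on-year RPI values (in %); the figures are placeholders.
RPI = {2014: 101.8, 2015: 101.3, 2016: 101.6}

def to_2016_prices(cost: float, year: int) -> float:
    """Inflate a cost recorded in `year` to 2016 prices by chaining yearly RPI."""
    factor = 1.0
    for y in range(year + 1, 2017):     # apply each subsequent year's index
        factor *= RPI[y] / 100.0
    return cost * factor

print(to_2016_prices(10.0, 2013))       # a cost of 10.0 recorded in 2013
```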
(A blank in the table indicates that the highway has not been overhauled or overlaid since it was opened to traffic.)

The model was established based on the modelling principles above, in the basic form Y = K × (a × X1 + b × X2 + c × X3 + d) × e^(X4) × f^(X5).

Multi-factor sensitivity analysis would need to consider various combinations of factors and different degrees of change, which is complicated. Therefore, this paper adopts grey correlation analysis. Based on the above data, the relevancy degrees are calculated and shown in Table 5, and the Tornado diagram in Figure 2 further indicates the relevance degrees. The influence level of each factor is sorted as follows: traffic volumes ≈ using time > the number of lanes > location ≈ major rehabilitation ≈ overlay.

The coefficient values are shown in Table 6:

X4, major rehabilitation coefficient: one year after major rehabilitation, 1; two years after, 0.9; three years after, 0.55.
X5, overlay coefficient: one year after overlay, 1; two years after, 0.75.
K, regional coefficient: general highway with poor geographical location and environment, 1.1-1.2; general trunk highway, 1.2-1.3; trunk highway with poor geographical location and environment, 1.5-1.7.

Note: the above values are based on existing research [21] and statistical analysis of large amounts of data. If major rehabilitation or overlay is carried out in consecutive years, the coefficient of the following year is superimposed, and the values can be adjusted according to the actual conditions of each province.

After removing abnormal points with a normalized residual absolute value greater than three under 95% confidence, the SPSS software is used to obtain the routine maintenance cost estimation models of the freeway for the four using-time groups (five years or less; five to 10 years; 10 to 15 years; more than 15 years). The calculated R² values are 0.982, 0.782, 0.652, and 0.998, respectively. Some R² values are not high because of the large amount of data and the doubtful authenticity of part of it. The regional coefficients used in this model can be adjusted according to the relevant divisions in the planning of routine maintenance areas in Guangdong Province.
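The factor ranking above can be reproduced with Deng's grey relational analysis, of which the following is a minimal sketch under one standard formulation (mean normalisation, distinguishing coefficient ρ = 0.5). The series below are invented, not the Table 5 inputs.

```python
# Minimal sketch of Deng's grey relational analysis for ranking factor influence.
# Mean normalisation and distinguishing coefficient rho = 0.5; data are invented.
import numpy as np

def grey_relational_degree(reference, factors, rho=0.5):
    """reference: (n,) cost series; factors: (m, n) factor series.
    Returns one relational degree per factor (higher = more influential)."""
    ref = reference / reference.mean()                  # normalise the cost series
    fac = factors / factors.mean(axis=1, keepdims=True)
    delta = np.abs(fac - ref)                           # absolute difference series
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi.mean(axis=1)                              # relational degree per factor

cost = np.array([8.2, 9.1, 10.4, 11.0, 12.3])           # routine cost over 5 years
factor_matrix = np.array([
    [20.0, 24.0, 30.0, 33.0, 39.0],                     # traffic volume
    [3.0, 4.0, 5.0, 6.0, 7.0],                          # using time (years)
    [4.0, 4.0, 6.0, 6.0, 6.0],                          # number of lanes
])
print(grey_relational_degree(cost, factor_matrix))
```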
Maintenance Cost Control Interval

Based on these historical data, routine and intermediate maintenance costs are assumed to follow a normal distribution. By analyzing each type of maintenance cost and the proportion of the sum of routine and intermediate maintenance costs, the maintenance cost control intervals are obtained as shown in Table 7. The largest ratio of routine maintenance costs between different regions reached 118.02%, and that of intermediate maintenance costs 268.11%. This difference demonstrates the rationality of calculating each road separately, and the finding can be used as a basis for funding allocation.

Methodology Verification

The relationship between the predicted and actual values calculated with the above model is shown in Figure 3: the predicted value curve is basically the envelope of the actual value curve.

To further test the applicability of the model to other freeways in Guangdong Province, the data of two other freeways in the region were investigated (Table 8). Testing the model on this group shows that it basically meets the actual requirements.

In this verification, the predicted cost covers the actual cost. The actual cost of highway 17 is within the confidence interval, while that of highway 18 is not, for the following reasons: (1) the using time of highway 18 is extremely long, so its road performance has seriously deteriorated; (2) its traffic volume is very large; (3) lane expansion has been carried out, making it one of the few eight-lane highways in Guangdong Province.

For economic verification, the methodology was applied to three trunk highways in Guangdong Province. The economic and social benefits are as follows: (1) about 3% of maintenance costs were saved, increasing the investment available for other projects and promoting local development; (2) road capacity and toll income increased thanks to timely and reasonable maintenance measures; (3) the circulation time of personnel and goods was reduced, so industrial development is accelerated, production and sales costs are reduced, and the market for products and human resources is expanded; (4) the technical level of cost management improved, so that the benefits of maintenance funds are maximized.
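Returning to the numerical verification step, the sketch below implements the basic model form and the coverage check used above, flagging roads whose actual cost exceeds the prediction so that the coefficients can be adjusted. The coefficient values and records are placeholders, not the fitted Guangdong values.

```python
# Minimal sketch: the basic cost-model form and the verification coverage check
# (the prediction must envelop the actual cost). All numbers are placeholders.
def predict_cost(x1, x2, x3, x4, x5, k, a=0.1, b=0.5, c=0.8, d=1.0, e=0.9, f=0.8):
    """Y = K * (a*X1 + b*X2 + c*X3 + d) * e**X4 * f**X5."""
    return k * (a * x1 + b * x2 + c * x3 + d) * (e ** x4) * (f ** x5)

def verify(records):
    """records: list of (x1, x2, x3, x4, x5, k, actual_cost) tuples."""
    for i, (x1, x2, x3, x4, x5, k, actual) in enumerate(records, start=1):
        pred = predict_cost(x1, x2, x3, x4, x5, k)
        ok = pred >= actual
        print(f"highway {i}: predicted {pred:.2f}, actual {actual:.2f}, "
              f"{'covered' if ok else 'NOT covered - adjust coefficients'}")

verify([
    (30.0, 8.0, 6.0, 0.0, 0.0, 1.2, 10.5),   # no rehabilitation or overlay
    (45.0, 18.0, 8.0, 1.0, 0.0, 1.3, 19.0),  # one year after major rehabilitation
])
```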
Major Rehabilitation Cost Analysis

The freeways in Guangdong Province, especially those built during the Eighth Five-Year Plan (1991-1995) and the Ninth Five-Year Plan (1996-2000), have begun to enter the peak period of maintenance [24], reconstruction, and expansion. The following three typical freeways were opened to traffic in 1996, and their quality levels at major rehabilitation acceptance were all qualified; these three freeways are therefore comparable. Specific details are shown in Tables 9 and 10. Through analysis and comparison of the costs of rehabilitation, reconstruction, and expansion, the conclusions are as follows:

1. The proportions of major rehabilitation costs to infrastructure investment are 77.93% and 111.65%, respectively, where the excess is less than 15%. Infrastructure investment can therefore basically meet the major rehabilitation cost requirements.

2. The range of major rehabilitation costs is [363.77, 680.62]. However, due to the small number of samples and the individuality of major rehabilitation, the existing data are not sufficient to determine a cost interval.

3. The freeways strictly enforced the contract terms, implemented the bidding system well, controlled changes of engineering quantity and unit price effectively, and proposed optimization plans and other measures throughout the project. Therefore, the total cost at the final audit was within the approved design estimate and was balanced.

4. The average cost level of renovation and expansion was significantly higher than that of major rehabilitation.

Comparison with Existing Methodology

Based on the discussion and calculation of the Guangdong Province highway case, the proposed control index achieves a high fitting degree and remarkable economic and social benefits. However, it is too early to select this methodology as the best strategy to control maintenance costs on the basis of qualitative analysis alone, which makes a comparison with the latest available methods indispensable. Three elements are selected as the comparison criteria: sample capacity, accuracy, and dynamics. The comparison results are as follows:

1. Sample capacity. Small sample capacity makes regression analysis results less resistant to risk. The models established by Chang'an University [11] and Wu [12] only used cost data of highways for a certain year and a certain region, while this paper investigated and screened 18 highways in different geographical locations over nearly seven years.

2. Accuracy. The fitting degrees R² of the existing models are all higher than 0.85 [11-13,15-17], but none of them validated the calculation results and actual benefits, which makes their accuracy suspect. This paper analyzed the correlations, expressed them with a Tornado diagram, and verified the calculation results in depth.

3. Dynamics. Existing analyses of maintenance cost control in China did not consider the discount rate, or converted the cost with a fixed value rather than following market fluctuations. In contrast, this paper converts costs with the RPI over the years and determines the thresholds and principles of the coefficients of the main influencing factors.
Conclusions and Recommendations

This paper analyzed the situation of sustainable pavement maintenance cost in China comprehensively, dynamically, and empirically. The major findings from this study are as follows:

• Multiple statistical methods are used to reduce the dimensionality of the influencing factors of maintenance cost, and a sensitivity analysis is conducted to calculate the impact levels of the different factors. The influence level of each factor is sorted as follows: traffic volumes ≈ using time > the number of lanes > location ≈ major rehabilitation ≈ overlay. Based on this calculation, the administrative department can develop countermeasures and allocate funds reasonably in order to mitigate pavement management risks.

• This paper proposes how to establish control reference indicators of routine and intermediate maintenance costs for the case of the highways in Guangdong. The coefficients of the six major factors mentioned above are summarized through the analysis of historical data and can change dynamically with market conditions. The cost model and confidence interval are determined from the best-fitting perspective. The basic model of routine maintenance cost is Y = K × (a × X1 + b × X2 + c × X3 + d) × e^(X4) × f^(X5), and the control intervals are (8.73, 11.91) and (41.12, 59.32), respectively. These findings proved to bring social and economic benefits when applied in practice to the highways in Guangdong Province.

• Through the analysis of typical highways, the contract terms and bidding system should be strictly implemented in order to guarantee that the total cost meets the design budget requirements. "Construction is more important than maintenance" is still the dominant ideology of highway construction in China. The major rehabilitation cost range cannot be determined in this paper, owing to its uniqueness and individuality, and needs to be studied in depth.

During the investigation, some construction and maintenance costs for rural roads in China were found to be self-raised by residents. The existing PMS is not widely used in practical applications, and road maintenance data are still unavailable and opaque in China at present. It is strongly suggested to conduct a further study on the maintenance costs of rural roads and national roads and to establish an open road-information platform with the potential for popularization.

Patents

The patents and software copyrights generated by the work of this research are under review.

Figure 1. The influence factors of highway maintenance cost.
Figure 3. Verified model. (a) Using time is five years or less; (b) using time is between five and 10 years; (c) using time is between 10 and 15 years; (d) using time is 15 years or more.
Table 1. Retail price index (RPI) in Guangdong Province.
Table 2. Routine maintenance cost of freeways in Guangdong Province (0.01 million CNY/km); blank cells indicate missing parts of the investigation.
Table 3. Freeway model parameter values.
Table 5. Calculated relevancy degree of correlation coefficient.
Table 8. Freeway model parameter values for verification.
Table 9. General situation of typical freeway major rehabilitation in Guangdong Province (0.01 million CNY).
Table 10. General situation of reconstruction and expansion in Guangdong Province.
7,436.8
2019-02-08T00:00:00.000
[ "Engineering", "Environmental Science" ]
Parallel sequence tagging for concept recognition

Background: Named Entity Recognition (NER) and Normalisation (NEN) are core components of any text-mining system for biomedical texts. In a traditional concept-recognition pipeline, these tasks are combined in a serial way, which is inherently prone to error propagation from NER to NEN. We propose a parallel architecture, where both NER and NEN are modeled as a sequence-labeling task, operating directly on the source text. We examine different harmonisation strategies for merging the predictions of the two classifiers into a single output sequence.

Results: We test our approach on the recent Version 4 of the CRAFT corpus. In all 20 annotation sets of the concept-annotation task, our system outperforms the pipeline system reported as a baseline in the CRAFT shared task, a competition of the BioNLP Open Shared Tasks 2019. We further refine the systems from the shared task by optimising the harmonisation strategy separately for each annotation set.

Conclusions: Our analysis shows that the strengths of the two classifiers can be combined in a fruitful way. However, prediction harmonisation requires individual calibration on a development set for each annotation set. This allows achieving a good trade-off between established knowledge (training set) and novel information (unseen concepts).

Supplementary Information: The online version contains supplementary material available at 10.1186/s12859-021-04511-y.

The second of these steps, mapping detected mentions to unique identifiers, is also referred to as named entity normalisation (NEN), linking, or grounding. Typically, the two steps are performed in a sequential manner, using a sequence classifier for NER and a ranking- or rule-based module for NEN. While this approach allows focusing on different methods for the individual steps, it suffers from error propagation, an inherent drawback of any pipeline architecture. For example, a certain NEN system might have excellent accuracy when using ground-truth spans as input, but its performance will decrease when operating on the imperfect output of a span tagger. In particular, a normaliser might be inclined or even forced to predict a concept ID for spurious spans, and it cannot recover from cases where a span is missing.

In this work, we investigate an alternative architecture for concept recognition, which alleviates the problem of error propagation: parallel sequence tagging for NER and NEN. In this architecture, NEN is modeled as a sequence-classification problem (like NER) and applied to the input text independently of the span tagger. The predictions of the two taggers are harmonised using different strategies, the choice of which is a hyperparameter of the complete system. We test our approach with a manually annotated dataset for biomedical concepts, the CRAFT corpus, continuing the efforts from our participation in the CRAFT shared task 2019.

Related Work

Concept recognition has often been approached as a pipeline of NER+NEN. For NER, sequence labeling with conditional random fields (CRF) has dominated the field to the present, be it pure CRF as in Gimli [1] or DTMiner [2], on top of a recurrent neural network as in HUNER [3], Saber [4], or DTranNER [5], or even as the head of a BERT-based system as in SciBERT [6]. BERN [7] performs NER by fine-tuning BioBERT alone, even though [8] report improved results when stacking CRF atop BioBERT.
Different approaches have been taken to NEN, where extracted mentions are mapped to a vocabulary: exact match as in Neji [9], expert-written rules [10], learning-to-rank as in DNorm [11], linking through an ontology using word embeddings and syntactic re-ranking [12], or sequence-to-sequence prediction [13]. Knowledge-based concept-recognition systems like the Jensen tagger [14] or NOBLE coder [15] do not allow for a clear separation between NER and NEN, as span detection and linking happen at once, even if machine-learning components are added for improving accuracy, as in OGER++ [16] or RysannMD [17]. Joint approaches like TaggerOne [18], JLink [19], and others [20,21], however, have separate modules for NER and NEN, which are trained simultaneously. The multi-task sequence-labeling architecture for NER and NEN in [21] has been highly inspirational for the present work, although we were unable to reproduce their results, even using the code that the authors made publicly available.

CRAFT corpus and shared task

The Colorado Richly Annotated Full-Text (CRAFT) corpus [22,23] is a collection of 97 scientific articles from the biomedical domain. It is manually annotated for syntactic structure, coreferences, and bio-concepts (entities), the last of which are used in the present study. In the latest release (Version 4), the concept annotations are divided into 10 sets of different entity types, which are provided in two versions each (proper and extended). The extended annotations are referred to by appending EXT to the abbreviations for the proper annotations (CHEBI_EXT, CL_EXT, etc.).

The CRAFT corpus has been used in a range of studies. Through repeated improvements and extensions over time, the corpus has become a high-quality resource with rich annotations, but this also led to the situation that most experiments are not directly comparable to each other, as their setups differ in many ways. In the first release of the CRAFT corpus, only 67 articles were available. The remaining 30 documents were not released until the evaluation period of the CRAFT shared task 2019 [32], where they served as a test set. This competition was part of the BioNLP Open Shared Tasks and comprised three core NLP tasks, where participating systems were evaluated against the ground-truth annotations of Version 4 of the CRAFT corpus. However, most prior work on concept recognition was carried out with an older version of CRAFT, i. e. using a different test set, possibly an earlier stage of annotations, and a different evaluation method, which means that results are not directly comparable. While the majority of studies are concerned with concept recognition (i. e. systems that predict IDs), some are restricted to NER, e. g. [4,33,34]. Methodologically, the approaches range from purely dictionary-based [15,35] to entirely example-based systems [36], even though the NEN step almost always includes dictionary lookup. Since no official test set was available prior to Version 4, many experiments use an arbitrary train/test split [37] or apply evaluation to the entire corpus [9]. The metrics used are consistently precision, recall, and F-score, but differences exist with respect to considering partial matches. Also, many studies do not cover the full set of annotations, but rather focus on a small selection of entity types, such as Gene Ontology [38] or gene mentions [33].
Methods

We propose a paradigm for biomedical concept recognition where named entity recognition (NER) and normalisation (NEN) are tackled in parallel. In a traditional NER+NEN pipeline, the NEN module is restricted to predicting concept labels (IDs) for the spans identified by the NER tagger. In order to avoid the error propagation inherent to this serial approach, we drop this restriction and provide the full input sequence to the normaliser. As such, we cast the normalisation task as a sequence-tagging problem: very much like an NER tagger, but with a considerably larger tag set, consisting of all concept IDs of the training data.

Design implications

Modeling concept normalisation as sequence tagging has a number of drawbacks. As discussed in the next section, the CoNLL representation of the data enforces exactly one label for each token, which disallows learning and predicting annotations with overlapping and discontinuous spans. This representation also entails that the model has to produce a consistent series of individual predictions in order to correctly label a multi-word expression. This often means that highly ambiguous tokens like prepositions, numbers, or single letters must be interpreted correctly in context (e. g. "of" in "inhibitor of calpain", "I" in "hexokinase I"). As the most serious limitation, a sequence tagger can only ever predict labels it has seen during training, which in many cases restricts the label set of the trained system to a fraction of the target label set (the ontology). Since many concepts occur extremely rarely in the biomedical literature (cf. Fig. 1), this limitation might not critically reduce performance measured on a typical evaluation data set. However, it is highly undesirable to have a tagger that is completely incapable of predicting labels beyond the training set. On the other hand, the ID-tagging architecture is technically an end-to-end concept-recognition system, i. e. it does not depend on any span predictions, which means that the NER step could potentially be skipped entirely. However, due to its small number of tags, span tagging is far more robust with respect to ambiguous tokens and unseen concepts. By adding span predictions, we might thus be able to overcome the limitations of direct ID tagging. Therefore, we chose to combine the strengths of span and ID tagging by applying both in parallel and merging the results in postprocessing.

Data preparation

Our system processes documents in a variant of the CoNLL format, i. e. a verticalised format where each text token is assigned exactly one label. Based on our architecture with two sequence classifiers, we employed two different label sets. For the span tagger, the text is tagged with IOBES labels, i. e. each token is assigned one of the five labels I, O, B, E, or S. Entities spanning only a single token are annotated with S. For multi-word entities, the first and last tokens are tagged with B and E, respectively, and any intervening tokens with I. The rest of the text (i. e. all tokens outside of an entity) is annotated with O. For the ID tagger, all tokens of an entity are tagged with the respective concept ID. We added a NIL label to mark non-entity tokens, analogously to the O tag of the span tagger. This representation does not have the same expressiveness as the stand-off format used in CRAFT, which offers great flexibility for anchoring annotations in the text.
In particular, the CRAFT corpus contains discontinuous annotations (multiple non-adjacent text spans for the same annotation), overlapping annotations (words shared by multiple annotations), and sub-word spans (annotations referring to part of a word). Since these complex annotations cannot be represented with token-level labels, their structure needs to be simplified. In order to measure the performance impact of this simplification, we converted the reference annotations of the training set to CoNLL format and back to stand-off using the standoff2conll suite [39]. This utility offers two strategies for unifying discontinuous annotations (full-span and last-span), to which we added a third option (first-span) [40]. For unnesting overlapping annotations, two strategies are available as well (keep-longer and keep-shorter). The effect of unifying and unnesting annotations is illustrated in Fig. 2. Sub-word annotations are extended to span entire tokens. After this round-trip conversion, the annotations are run through the official evaluation suite provided by CRAFT [41]. Table 1 shows the results for different combinations of unification and unnesting strategies on the non-extended annotation sets. These numbers mark the upper limit for a system trained on input data in CoNLL format. For all annotation sets, using the first-span and keep-longer strategies achieved the highest F-score.

Architecture

The sequence taggers used in our experiments are built atop a pretrained language-representation model, BioBERT [42], which in turn extends BERT [43]. BERT is an attention-based multi-layer neural network which learns context-dependent word-vector representations. It creates bidirectional contextual representations of a token from unlabeled text conditioned on the left and the right context. BERT is trained to solve two tasks: first, to predict whether two sentences follow each other, and second, to predict a randomly masked token from its context. After a slight modification to its architecture, training of BERT can be continued on a different task like NER; this process is referred to as fine-tuning with a task-specific head. For our experiments, we downloaded BioBERT v1.1, which includes code, configuration, and pretrained parameters. BioBERT is based on BERT BASE, which was pretrained for 1M steps by Devlin et al. [43] on a 3.3B-word corpus from the general domain.

To perform NER and NEN in parallel, we used two different tag sets for fine-tuning, as described in the previous section: IOBES labels for the span tagger and the set of all concept IDs for the ID tagger. In addition, both taggers used a small set of tags inherited from the original BERT implementation, which flag tokens with a special function, such as padding, sub-word units, and sentence boundaries. We trained a pair of span and ID taggers for each annotation set, which resulted in a total of 40 individual models. The predictions of the span tagger are always aligned with the IDs produced by a dictionary-based concept-recognition system, OGER [16,44]. OGER detects mentions of ontology terms in running text through efficient fuzzy matching. We manually optimised OGER's configuration on the CRAFT training set. We used no additional terminology resources besides the ontologies provided with the corpus; however, we manually added a handful of synonyms for GO_MF.
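As a concrete illustration of the two label sets, the sketch below derives IOBES and ID tag sequences from simple contiguous, non-overlapping span annotations. The function and data layout are our own illustration, the concept ID is hypothetical, and the discontinuous and nested cases discussed above are deliberately out of scope.

```python
# Minimal sketch: encoding contiguous, non-overlapping annotations as the two
# token-level label sets (IOBES for the span tagger, concept IDs for the ID
# tagger). The concept ID below is hypothetical.
def encode(tokens, annotations):
    """tokens: list of strings; annotations: list of (start, end, concept_id)
    with token indices, end exclusive. Returns (iobes_tags, id_tags)."""
    iobes = ["O"] * len(tokens)
    ids = ["NIL"] * len(tokens)
    for start, end, concept_id in annotations:
        if end - start == 1:
            iobes[start] = "S"                      # single-token entity
        else:
            iobes[start] = "B"                      # first token
            iobes[end - 1] = "E"                    # last token
            for i in range(start + 1, end - 1):
                iobes[i] = "I"                      # intervening tokens
        for i in range(start, end):
            ids[i] = concept_id                     # every entity token gets the ID
    return iobes, ids

tokens = ["hexokinase", "I", "activity"]
print(encode(tokens, [(0, 2, "PR:000001")]))
# (['B', 'E', 'O'], ['PR:000001', 'PR:000001', 'NIL'])
```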
This combined system of neural span tagger and dictionary-based OGER features resembles a classical NER+NEN pipeline, where the high-recall output of the dictionary-based system is combined with context-aware span detection using an example-based classification model.

Hyperparameter tuning

In order to determine the best hyperparameters for each annotation set, we performed an extensive grid search in cross-validation over the training set. In particular, we investigated the following configurations:

- ontology pretraining: enable/disable
- abbreviation expansion: enable/disable
- prediction harmonisation: 6 strategies

If ontology pretraining is enabled, the ID classifier is trained on synonym-ID pairs from the terminology for 20 epochs before switching to the actual training corpus. For abbreviation expansion, we first used Ab3P [45] to detect abbreviation definitions, then replaced occurrences of short forms with the corresponding long form. For harmonising the predictions of the two classifiers, we compared six different strategies; these are described in the next section.

From previous experiments [46], we knew that ontology pretraining has a positive effect for some, but a negative effect for other annotation sets. We therefore concluded that hyperparameters had to be tuned individually for each of the 20 annotation sets. In order to obtain reliable figures, we performed 6-fold cross-validation with up to 3 runs for each combination. As we expected, ontology pretraining yielded a mixed picture. In many cases, a clear decision was not possible, as repeated runs gave contradictory results. Unexpectedly, abbreviation expansion showed a clear improvement only for CL and a slight improvement for GO_MF; in all other cases (including CL_EXT and GO_MF_EXT) the results decreased. We decided to disable both ontology pretraining and abbreviation expansion, as the isolated merits do not justify the added complexity. For prediction harmonisation, the best strategy for each annotation set is given in Table 2 and discussed in the following section. The full results for the whole tuning phase are included in Additional file 1.

Harmonising predictions

The predictions of the span and ID classifiers are not guaranteed to agree, even if trained jointly. Disagreement occurs if the span classifier predicts a relevant tag (B, I, E, S) for a particular token while the ID classifier predicts NIL, or, conversely, if the ID classifier predicts a specific concept for a token tagged as irrelevant (O) by the span classifier. In addition, the dictionary feature of the knowledge-based entity recogniser might or might not agree with the neural predictions. This results in 2 × 2 × 2 = 8 prediction patterns concerning the relevance of a given token. We considered four different strategies for harmonising conflicting predictions: spans-only, ids-only, spans-first, and ids-first (cf. Fig. 3). These strategies are heuristics with a predetermined bias towards one of the two classifiers. Two additional strategies (mutual and override), which use the confidence scores for balancing the classifiers, consistently produced worse results compared to the simpler bias strategies. The score-based strategies are thus not discussed here; however, we used and described the mutual strategy when participating in the CRAFT shared task [46, p. 188]. The systematic application of different harmonisation strategies is one of the major differences of this work compared to the work presented at the shared-task workshop. With the spans-only strategy, the ID predictions are completely ignored.
In order to provide a concept label, the span predictions are combined with the dictionary feature provided by OGER; in case of multiple features, an arbitrary decision is taken (lexically lowest ID). Since a concept label is always required, span predictions without a supporting feature have to be dropped. With the ids-only strategy, the predictions are based primarily on the ID predictions, whereas the span predictions are overridden (e. g. the span tag cannot be O when the ID classifier predicts a non-NIL concept). The dictionary feature is ignored in the decision. The spans-first and ids-first strategies are combinations of the previous two. With the former, the spans-only strategy is applied first, backing off to the ids-only strategy if the outcome is O-NIL. Analogously, the ids-first strategy gives preference to ids-only. An example with partially disagreeing predictions is given in Figure 4.

We compared the effect of the different strategies in a 6-fold cross-validation over the training set. For each annotation set, we determined the best harmonisation strategy based on F-score according to the official evaluation suite. As shown in Table 2, using both span and ID predictions was beneficial most of the time. In many cases, the same strategy worked best for the proper and extended classes. Intuitively, the choice of spans-only for proteins makes sense, as PR[_EXT] shows an exceedingly high number of different concepts with a small overlap between training and test data, which is a tough scenario for the ID tagger. Conversely, entity types with a limited number of distinct concepts in the corpus, like sequences and organisms, rely more heavily on the ID tagger. The choice of harmonisation strategy was fixed as a hyperparameter for the test-set predictions.

Fig. 4. Predictions for PR on a short phrase, harmonised with the ids-first strategy. Using the spans-only or spans-first strategy would yield the same result in this example, since the ID and span predictions are identical for "Hexokinase I".

Results and discussion

We evaluated our concept-recognition system using the official evaluation suite [41]. Performance is measured in terms of F-score, i. e. the harmonic mean of precision and recall, and slot error rate (SER) [47]. Both metrics are based on the counts of matches (true positives), substitutions (partial errors), insertions (false positives), and deletions (false negatives). Partially correct predictions are assigned a similarity score m in the range [0, 1], which measures the accurateness of the predicted spans and concept labels [48]. The similarity score incorporates a notion of textual overlap (Jaccard index at the character level) and a weighted measure of shared ancestors in the ontology hierarchy, as introduced in [49]. The fractional value m is added to the match count, whereas the remainder 1 − m is counted as a substitution. While precision, recall, and F-score are figures of merit ranging from 0 (worst) to 1 (best), SER is a measure of error that assigns 0 to a perfect system and higher values to lower performance. Even though the values for SER and F-score often correlate, they are not guaranteed to produce identical rankings. In particular, SER is more sensitive to false-positive errors than F-score, and low precision has a stronger impact on SER than low recall. Please note that perfect scores cannot be reached by our systems due to limitations in the input representation, as explained in the Data preparation section.
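To make the four bias strategies concrete, the following minimal sketch merges the predictions for a single token. The function names and data layout are our own illustration, not the authors' code; it simplifies to a single dictionary ID per token and uses S as a placeholder span tag when an O tag must be overridden.

```python
# Minimal sketch of the four harmonisation strategies, applied per token.
# span_tag: one of I/O/B/E/S; id_pred: a concept ID or "NIL"; dict_id: a
# dictionary (OGER) feature or None. Returns (span_tag, concept_id), or None
# when the token is judged irrelevant. Concept IDs below are hypothetical.
def spans_only(span_tag, id_pred, dict_id):
    # Span predictions need a supporting dictionary ID, else they are dropped.
    if span_tag != "O" and dict_id is not None:
        return span_tag, dict_id
    return None

def ids_only(span_tag, id_pred, dict_id):
    # The span tag is overridden for consistency with the ID prediction;
    # using "S" for the O-tag case is a simplification.
    if id_pred != "NIL":
        return (span_tag if span_tag != "O" else "S"), id_pred
    return None

def spans_first(span_tag, id_pred, dict_id):
    # Back off to ids-only if spans-only yields O-NIL (here: None).
    return spans_only(span_tag, id_pred, dict_id) or ids_only(span_tag, id_pred, dict_id)

def ids_first(span_tag, id_pred, dict_id):
    return ids_only(span_tag, id_pred, dict_id) or spans_only(span_tag, id_pred, dict_id)

print(spans_first("S", "NIL", "CHEBI:0001"))   # ('S', 'CHEBI:0001')
print(ids_first("O", "PR:000001", None))       # ('S', 'PR:000001')
```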
The results for our parallel NER+NEN system are given in Table 3. The scores are compared to our systems developed for the shared task [46] and to the official baseline published in the workshop overview [32]. Our system consistently achieves better scores than the baseline, which is a pipeline with a CRF-based span tagger and a BiLSTM-based concept classifier that were also trained on the CRAFT corpus alone. For most annotation sets, our current system performed better than the best system presented in the shared-task paper, with the exceptions of GO_MF_EXT and PR_EXT. For NCBITaxon_EXT and PR, the comparison is inconclusive, as SER and F-score give contradictory rankings. Unfortunately, comparison with other systems is difficult because the complete CRAFT corpus was not available before the shared task. Previously published results on the CRAFT corpus (such as [50]) are based on a different (and smaller) version of the corpus.

Effect of harmonisation

In order to measure the effect of the different harmonisation strategies, we evaluated all four strategies on the test set, as shown in Fig. 5. This study also serves as a validation of our hyperparameter-tuning approach, i. e. whether cross-validation on the training set can be used to reliably pick the best-suited harmonisation strategy. For the majority of the annotation sets, the picked strategy also worked best for the test set. Where the picked strategy was not the best (GO_MF_EXT, MOP[_EXT]), the difference to the top-performing strategy was comparatively small.

Unseen concepts

As stated above, a major limitation of trained sequence labeling for IDs is the inability to predict concepts not seen among the training examples. An important goal of combining the ID tagger with a span tagger and dictionary-based predictions is to overcome this limitation. To study the effect of the different harmonisation strategies on unseen concepts, we performed another evaluation on a subset of the annotations. To this end, we filtered both the ground truth and the predictions of the test set to contain only annotations with concept labels that are not used in the training set. Table 4 shows precision and recall scores as well as annotation counts for the subset of unseen concepts. The ids-only strategy is omitted from the table, as this configuration can never predict unseen concepts. The spans-only and spans-first strategies systematically yield identical results, as they only differ in cases where the latter backs off to ID predictions, which have been filtered out in this evaluation. With the ids-first strategy, many span predictions for unseen concepts are shadowed by an ID prediction for a concept known from the training set (which is then ignored in this specific evaluation). For some annotation types (e. g. CHEBI[_EXT], GO_BP[_EXT], SO[_EXT]), the removal of known concepts improves precision, i. e. more false positives than true positives were removed. In other cases, precision suffers from the removal. Recall decreases in all cases, as is to be expected for an evaluation that focuses on more difficult examples.

Interpretation

Tackling concept recognition for multiple entity types with a single architecture is very challenging, even if a separate model is trained for every annotation set. The comparative results for the different harmonisation strategies (Figure 5) illustrate well how some annotation sets profit more from the span tagger (blue, left-most bars), others more from the ID tagger (red, right-most bars).
In many cases, merging predictions from the two taggers (middle bars) yields better results than relying on a single tagger (outer bars). This preference does not directly correlate with ontology size: the two annotation sets with the largest ontologies (NCBITaxon and PR) show quite distinct result patterns. However, it is possible to empirically determine how well each harmonisation strategy suits the characteristics of a given annotation set. Using cross-validation over the training set resulted in robust estimations for ranking the harmonisation strategies.

The diversity of the individual annotation sets shows even more clearly when it comes to predicting unseen concepts. In general, the level of precision and recall for unseen concepts varies greatly across annotation sets, as does the number of unseen concepts in the reference (cf. Table 4). There is a loose negative correlation to the performance on the entire test set: annotation sets like NCBITaxon[_EXT] and SO[_EXT] show high overall scores and low scores for unseen concepts, whereas more variable annotation sets show the opposite pattern. A possible explanation is that the former annotation sets have little variability and a high overlap between training and test set, leading to a strong bias for known concepts (overfitting tendency), which is beneficial for the test set as a whole, but not for the subset of unseen concepts. The latter annotation sets show great variability of concept labels and surface names in the training data, which makes the task harder but also leads to better generalisation, as the classifier cannot achieve good performance by only learning a few frequent concepts.

Table 3. Results for our current BioBERT system, the best system reported in the shared-task paper [46], and the official baseline. For the shared-task systems, the results were selected independently for SER and F-score, i. e. the two scores for a given annotation set do not necessarily come from the same system. For the baseline and the current BioBERT system, only one system was evaluated per annotation set.

Error analysis

We performed an analysis of prediction errors in order to find potential weaknesses or systematic mistakes. As expected, many errors are false negatives due to missing training examples. There are several cases where spelled-out mentions are matched, whereas their abbreviated versions are missed. For example, "olfactory tubercle" is correctly linked thanks to the dictionary-based predictions, while the ad-hoc acronym "OT" is missed. False-positive predictions are also frequently seen among abbreviations, which have an increased likelihood of being ambiguous. For example, the short-hand "NF" denotes either "neurofilament" or "nuclear factor" in the training set, which cannot always be correctly distinguished by the classifier.

Table 4. Precision and recall for unseen concepts in the test set. For each annotation set, the number of annotations (ref. count) in the test set is given, counting both occurrences (occ.) and unique labels (unique). A dash for precision and recall means that the corresponding system did not predict any unseen concepts at all (neither true nor false positives).

At first sight, it seems like abbreviation expansion should be able to alleviate errors like these. Replacing short forms with their corresponding long forms increases the chances of a dictionary match and, since it is performed within document scope, potentially reduces ambiguity.
However, abbreviation expansion is not guaranteed to work perfectly and can be a source of confusion even if it does. For example, "OT" was correctly expanded to "olfactory tubercle". Unfortunately, this misguided the classifier into labeling the term as olfactory bulb, as the first token was only used for this concept in the training data. In our experiments, the net effect of abbreviation expansion was negative, as stated above in the Hyperparameter tuning section.

Sometimes, spurious predictions are caused by a substring shared with a training example. Since the WordPiece tokeniser used in (Bio)BERT cuts unknown words into sub-word segments, the classifier sometimes associates a concept label with a fraction of a word, which might trigger false positives in unexpected contexts. As an extreme example, mentions of "PDGFR", "PFK", "PKD", "PI3K", and "PFKD" are erroneously linked to phosphoglycerate kinase (abbreviated "PGK"). This is most likely due to the shared initial letter, as the terms do not refer to semantically similar concepts (even though PFK and PI3K are also kinases). Similarly, "forkhead" is linked to fork, "polymorphonuclear" is linked to nucleus, and "prosensory" is linked to forebrain (after the synonym "prosencephalon" seen in the training data).

In some cases, the chosen harmonisation strategy prefers an erroneous label over a correct one. For example, the term "monkey" is linked to mouse by the ID tagger due to context (training: "mouse kidney", test: "monkey kidney"). Since the NCBITaxon systems are harmonised with the ids-first strategy, this erroneous prediction overrides the correct annotation from the dictionary-based tagger. Conversely, the dictionary predictions for "insulin" always link to PR:000009054, a specific protein. In the ground truth, however, the more general concept PR:000045358 is used throughout the corpus, which denotes a family of proteins. Even though the ID tagger produces correct labels, the spans-first strategy used for PR gives precedence to the dictionary predictions in these cases.

Another interesting category of errors are the ones that were amended through the system improvements, i. e. spurious and missing annotations from the shared-task system that are correctly predicted by the current system. A frequent case is short spans by the shared-task system, such as "Ephrin" instead of "Ephrin-B1" for PR or "X" instead of "X-Gal" for CHEBI, which are now correctly recognised. Another recurring pattern is incorrect IDs, such as "benzodiazepine" linked to CHEBI:16150 (benzoate) rather than CHEBI:22720 (the correct ID, found by the current system). Furthermore, coverage of frequent terms has improved: for example, the shared-task system found "Staphylococcus Aureus" in some contexts but missed it in others, which are correctly identified by the current system.

Conclusions

In this work, we present a concept-recognition architecture for parallel NER and NEN. Compared to a sequential NER+NEN pipeline, our approach avoids error propagation from the span-detection step to the normalisation step. Modeling NEN as a sequence-labeling task allows it to operate directly on running text, at the cost of restricting the label set of the normaliser to the concepts from the training set. We counter these limitations by fusing its predictions with the output of a span detector and a knowledge-based concept recogniser.
In the CRAFT shared task and in the current study, we have shown that parallel concept recognition can outperform a pipeline system created specifically for the CRAFT corpus. Merging the predictions of a span and an ID tagger is a fruitful way of combining their complementary strengths. However, the specifics of interpolating between span and ID predictions are subject to further research. We took an empirical approach to pick the best harmonisation strategy for each annotation set. For future work, we intend to test our approach on other datasets. Even though the CRAFT corpus allows validating systems on a broad range of entity types, there is little opportunity for direct comparison to competing approaches at the time of writing: to the best of our knowledge, there are no published results for the latest version (Version 4) of CRAFT besides the shared task.
6,899.2
2020-03-16T00:00:00.000
[ "Computer Science" ]
Voltammetric determination of itopride using a carbon paste electrode modified with Gd-doped TiO2 nanotubes

In the present work, TiO2 nanotubes (TNT) were synthesized by alkaline hydrothermal transformation and then doped with Gd. Doped and undoped TNTs were characterized by TEM and SEM. The chemical composition was analyzed by EDX, Raman, and FTIR spectroscopy, and the crystal structure was characterized by XRD. A carbon paste electrode was fabricated and mixed with Gd-doped or undoped TNT to form a nanocomposite working electrode. In a comparison of the bare carbon paste electrode and the Gd-doped and undoped TNT carbon paste electrodes for voltammetric analysis of 1.0 × 10−3 M K4[Fe(CN)6], the Gd-doped TNT modified electrode showed the advantage of high sensitivity. The Gd-doped TNT modified electrode was used as the working electrode for the itopride assay in a pharmaceutical formulation. Cyclic voltammetry analysis showed a high correlation coefficient of 0.9973 for itopride (0.04-0.2 mg/mL), with limit of detection (LOD) and limit of quantitation (LOQ) values of 2.9 and 23.0 µg·mL−1, respectively.

Electrochemical analysis methods have grown in recent years as alternatives to other analytical methods [18-20]. Electroanalytical methods have several advantages that make them attractive alternatives, such as high sensitivity and selectivity, low instrumentation and running costs, easy handling, and short analysis times. Several parameters play a role in the performance of electroanalytical methods; one of them is the working electrode. Many studies have been devoted to enhancing the performance of working electrodes. In this field, the carbon paste working electrode has achieved special importance, owing to its simple fabrication, low cost, and wide potential window. Various materials, such as nanoparticles, can be added to the carbon paste mixture during preparation to enhance the sensitivity and selectivity of the electrodes [21,22].

Titanium dioxide (TiO2) is a metal oxide semiconductor considered an ideal material for widespread environmental and medical applications [23-26]. TiO2-based nanomaterials such as nanotubes have been intensively studied and widely used owing to their excellent electrolytic and electrolysis performance, high chemical stability and efficiency, nontoxicity, and low cost [27-30]. The high cation exchange capacity of titanium dioxide nanotubes (TNT) makes it possible to achieve a high loading of active compound, which makes them among the best sensor materials. Moreover, the high specific surface area and the absence of micropores in TNTs facilitate the transport of reagents to the active sites. The band gap between the valence and conduction bands limits their activity [31]. To address this issue, doping with rare-earth elements, which have large atomic numbers, has been pursued [32-35]. Rare-earth elements have rich electronic energy levels, which improve the photocatalytic and electrocatalytic activity of TiO2.

In this study, TiO2 nanotubes were prepared, doped with Gd, and then characterized. Carbon paste electrodes were mixed with different doses of TiO2 nanotubes and Gd-doped TiO2 nanotubes, and cyclic voltammetry was carried out to study the performance of each fabricated electrode; the electrode that exhibited the best performance was then used for the voltammetric analysis of itopride in a pharmaceutical formulation.
Materials and reagents The standard pharmaceutical formulation of itopride hydrochloride was obtained from Trium pharma (Jordan), and anhydrous sodium sulphate (Na2SO4) from Janssen Chemica. The supporting electrolyte, 1.0 M Na2SO4, was prepared using Milli-Q water and was used for the preparation of stock solutions and standard working solutions. K4Fe(CN)6·3H2O was obtained from Sigma Aldrich, graphite powder from BDH, and paraffin liquid light BP from Pacegrove. Synthesis of TNTs The preparation of TNT and Gd-doped TNT was based on alkaline hydrothermal transformation. A weighed amount of TiO2 powder [P25 (99.5%, 21 nm), Sigma-Aldrich, USA] was added to 30 mL of 10 mol dm−3 potassium hydroxide solution [KOH, Sigma-Aldrich, USA]. After stirring for 30 min, the mixture was transferred into a Teflon-lined stainless-steel autoclave and heated for 24 h at 150 °C. The white powdery precipitate was thoroughly washed with deionized water, then with dilute HCl until the pH of the washing solution reached 6.5, and then with deionized water again, followed by drying for 10 h at 90 °C and calcining at 400 °C for 2 h. Gd-TNT was synthesized by adding Gd(NO3)3 to the TiO2 in the KOH solution, followed by the hydrothermal and postsynthetic treatments described above for the undoped TNT [35]. Characterizations The morphology of undoped TNT and doped Gd-TNT was examined by transmission electron microscopy (TEM, JEOL JEM 1400, Japan) and scanning electron microscopy (SEM, Superscan SS-550, Shimadzu, Japan). The chemical composition was analyzed by energy dispersive X-ray analysis (EDX, Superscan SS-550, Shimadzu, Japan). The crystal structure of the as-prepared sensors was characterized by X-ray diffraction (XRD, Shimadzu, XRD-7000, Japan) at 40 kV and 30 mA, using a Cu Kα incident beam (λ = 0.154 nm). Raman spectroscopy was performed on a Raman microscope (Sentrarra, Bruker, USA) from 50 cm−1 to 1200 cm−1. Infrared (FT-IR) absorption spectra of the KCl disks containing the powder samples were recorded on a Thermo IS-10 FT-IR spectrometer (Thermo Fisher Scientific Inc., Madison, WI, USA) at a resolution of 4 cm−1 in the range of 400–4000 cm−1. Modified carbon paste electrode fabrication For the fabrication of the modified carbon paste electrodes, graphite powder, TNT, and Gd-TNT were mixed as shown in Table 1. The powder mixture was dispersed in 1.0 mL dimethylformamide (DMF) and homogenized for 20 min in an ultrasonic bath. The DMF was then evaporated in an oven at 80 °C overnight. The dry mixture was mixed with 100 µL of paraffin oil using a spatula, and a micropipette tip with a 2 mm opening was filled with the resulting paste. For the electrical connection, a copper wire was passed through the edge of the tip. Voltammetric analysis apparatus A potentiostat (Metrohm Autolab PGSTAT 204) was used for the voltammetric measurements. All measurements were carried out using a three-electrode system, in which Ag/AgCl (3 M KCl) was used as the reference electrode, a platinum (Pt) sheet as the counter electrode, and the fabricated carbon paste as the working electrode. XRD analysis The observed diffraction peaks were attributed to anatase TiO2, which agrees with previous studies [35,36] and reveals the characteristic anatase phase of the sensors. This result also provides evidence that Gd was introduced into lattice or interstitial sites of TiO2. FTIR analysis IR spectra of TNT (a) and Gd-TNT (b) are depicted in Figure 4.
Both spectra display a broad band at around 3432.53 cm−1, corresponding to surface-adsorbed water and hydroxyl groups in the tubular structure of the sensors. The large number of hydroxyl groups on the sensor walls enhances their ability to capture photoexcited electrons and holes to produce reactive oxygen species in photocatalysis [40]. Electrochemical performance of fabricated electrodes Cyclic voltammetry was carried out to study the electrochemical performance of the fabricated electrodes. Figure 5 shows voltammograms of 1.0 × 10−3 M K4[Fe(CN)6] with the fabricated working electrodes. It can be concluded that doping TiO2 with Gd enhances both anodic and cathodic peak currents: the G2 electrode anodic peak current reaches 56 µA, compared with 31 µA for the bare carbon paste electrode. Furthermore, the G2 cathodic peak current reaches 52 µA, the highest among the studied electrodes. According to the voltammograms of G1 and G2 in Figure 5, increasing the Gd-TNT portion in the fabricated electrodes has a positive impact on electrode sensitivity. A comparison was then made between the fabricated working electrodes for the itopride pharmaceutical formulation assay. The voltammograms in Figure 6 show a significant difference in performance between the studied working electrodes: there is a drastic increase in the anodic peak current of the G2 working electrode for itopride compared with the G1, bare carbon paste, and (F) electrodes. When G2 was used as the working electrode, it showed a ∆Ep of 1.1 V (where ∆Ep = Epa − Epc), which is greater than the value of 59/n mV expected for a reversible system [42], suggesting that itopride has irreversible behavior in aqueous medium at the G2 working electrode. Influence of scan rate (υ): The itopride oxidation mechanism was investigated by studying the effect of the scan rate on the electrode response, with applied scan rates ranging from 40 to 180 mV/s. The results are summarized in Figure 7: as the scan rate increases, the anodic peak current increases (Figures 7a and 7b) and the anodic peak potential (Epa) shifts positively (Figure 7a), confirming the irreversibility of the electrode process. Figure 7c shows the log(ip) vs. log(υ) plot, which verified a linear relationship with a slope of 0.693. This slope is closer to 0.5, the theoretical value indicating a redox process controlled purely by diffusion mass transport, than to 1.0, which typically indicates redox processes controlled by adsorption [42,43]. Analytical performance To evaluate the performance of the fabricated sensor (G2), a calibration curve was established under the acquired optimum conditions for the itopride assay in a pharmaceutical formulation. Figure 8 shows cyclic voltammograms of itopride in a pharmaceutical formulation (0.04–0.2 mg/mL). The standard calibration curve shows a high correlation (R² = 0.9973) in addition to high sensitivity. Each concentration was measured in triplicate, with a relative standard deviation (RSD) below 1% for all concentrations. The limit of detection (LOD) and limit of quantitation (LOQ) were found to be 2.9 and 23.0 µg/mL, respectively, determined from signal-to-noise ratios of 3 and 10. Table 2 shows a comparison between the present work and other methods used for itopride determination; this comparison includes precision and LOD.
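The two linear fits described above (the log–log scan-rate analysis and the calibration line) can be sketched in a few lines of Python. The numeric arrays below are illustrative placeholders rather than the measured data, and LOD/LOQ are estimated here with the common 3σ/slope and 10σ/slope convention, which is a stand-in for the paper's direct signal-to-noise readings:

```python
import numpy as np

# --- scan-rate study: slope of log(ip) vs log(v) ---
scan_rate = np.array([40, 60, 80, 100, 120, 140, 160, 180])      # mV/s
ip = np.array([12.0, 15.9, 19.4, 22.7, 25.7, 28.6, 31.3, 34.0])  # uA (assumed)
slope, _ = np.polyfit(np.log10(scan_rate), np.log10(ip), 1)
print(f"log-log slope = {slope:.3f}")  # ~0.5 diffusion control, ~1.0 adsorption

# --- calibration curve for itopride ---
conc = np.array([0.04, 0.08, 0.12, 0.16, 0.20])   # mg/mL
peak = np.array([8.1, 15.8, 24.2, 31.9, 40.1])    # uA (assumed)
m, b = np.polyfit(conc, peak, 1)
r2 = np.corrcoef(conc, peak)[0, 1] ** 2
print(f"ip = {m:.1f}*C + {b:.2f}, R^2 = {r2:.4f}")

# --- LOD/LOQ from an assumed baseline noise level sigma ---
sigma = 0.19                                      # uA, assumed
print(f"LOD = {3 * sigma / m * 1000:.1f} ug/mL")
print(f"LOQ = {10 * sigma / m * 1000:.1f} ug/mL")
```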
Data in Table 2 indicate that CV analysis of itopride using the Gd-TNT electrode has an LOD and precision comparable to those of chromatographic and spectroscopic methods. Conclusions In the present work, TiO2 nanotubes were synthesized, doped with Gd, and fully characterized. A carbon paste electrode modified with Gd-doped TiO2 nanotubes showed higher sensitivity than the bare and the undoped TiO2 nanotube carbon paste electrodes. When the Gd-doped TiO2 nanotube electrode was applied to the cyclic voltammetry of itopride in a pharmaceutical formulation, it showed high performance compared with a commercially available electrode.
Complex Bosonic Many-Body Models: Overview of the Small Field Parabolic Flow This paper is a contribution to a program to see symmetry breaking in a weakly interacting many boson system on a three-dimensional lattice at low temperature. It provides an overview of the analysis, given in Balaban et al. (The small field parabolic flow for bosonic many-body models: part 1—main results and algebra, arXiv:1609.01745, 2016; The small field parabolic flow for bosonic many-body models: part 2—fluctuation integral and renormalization, arXiv:1609.01746, 2016), of the 'small field' approximation to the 'parabolic flow' which exhibits the formation of a 'Mexican hat' potential well. It is our long-term goal to rigorously demonstrate symmetry breaking in a gas of bosons hopping on a three-dimensional lattice. Technically, the goal is to show that the correlation functions decay at a nonintegrable rate when the chemical potential is sufficiently positive, the nonintegrability reflecting the presence of a long range Goldstone boson mediating the interaction between quasiparticles in the superfluid condensate. It is already known [19,20] that the correlation functions are exponentially decreasing when the chemical potential is sufficiently negative. See, for example, [22] and [30, §19] for an introduction to symmetry breaking in general, and [1,18,23,28] as general references to Bose-Einstein condensation. See [17,21,26,29] for other mathematically rigorous work on the subject. We start with a brief, formula free, summary of the program and its current state. Then we'll provide a more precise, but still simplified, discussion of the portion of the program that controls the small field parabolic flow. The program was initiated in [3,4], where we expressed the positive temperature partition function and thermodynamic correlation functions in a periodic box (a discrete three-dimensional torus) as 'temporal' ultraviolet limits of four-dimensional (coherent state) lattice functional integrals (see also [27]). By a lattice functional integral, we mean an integral with one (in this case complex) integration variable for each point of the lattice. By a 'temporal' ultraviolet limit, we mean a limit in which the lattice spacing in the inverse temperature direction (imaginary time direction) is sent to zero while the lattice spacing in the three spatial directions is held fixed. In [7], by a complete large field/small field renormalization group analysis, we expressed the temporal ultraviolet limit for the partition function, still in a periodic box, as a four-dimensional lattice functional integral with the lattice spacing in all four directions being of the order one, preparing the way for an infrared renormalization group analysis of the thermodynamic limit. This overview concerns the next stage of the program, which is contained in [13,14] and the supporting papers [9–12,15,16]. There we initiate the infrared analysis by tracking, in the small field region, the evolution of the effective interaction generated by the iteration of a renormalization group map that is tailored to a parabolic covariance: in each renormalization group step the spatial lattice directions expand by a factor L > 1, the inverse temperature direction expands by a factor L^2, and the running chemical potential grows by a factor of L^2, while the running coupling constant decreases by a factor of L^{-1}.
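Before turning to the consequences of this scaling, here is a minimal numerical sketch of the bookkeeping it implies. The values of L, v_0, and ε are illustrative assumptions (not taken from [13,14]), the quartic normalization V_n(z) = (v_n/2)|z|^4 − μ_n|z|^2 is an assumed model chosen so that the circle of minima has radius sqrt(μ_n/v_n), and the stopping rule anticipates the discussion below:

```python
import math

# Illustrative parameters; assumptions, not values from the papers
L, v0, eps = 2.0, 1e-4, 0.1
mu, v, n = v0, v0, 0            # mu_0 is of the order of the coupling v0

while mu < v0**eps:             # stop once mu_n reaches a small power of v0
    mu, v, n = L**2 * mu, v / L, n + 1          # the scaling stated above
    radius = math.sqrt(mu / v)  # circle of minima of (v/2)|z|^4 - mu|z|^2
    depth = mu**2 / (2 * v)     # depth of the well below V = 0
    print(f"n={n}: mu_n={mu:.3e}  v_n={v:.3e}  "
          f"radius={radius:.2f} (~L^(3n/2))  depth={depth:.2e} (~L^(5n) v0)")

# the number of steps is of the order of log(1/v0)
print("estimated steps:", (1 - eps) * math.log(1 / v0) / math.log(L**2))
```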
Consequently, the effective potential, initially close to a paraboloid, develops into a Mexican hat with a moderately large radius and a moderately deep circular well of minima. The analysis of [13,14] ends, after a finite number of steps (of the order of the logarithm of the coupling constant), once the chemical potential, which initially was of the order of the coupling constant, has grown to a small 'ε' power of the coupling constant. Then we can no longer base our analysis on expansions about zero field, because the renormalization group iterations have moved the effective model away from the trivial noninteracting fixed point. In the next stage of the construction, we plan to continue the parabolic evolution in the small field regime, but expanding around fields concentrated at the bottom of the (Mexican hat shaped) potential well rather than around zero (much as is done in the Bogoliubov Ansatz), and to track it through an additional finite number of steps until the running chemical potential is sufficiently larger than one. At that point, we will turn to a renormalization group map with a scaling tailored to an elliptic covariance that expands both the temporal (inverse temperature) and spatial lattice directions by the same factor L. It is expected that the elliptic evolution can be controlled through infinitely many steps, all the way to the symmetry broken fixed point. The system is superrenormalizable in the entire parabolic regime because the running coupling constant is geometrically decreasing. However, in the elliptic regime, the system is only strictly renormalizable. The final stage(s) of the program concern the control of the large field contributions in both the parabolic and elliptic regimes. The technical implementation of the parabolic renormalization group in [13,14] proceeds much as in [6,7], except that we are restricting our attention to the small field regime and
• we use 1+3-dimensional block spin averages, as in [2,24,25]. In [7], we had used decimation, which was suited to the effectively one-dimensional problem of evaluating the temporal ultraviolet limit.
• Otherwise, the stationary phase calculation that controls oscillations is similar, but technically more elaborate.
• The essential complication is that the critical fields and background fields are now solutions to (weakly) nonlinear systems of parabolic equations.
• The Stokes' argument that allows us to shift the multidimensional integration contour to the 'reals' and the evaluation of the fluctuation integrals are similar.
• However, there is an important new feature: the chemical potential has to be renormalized.
To analyze the output of the block spin convolution (a single renormalization group step), it is de rigueur for the small field/large field style of renormalization group implementations to introduce local small field conditions on the integrand and then decompose the integral into the sum over all partitions of the discrete torus into small and large field regions, on which the conditions are satisfied and violated, respectively. Small field contributions are to be controlled by powers of the coupling constant v_0 (a suitable norm of the two-body interaction) uniformly in the volume of the small field region. Large field contributions are to be controlled by a factor e^{-1/v_0^ε}, ε > 0, raised to the power of the volume of the large field region. Morally, in small field regions, perturbation expansions in the coupling constant converge and exhibit all physical phenomena.
Large field regions give multiplicative corrections that are smaller than any power of the coupling constant. So, in the leading terms, every point is small field. If the actions in our functional integrals were sums of positive terms (as in a Euclidean O(n) model), it would be routine to extract an exponentially small factor per point of a large field region. They are not: there are explicit purely imaginary terms. In [13,14], we analyze the parabolic flow of the leading term, in which all points are small field, as long as it is possible to expand around zero field. Nevertheless, we show (see [15]) that our actions do have positivity properties, and consequently there is at least one factor e^{-1/v_0^ε} whenever there is a large field region. A stronger bound, of one such factor per point of a large field region, is reasonable and would be the main ingredient for controlling the full parabolic renormalization group flow in this regime. We now formally introduce the main objects of discussion and enough machinery to allow technical (but simplified) statements of the main results of [13,14] and the methods used to establish them. One conclusion of our previous work in [7] is that the purely small field contribution to the partition function for a gas of bosons hopping on a three-dimensional discrete torus X = Z^3/(L_sp Z^3) (where L_sp, a power of L, is the spatial infrared regulator, which will ultimately be sent to infinity) takes the form of a (coherent state) lattice functional integral (2) over fields on the (1+3)-dimensional discrete torus X_0 = (Z/L_tp Z) × X. Here, L_tp ≈ 1/(kT), also a power of L, is the inverse temperature infrared regulator, which can ultimately be sent to infinity to get the temperature zero limit.
• ψ ∈ C^{X_0} is a complex valued field on X_0, ψ* is the complex conjugate field and, for each x ∈ X_0, the integration over ψ(x) is with respect to the standard Lebesgue measure on C.
• The small 'coupling constant' v_0 is an exponentially, tree length weighted L^1–L^∞ norm (see the discussion of norms at the end of this overview or [13, Definition 1.9]) of an effective interaction V_0 (see [13, Proposition D.1]). Here, ∂_ν, ν = 0, 1, 2, 3, is the forward difference operator in the x_ν direction.
• Let ψ_* be another arbitrary element of C^{X_0}. (ψ_* is not to be confused with the complex conjugate ψ* of ψ.) The action determining the partition function is the restriction to ψ_* = ψ* of the function A_0(ψ_*, ψ) written out below; ⟨f, g⟩ = Σ_{x∈X_0} f(x)g(x) is the natural real inner product on C^{X_0}.
• h is a nonnegative, second-order, elliptic (lattice) pseudodifferential operator acting on X; for example, a constant times minus the spatial discrete Laplacian.
• V_0(ψ_*, ψ) is a quartic monomial whose kernel V_0 is translation invariant.
• μ_0 is essentially the chemical potential.
• Let ψ_{*ν}, ψ_ν, ν = 0, 1, 2, 3, be the names of new arbitrary elements of C^{X_0}. The perturbative correction p_0(ψ_*, ψ, {ψ_{*ν}}_{ν=0}^3, {ψ_ν}_{ν=0}^3) to the principal contribution −A_0 is a power series in the ten variables ψ_*, ψ, {ψ_{*ν}, ψ_ν}_{ν=0}^3, with no ψ_*(x)ψ(y) terms, such that each nonzero term has as many factors with asterisks as factors without asterisks. That is, p_0 conserves particle number. It converges on a suitable polydisc, where '(*)' means 'either with * or without *'. See [13, Proposition D.1] for more details.
For convenience, set F_0(ψ_*, ψ) to be the exponential of −A_0 plus the perturbative correction p_0 (with the derivative fields evaluated at ∂_ν ψ_{(*)}). With this notation, the partition function is the integral (2). It is natural to study the partition function using a steepest descent or stationary phase analysis. The exponential e^{⟨ψ*, ∂_0 ψ⟩} is purely oscillatory because the quadratic form ⟨ψ*, ∂_0 ψ⟩ is pure imaginary.
Fortunately, our partition function Z has the essential feature that there is an analytic function A_0(ψ_*, ψ) on a neighborhood of the origin in C^{X_0} × C^{X_0} whose restriction to the real subspace is the 'small field' action. Our renormalization group analysis of the oscillating integral defining Z is based on the critical points of A_0(ψ_*, ψ) = ⟨ψ_*, (−∂_0 + h)ψ⟩ + V_0(ψ_*, ψ) − μ_0⟨ψ_*, ψ⟩ in C^{X_0} × C^{X_0}, which typically do not lie in the real subspace, and on a multidimensional Stokes' contour shifting construction that is only possible because p_0(ψ_*, ψ) is analytic. We now formally introduce the 'block spin' renormalization group transformations that are used in this paper. Let X_{−1} be the subgroup (L^2 Z/L_tp Z) × (L Z^3/L_sp Z^3) of X_0. Observe that the distance between points of X_{−1} on the inverse temperature axis is L^2 and on the spatial axes is L, and that |X_{−1}| = L^{−5}|X_0|. Also, let Q^{(0)} : C^{X_0} → C^{X_{−1}} be a linear operator that commutes with complex conjugation. We will make a specific choice of Q^{(0)} later. It will be a 'block spin averaging' operator with, for each y ∈ X_{−1}, (Q^{(0)}ψ)(y) being 'morally' the average value of ψ in the L^2 × L × L × L block centered on y. Insert into the integral of (2) a normalized Gaussian identity in a new field θ on X_{−1}, where ⟨f, g⟩_{−1} = L^5 Σ_{y∈X_{−1}} f(y)g(y) is the natural real inner product on C^{X_{−1}} and N^{(0)} is a normalization constant. Then exchange the order of the ψ and θ integrals. This gives a representation in which, by definition, the block spin transform of F_0(ψ_*, ψ) associated with Q^{(0)}, with external fields θ and θ_*, appears; here θ, θ_* are two arbitrary elements of C^{X_{−1}}. It can be awkward to compare functions defined on discrete tori with different lattice spacings. So, we scale X_{−1} down to the unit discrete torus X_0^{(1)}, via a map which is an isomorphism of Abelian groups. Abusing notation, we consciously use the symbol ψ(x) as the name of a field on the unit torus X_0^{(1)} even though it was used before as the name of a field on the unit torus X_0. By definition, the block spin renormalization group transform of F_0(ψ_*, ψ) is this block spin transform followed by the rescaling; it is associated with the original small field part of the partition function. Repeat the construction: let Q^{(1)} be a linear 'block averaging' operator that commutes with complex conjugation, introduce the unit discrete torus X_0^{(2)}, and, as before, integrate against the normalized Gaussian to obtain the block spin transform of F_1 associated with Q^{(1)} (similarly for the spatial difference operators), and then rescale to obtain the block spin renormalization group transform. Interchanging the order of integration, we keep repeating the construction to generate a sequence F_n(ψ_*, ψ), n ≥ 1, of functions defined on the spaces C^{X_0^{(n)}}. Balaban et al. [13,14] concern a sequence F_n^{(SF)}(ψ_*, ψ) of 'small field' approximations to the F_n's. We expect, and provide some supporting motivation for, but do not prove, that F_n^{(SF)} is indeed the dominant part of F_n. For the precise definition, see [13, §1.2 and, in particular, Definition 1.6]. For the supporting motivation see [15]. To make a specific choice for the, to this point arbitrary, sequence Q^{(0)}, ..., Q^{(n)}, ... of block averaging operators, let q(x) be a nonnegative, compactly supported, even function on Z × Z^3 and Q the associated convolution operator. The kernel q is built by convolution from the indicator function of the (discrete) rectangle with sides L^2 and L, and is normalized so that its sum over Z × Z^3 is one.
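As a concrete picture of the sharp-kernel version of this averaging, the following sketch implements a plain average over L^2 × L × L × L blocks of a field on a periodic (1+3)-dimensional lattice. The block alignment, the array sizes, and the omission of the smoothing of q are all simplifying assumptions:

```python
import numpy as np

def block_average(psi, L):
    """Sharp block-spin average on a (1+3)-dimensional periodic lattice.

    psi: complex array of shape (N0, N1, N2, N3), with N0 divisible by
    L**2 and N1, N2, N3 divisible by L (illustrative sizes).  Returns the
    averaged field on the coarser lattice, one value per L^2 x L x L x L block.
    """
    n0, n1, n2, n3 = psi.shape
    blocks = psi.reshape(n0 // L**2, L**2, n1 // L, L, n2 // L, L, n3 // L, L)
    return blocks.mean(axis=(1, 3, 5, 7))

# Toy check: a constant field averages to the same constant.  This is the
# property 'Q maps the constant function 1 to 1' used in the constant-field
# computation of Appendix A below.
L = 3
psi = np.full((2 * L**2, 2 * L, 2 * L, 2 * L), 0.5 + 0.25j)
out = block_average(psi, L)
assert out.shape == (2, 2, 2, 2) and np.allclose(out, 0.5 + 0.25j)
```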
In [13,14], the basic objects are the 'small field' block spin renormalization iterates F_n^{(SF)}(ψ_*, ψ), where at each step Q is chosen to be convolution with the fixed kernel q. (By abuse of notation, we use the same symbol Q for the convolution operator acting on all of the spaces C^X.) If we had defined Q by convolving just with the indicator function of the rectangle itself, properly normalized, then (Qψ)(y) would be the usual average of ψ(x) over the rectangular box in X_0^{(n)} centered at y with sides L^2 and L. We work with the smoothed averaging kernel rather than the sharp one for technical reasons: commutators [∂_ν, Q] are routinely generated, and they are small enough when Q is smooth enough. For the rest of this overview, we will pretend that q is just the indicator function of the rectangle, and we formulate our results as if this were the case. We will also pretend that the operator h on X appearing in the action A_0(ψ_*, ψ) is (minus) the lattice Laplacian. Full, technically complete, statements are in [13, §1.6]. (We are weakening some of the statements for pedagogical reasons; in particular, the sets of allowed μ_0's and n's are a bit larger than the sets specified here. An explicit formula for μ_* is given in [13, (1.19)].) Here,
• you can think of the radii κ_n and κ'_n as being roughly a power of L;
• φ_{*n}(ψ_*, ψ) and φ_n(ψ_*, ψ) are (nonlinear) maps from an open neighborhood of the origin to C^{X_n}, where X_n is the discrete torus, isomorphic to X_0, but scaled down to have lattice spacing L^{−2n} in the time direction and L^{−n} in the spatial directions; the map u ∈ X_n → x = (L^{2n}u_0, L^n u) ∈ X_0 is an isomorphism of Abelian groups. We say more about these maps in the last of this sequence of bullets. Given 'external fields' ψ_*, ψ, the functions φ_{*n}(ψ_*, ψ)(u), φ_n(ψ_*, ψ)(u) on X_n are referred to as the 'background fields' at scale n.
• ⟨f, g⟩_n = Σ_{u∈X_n} f(u)g(u), suitably weighted by the lattice spacing, and its analogue on C^{X_0^{(n)}} are the natural real inner products on C^{X_n} and C^{X_0^{(n)}}.
• The perturbative correction p_n(ψ_*, ψ, {ψ_{*ν}}, {ψ_ν}) is a power series in the ten variables ψ_*, ψ, {ψ_{*ν}, ψ_ν}, with no ψ_*(x)ψ(y) or constant terms, such that each nonzero term has as many factors with asterisks as factors without asterisks. It converges on the polydisc of radii κ_n, κ'_n. (It is necessary to measure the size of p_n by introducing an appropriate norm; see the last paragraphs of this overview.)
• Z_n is a normalization constant. (When we take logarithms and ultimately differentiate with respect to an external field to obtain correlation functions, it will disappear.)
• μ_n is the 'renormalized' chemical potential. It is close to L^{2n}μ_0. (We will describe the inductive construction of μ_n later on in this overview. The dependence of p_n on the derivatives of the fields arises because of the renormalization of the chemical potential.)
• For each pair (ψ_*, ψ) in the polydisc, the background fields are defined, and the maps φ_{*n}(ψ_*, ψ), φ_n(ψ_*, ψ) are holomorphic on that polydisc.
In practical terms, what have we achieved? If ψ = z is a constant field on X_0, then the graph of the dominant part of the initial effective potential over the complex plane z = x_1 + ix_2 is a surface of revolution around the x_3-axis with the circular well of absolute minima |z| = sqrt(μ_0/v_0). Our hypothesis on μ_0 implies that the radius and depth of the well are of order one and order v_0, respectively. After n renormalization group steps, the effective potential develops as follows.
The graph is again a surface of revolution with the circular well of absolute minima |z| = sqrt(μ_n/v_n), v_n = v_0/L^n, but now the radius and depth are of order L^{3n/2} and of order L^{5n}v_0, respectively; the well is developing. We stop the flow when the well becomes so wide and so deep that we can no longer construct background fields by expanding around ψ_* = ψ = 0. This happens as μ_n approaches order one. If the power series expansion of the perturbative correction p_n had a quadratic part Σ_{x,y∈X_0^{(n)}} K(x, y) ψ_*(x)ψ(y), the discussion of the evolving well in the last paragraph would be misleading, because the minimum of the total action A_n − p_n would not be close enough to the minimum of the dominant part A_n. The requirement that p_n must not contain quadratic terms is the renormalization condition for the chemical potential. (See Step 9 below.) Under the scaling map (3), the local quadratic monomials ψ_*(x)ψ(x) are relevant, and the local quartic monomials are irrelevant. The parabolic renormalization group flow drives the system away from the trivial (noninteracting) fixed point. To continue, we will have to construct background fields by expanding about configurations supported near the bottom of the developing well, analogously to the 'Bogoliubov Ansatz.' At present, we expect to continue the parabolic flow, but expanding about configurations supported near the bottom of the well, through a transition regime (which overlaps with the regime of [13,14]) until μ_n becomes large enough (but still of order one), and then to switch to a new 'elliptic' renormalization group flow for the push to the symmetry broken, superfluid fixed point. In Appendix A, below, we perform several model computations that contrast the parabolic nature of the early renormalization group steps with the elliptic nature of the late renormalization group steps. The next part of this overview is an outline, in nine steps, of the inductive construction that uses a steepest descent/stationary phase calculation to build the desired form for F_{n+1}(ψ_*, ψ) = B_{n+1}(S^{−1}ψ_*, S^{−1}ψ) from that of F_n(ψ_*, ψ), n ≥ 0, where, by induction, F_n is expected to have the form described above. We emphasize that Steps 1 and 6, which control the difference between F_{n+1}(ψ_*, ψ) and its dominant 'small field' part F_{n+1}^{(SF)}(ψ_*, ψ), have not been proven, though we do supply some motivation in [15]. Step 2 (Holomorphic form representation). We wish to analyze the integral in (6) by a steepest descent/stationary phase argument. Recall that a critical point of a function f(z) of one complex variable z = x + iy that is not analytic in z is a point where both partial derivatives ∂f/∂x and ∂f/∂y, or equivalently both ∂f/∂z = (1/2)(∂/∂x − i ∂/∂y)f and ∂f/∂z̄ = (1/2)(∂/∂x + i ∂/∂y)f, vanish. We prefer the latter formulation. So we rewrite the integral in (6) in a form that allows us to treat ψ and its complex conjugate as independent fields. For each fixed (θ_*, θ), the 'action' is a holomorphic function of (ψ_*, ψ) on S_n × S_n. By design, the dominant part of B_{n+1}(θ_*, θ) in (6) is expressed as (a constant times) the integral of a holomorphic form of degree 2|X_0^{(n)}|, built from the differentials dψ_*(x) and dψ(x), x ∈ X_0^{(n)}, over the real subspace in S_n × S_n given by ψ_* = ψ*. We shall see below that, typically, the critical point does not lie in the real subspace and so is not in the domain of integration. This representation permits us to use Stokes' theorem to shift the contour of integration to a non-real contour that does contain the critical point of (the principal terms of) the action.
The shift will be implemented in Step 6. Step 3 (Critical Points). Our next task is to find critical points. In (7), above, we wrote the exponent A_n(θ_*, θ, ψ_*, ψ) as the sum of a very explicit, main part −A_{n,eff} and a not very explicit, smaller part p_n. We just find the critical points of A_{n,eff} rather than of the full A_n. Indeed, there is a unique pair of holomorphic maps ψ_{*cr}, ψ_cr into S_n such that the gradient (∇_{ψ_*}, ∇_ψ) of A_{n,eff}(θ_*, θ, ψ_*, ψ) vanishes when ψ_* = ψ_{*cr}(θ_*, θ), ψ = ψ_cr(θ_*, θ). This pair of 'critical field maps' can be constructed by solving the critical point equations, a nonlinear parabolic system of (discrete) partial difference equations, using the natural contraction mapping argument to perturb off of the linearized equations. The analysis of the linearized equations is based on a careful examination of some linear operators given in [10]. Beware that, in general, ψ_{*cr}(θ_*, θ) ≠ ψ_cr(θ_*, θ)*. To start the stationary phase calculation, we factor the integral of the holomorphic form (8) over the real subspace {(ψ_*, ψ) ∈ S_n × S_n : ψ_* = ψ*} as the product of e^{A_n(θ_*, θ, ψ_{*cr}(θ_*,θ), ψ_cr(θ_*,θ))} and the 'fluctuation integral' of e^{A_n(θ_*, θ, ψ_*, ψ) − A_n(θ_*, θ, ψ_{*cr}(θ_*,θ), ψ_cr(θ_*,θ))}, taken over the real subspace of S_n × S_n against the form built from the differentials dψ_*(x), dψ(x), x ∈ X_0^{(n)}. Step 4 (The Value of the Action at the Critical Point). We would expect that the biggest contribution to the integral would come from simply evaluating the exponent at the critical point, and that the biggest contribution to the value of the exponent A_n at the critical point would come from evaluating −A_{n,eff} at the critical point. This is controlled by [13, Proposition 3.4]. Step 7 (The Logarithm of the Fluctuation Integral). In [5], we developed a simple variant of the polymer expansion that can be directly applied to the integral in (11) to obtain its logarithm. Step 8 (Rescaling). To this point, we have determined that the small field part of B_{n+1}(θ_*, θ) is a constant times the exponential of the sum of
• the contribution which comes from simply evaluating A_n at the critical point; in Step 4, we saw that this was −Ǎ_{n+1}(θ_*, θ, φ̌_{*n+1}(θ_*, θ), φ̌_{n+1}(θ_*, θ)) + p_n(ψ_{*cr}, ψ_cr, ∇ψ_{*cr}, ∇ψ_cr);
• and an analytic function that came, in Step 7, from the fluctuation integral.
We are now ready to scale to get the small field part of F_{n+1}. If the kernel V_n of V_n were exactly the V_n^{(u)} of (4), then the kernel of V_{n+1} would be exactly V_{n+1}^{(u)}. Renormalization is going to tweak, for example, the value of the chemical potential. As a result, A'_{n+1} is not quite A_{n+1}, and φ'_{(*)n+1} is not quite φ_{(*)n+1}. That's the reason for putting the primes on. If the quadratic part of p_{n+1} were left in p_{n+1}, it would, by the third line of (12), grow by roughly a factor of L^2 in each future renormalization group step. So we need to move (at least the local part of) this term out of p_{n+1} and into A_{n+1}. By the discrete fundamental theorem of calculus, the quadratic form of any translation invariant kernel can be decomposed into a constant K ∈ C times ⟨ψ_*, ψ⟩_0 plus gradient terms built from linear operators K_ν, ν = 0, 1, 2, 3, on C^{X_0^{(n+1)}}; see [14, Corollary B.2]. By reflection invariance, K is real. So we would like to move K⟨ψ_*, ψ⟩_0 out of p_{n+1} into A_{n+1}. There are two factors that complicate (but not seriously) this move.
• The chemical potential term in A'_{n+1}(ψ_*, ψ, φ'_{*n+1}(ψ_*, ψ), φ'_{n+1}(ψ_*, ψ)) is expressed in terms of φ'_{(*)n+1}(ψ_*, ψ) rather than directly in terms of ψ_{(*)}.
• The prime fields φ'_{*n+1}(ψ_*, ψ), φ'_{n+1}(ψ_*, ψ) are background fields with chemical potential L^2 μ_n, not with the chemical potential μ_{n+1} that we are going to end up with (and which we do not yet know).
To deal with the first complication, we use that φ'_{(*)n+1}(ψ_*, ψ) = B_{(*)}ψ_{(*)} plus terms of degree at least three in (ψ_*, ψ) (see [16, Proposition 2.1.a]). Because the linear operators B_{(*)} have left inverses (see [10, Lemma 5.7] and the beginning of the proof of [14, Lemma 6.3]), one can show that K⟨ψ_*, ψ⟩_0 = K⟨φ'_{*n+1}(ψ_*, ψ), φ'_{n+1}(ψ_*, ψ)⟩_{n+1} plus a power series in ψ_*, ψ, ∇ψ_*, ∇ψ that converges on the desired domain of analyticity and that does not contain any relevant contributions. See [14, Lemma 6.3]. Thus the relevant quadratic term can be rewritten in terms of the prime background fields. But we are still not done; we still have the second complication to deal with. The prime fields φ'_{*n+1}(ψ_*, ψ), φ'_{n+1}(ψ_*, ψ) are background fields for chemical potential L^2 μ_n, and not for chemical potential L^2 μ_n + K. That is, the prime fields are critical for (f_*, f) → A'_{n+1}(ψ_*, ψ, f_*, f) and not for (f_*, f) → A_{n+1}(ψ_*, ψ, f_*, f), as they must be for the primed and unprimed actions to agree. The way out of this is, of course, a (straightforward) fixed point argument that yields a self-consistent μ_{n+1} ≈ L^2 μ_n. See [14, Lemmas 6.2 and 6.6]. So far we have skirted the issue of bounding the perturbative correction p_n in our main result. To measure the size of p_n, we introduce a norm whose finiteness implies that all the kernels in its power series representation are small with v_0 and decay exponentially as their arguments separate in X_0^{(n)}. For pedagogical simplicity, pretend that p_n is a function of only two fields, ψ and one derivative field ψ_ν. It then has a power series expansion with kernels w(x_1, x_2, x_3, x_4), and the norm weights each kernel by e^{m τ(x_1, x_2, x_3, x_4)}, where τ(x_1, x_2, x_3, x_4) is the minimal length of a tree graph in X_0 that has x_1, x_2, x_3, x_4 among its vertices, and m ≥ 0 is a fixed decay rate. (The small 'coupling constant' is v_0 = 2‖V_0‖_{2m}.) The norm ‖w‖_m of a kernel w with an arbitrary number of arguments is defined in much the same way. For details see [13, §1.4 and Definition A.3]. Ideally, ‖p_n‖^{(n)} would be bounded (and in fact small) uniformly in n. Unfortunately, such a bound is too naive to achieve the upper limit on n stated in our main result. The reason is that, while the coefficient of an irrelevant monomial decreases as the scale n increases, the maximum allowed size of fields in the domain S_n also increases, so the monomial as a whole can be relatively large. So we have chosen
• to move all quartic (ψ_*ψ)^2 monomials out of p_n into A_n, i.e., to also renormalize the interaction V_n, and
• to split p_n into two parts:
• one, called E_n(ψ_*, ψ), is an analytic function whose size is measured in terms of a norm like ‖·‖^{(n)} and is small (and decreasing with n), and
• the other, called R_n, is a polynomial of fixed degree, the sizes of whose coefficient kernels are measured in terms of a norm like ‖·‖_m.
The details are stated in our main result, [13, Theorem 1.17]. The background fields obey the background field equations (A.2). A.1. Constant Field Background Fields To start getting a feel for the background field equations (A.2), we consider the case that ψ_* and ψ are constant fields with ψ_* = ψ*. We'll look for solutions φ_{(*)} which are also constant fields with φ_* = φ*.
Since both Q_n and Q_n^* map the constant function 1 to the constant function 1, the constant field background fields obey a single scalar equation of the form 'real number times φ equals real number times ψ', so the phase of φ and ψ will be the same (modulo π). It therefore suffices to consider the case that ψ and φ are both real and obey the resulting cubic equation. There is always exactly one solution when μ_n ≤ 1, but the solution can be nonunique when μ_n > 1. For example, when μ_n > 1 and ψ = 0, the solutions are φ = 0 and φ = ±sqrt((μ_n − 1)/v_n). A.2. The Background Field in the Parabolic Regime Imagine that we wish to solve the background field equations (A.2) for φ_{(*)} as analytic functions of ψ_{(*)}, in the parabolic regime, when μ_n is small, so that the minimum of the effective potential is still near the origin; see (5). We are interested in small ψ_{(*)}, so the O(ψ_{(*)}^3) corrections are unimportant. We here see the parabolic (discrete) differential operators d_n ∂_0^{(*)} + Δ. A.3. The Background Field in the Elliptic Regime Imagine that we again wish to solve the background field equations (A.2), but this time in the elliptic regime, when μ_n is large, v_n is small, and the effective potential has a deep well whose minima form a circle in the complex plane of radius r_n = sqrt(μ_n/v_n). We are interested in ψ_{(*)} and φ_{(*)} near the minimum of the effective potential, that is, with |ψ_{(*)}|, |φ_{(*)}| ≈ r_n. We write ψ = r_n e^{R+iΘ}, ψ_* = r_n e^{R−iΘ}, φ = r_n e^{X+iH}, φ_* = r_n e^{X−iH} (A.4) and look for solutions when R, Θ are small. Substitute into (A.2) and divide by r_n. Expand the exponentials, keeping only terms to first order in R, Θ, X, H, to get (A.5). Now simplify by adding together the two equations of (A.5) and dividing by 2, and then by subtracting the second equation of (A.5) from the first and dividing by 2i. Pretend that ∂_0 is a continuum partial derivative rather than a discrete forward derivative. The Q_n^* Q_n term provides a mass which makes the resulting operator boundedly invertible. But the presence of this mass is a consequence of our having rescaled the original unit lattice down to the very fine lattice Y_n. To invert while ignoring the Q_n^* Q_n term, we have to divide, essentially, by the operator in the curly brackets.
• In the parabolic regime, μ_n is small and d_n is essentially one, so that the operator in the curly brackets is approximately ∂_0^* ∂_0 + (−Δ)^2, which is parabolic.
• In the elliptic regime, μ_n and d_n are both very large, with μ_n/d_n^2 > 0 being essentially independent of n. So the operator in the curly brackets is approximately ∂_0^* ∂_0 + 2(μ_n/d_n^2)(−Δ), which is elliptic.
A.4. The Quadratic Approximation to the Action For the remaining model computations, we study the quadratic approximation to the action (A.1). A.4.b. Expanding Around the Bottom of the Effective Potential For all μ_n ≠ 0, it is appropriate to expand the action about the bottom of the effective potential, rather than about the origin, that is, in the radial and tangential fields rather than in powers of ψ_{(*)}. So we rewrite the action (A.1) and then substitute the representations (A.4) of ψ_{(*)} and φ_{(*)} in terms of radial and tangential fields. Note that when R = Θ = X = H = 0, the field magnitudes are |ψ_{(*)}| = |φ_{(*)}| = r_n, and ψ_{(*)} and φ_{(*)} are at the bottom of the effective potential. Still pretending that ∂_0 is a continuous derivative, and using
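Returning to the constant-field computation of A.1: assuming the scalar equation reduces to the cubic form (1 − μ_n)φ + v_n φ^3 = ψ (a form consistent with the solutions quoted there, with n-dependent constants suppressed), the solution count is easy to check numerically:

```python
import numpy as np

def constant_background_fields(mu_n, v_n, psi):
    """Real solutions phi of (1 - mu_n)*phi + v_n*phi**3 = psi (assumed form)."""
    roots = np.roots([v_n, 0.0, 1.0 - mu_n, -psi])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

print(constant_background_fields(0.5, 0.1, 0.2))  # mu_n <= 1: one solution
print(constant_background_fields(2.0, 0.1, 0.0))  # mu_n > 1, psi = 0: three
# the nonzero pair is +-sqrt((mu_n - 1)/v_n) = +-sqrt(10) ~ +-3.162
```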
The Thermoelectric Properties of Spongy PEDOT Films and 3D-Nanonetworks by Electropolymerization Recently, polymers have attracted great attention as thermoelectric materials because of their excellent mechanical properties, and specifically their cost-effectiveness and scalability at the industrial level. In this study, the electropolymerization conditions (applied potential and deposition time) of PEDOT films were investigated to improve their thermoelectric properties. The morphology and Raman spectra of the PEDOT films were analyzed as a function of applied potential and deposition time. The best thermoelectric properties were found in films grown at 1.3 V for 10 min, with an electrical conductivity of 158 ± 8 S/cm, a Seebeck coefficient of 33 ± 1 µV/K, and a power factor of 17 ± 2 µW/m·K². This power factor value is three times higher than the value reported in the literature for electropolymerized PEDOT films in acetonitrile using lithium perchlorate as a counter-ion. The thermal conductivity was found to be (1.3 ± 0.3) × 10−1 W/m·K. The highest figure of merit obtained at room temperature was (3.9 ± 1.0) × 10−2 using lithium perchlorate as a counter-ion. In addition, three-dimensional (3D) PEDOT nanonetworks were electropolymerized inside 3D anodic aluminum oxide (3D AAO), obtaining lower values of their thermoelectric properties. PEDOT:PSS (poly(3,4-ethylenedioxythiophene)-polystyrene sulfonate) is the most studied form of PEDOT in thermoelectric applications [13]. This is because, compared with other organic materials, it offers high thermoelectric performance. Normally, these films are obtained using a commercial suspension of PEDOT:PSS, which is then deposited by drop coating or spin coating. To improve the thermoelectric properties of the films, different solvents have been used, including DMSO (dimethyl sulfoxide) [4], ethylene glycol [4], formic acid [5], sulfuric acid [8], deionized water, isopropanol, and acetone [6]. In addition, the thermoelectric properties have also been improved by different post-treatments, such as forming a hybrid material with tellurium [8], or by measuring at higher temperatures [6,7], as shown in Table 1. The power factor values obtained in these studies were around 50 [6,8], 80 [5], or 95 [7] µW/m·K². The highest power factor, 470 µW/m·K², with a zT of 0.42, was claimed by Kim et al. using ethylene glycol [4] as a solvent.

Table 1. Power factor of PEDOT:PSS films obtained with different solvents or treatments, as reported in the literature.

Solvent or treatment    Power factor (µW/m·K²)    Reference
Acetone                 50                         [6]
Sulfuric acid           50                         [8]
Formic acid             80                         [5]
At 150 °C               95                         [7]
Ethylene glycol         470                        [4]

Electropolymerization is an excellent growth method thanks to its high control over the structural and morphological properties of the materials. In the literature, several studies can be found on the fabrication of PEDOT films by electropolymerization [14][15][16][17][18]. However, only a few studies have been published covering both the growth of PEDOT films by electropolymerization and their transport or thermoelectric properties. In 2014, Castagnola et al. [10] reported the effect of different electrochemical routes on the morphology and electrical conductivity of films grown using an aqueous solution of EDOT (ethylenedioxythiophene) and NaPSS. The electrical conductivity increased when the films were grown in the potentiodynamic mode compared with the potentiostatic mode, and the lowest value was found for the galvanostatic mode.
Culebras et al. [11] also published a paper in 2014 on the effect of counter-ions (ClO4−, PF6−, and BTFMSI (bis(trifluoromethylsulfonyl)imide)) on the thermoelectric properties of the polymer. The solvent used was acetonitrile, and the electropolymerization conditions were the same (galvanostatic mode at −3 mA vs. Ag/AgCl for 2 min), independent of the counter-ion used. The highest thermoelectric figure of merit, 0.22 at room temperature (RT), with a power factor of 147 µW/m·K² and a thermal conductivity of 0.19 ± 0.02 W/m·K, was obtained when BTFMSI, which is very expensive, was used as the counter-ion and the film was reduced with hydrazine after growth. A comparison of thermoelectric properties, on the basis of the solvents (acetonitrile and a mixture of water and methanol) used for electropolymerization and the film thicknesses, was published in 2019 [12]. The best power factor, of 41.3 µW/m·K², was measured when water and methanol mixtures were used as the electrolyte and the galvanostatic mode was also applied. Because different electrical conductivities were obtained depending on the electropolymerization potential [10], it is essential to study the effects of the applied potential on the thermoelectric properties. Furthermore, as proven in inorganic materials such as bismuth telluride [19,20], the thermoelectric properties can be enhanced by reducing thermal conductivity through nanostructuration. In the literature, one can find nanowires embedded in PEDOT films grown by chemical oxidation [21][22][23], and one-dimensional (1D) and three-dimensional (3D) [24] PEDOT nanowires obtained by electropolymerization inside lithographic substrates [25], mesoporous silica [26], polycarbonate membranes [27], or anodic aluminum oxide templates [28]. With regard to the thermoelectric properties of these nanostructures, the highest power factor was 92 µW/m·K², for PEDOT nanowires electropolymerized inside a lithographic substrate. In this work, PEDOT films were obtained by electropolymerization in the potentiostatic mode, changing the deposition conditions of applied potential and electropolymerization time. The influence of the growth conditions on the chemical structure and morphological properties of the films was studied. Moreover, the thermoelectric properties of the PEDOT films grown under these different conditions of applied potential and electrodeposition time were analyzed. Three-dimensional (3D) PEDOT nanostructures were also electropolymerized inside 3D anodic aluminum oxide (3D AAO), and their thermoelectric properties were evaluated. In our work, the influence of the potentiostatic-mode deposition conditions on the thermoelectric properties of electropolymerized PEDOT films was studied for the first time. Therefore, this work opens a new way to improve the thermoelectric properties of PEDOT films by using a growth technique that is cost-effective and industrially scalable and that can be applied to the development of flexible thermoelectric devices. Fabrication Method of Electropolymerized PEDOT Films The fabrication of the films was performed by electropolymerization using a solution in acetonitrile (99.8%, from Sigma Aldrich, Darmstadt, Germany) of 0.01 M EDOT (97%, from Sigma Aldrich) and 0.1 M LiClO4 (99.99%, from Sigma Aldrich), following what has already been reported [11,12]. Lithium perchlorate was used as the counter-ion for the electropolymerization of EDOT into PEDOT.
Electropolymerization was carried out using a conventional three-electrode electrochemical cell at RT, controlled by an Autolab PGSTAT101 bi-potentiostat. A Pt mesh, Ag/AgCl, and 150 nm Au/5 nm Cr/Si were used as the counter, reference, and working electrodes, respectively. In addition, to obtain 3D PEDOT nanonetworks, 3D AAO templates were prepared by applying a pulsed current-density method. In this process, a periodic current-density profile was applied to generate a modulated layered structure. The combination of this approach with the acidic nature of the electrolyte led to a 3D architecture presenting both longitudinal (nanopores) and transversal connections. A detailed description of this method, including Al (Advent Research Materials, 99.999%) substrate cleaning and polishing, is available in [29,30]. After the pulsed profile, an added layer of standard nanopores was grown, conferring additional mechanical support and allowing for the posterior separation of the template from the aluminum substrate and its preparation for electropolymerization. The anodization was performed with a 1.1 M sulfuric acid electrolyte (25% v/v ethanol) at −1 °C. After anodization, the AAO template was cleaned with deionized water and then dried. Kapton tape was used to clean the template surface, exfoliating the overetched upper layers. The aluminum substrate was chemically etched with a CuCl2/HCl aqueous solution, after which the AAO barrier layer was removed with an aqueous phosphoric acid solution at 30 °C. Before electropolymerization, 150 nm Au/5 nm Cr layers were deposited on the AAO side that had the 3D AAO structure, ensuring electrical contact but also limiting the polymerization to the 3D zone of the template. The electropolymerization inside the 3D AAO was carried out under similar conditions to those used for the PEDOT films. Morphological Characterization and Chemical Structure of PEDOT Films The morphology of the films and nanostructures was analyzed using field emission scanning electron microscopy (FE-SEM, FEI VERIOS 460, Thermo Fisher Scientific, Waltham, MA, USA) with a 2 kV accelerating voltage. The thickness of the films was measured using a stylus profiler system (Veeco® Dektak, Bruker, Billerica, MA, USA). Raman spectroscopy was performed to analyze the vibrational modes of PEDOT films prepared by varying the deposition conditions: applied potential and electropolymerization time. These measurements were performed using a high-resolution Raman spectrometer (Horiba Jobin Yvon, Kyoto, Japan) with a 532 nm Nd:YAG laser (8.5 mW) from 200 to 1700 cm−1, in air at RT. Thermoelectric Characterization of PEDOT Films To measure the thermoelectric properties of the PEDOT films, it was necessary to detach the films from the conductive substrate, which in our case was a Si substrate coated with gold and chromium layers. The detachment process consisted of immersing the films in ethanol to separate them from the conductive substrate; the film was then transferred to a glass substrate. The electrical conductivity was measured using an Ecopia Hall measurement system (HMS-5500, Toronto, ON, Canada) at RT. The Seebeck coefficient was measured using a lab-made system at RT by applying different temperature gradients between 1 °C and 5 °C. The slope of the generated Seebeck voltage versus the temperature gradient gave the Seebeck coefficient. Because of the measurement systems used, both the electrical conductivity and the Seebeck coefficient were obtained in-plane.
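A minimal sketch of this Seebeck extraction, with S taken as the slope of a linear fit of the thermovoltage against the applied temperature gradient; the voltage readings are illustrative placeholders, not measured values:

```python
import numpy as np

dT = np.array([1.0, 2.0, 3.0, 4.0, 5.0])         # K, applied gradients
dV = np.array([34.0, 66.5, 99.8, 131.9, 165.2])  # uV, assumed readings
S, offset = np.polyfit(dT, dV, 1)                # slope = Seebeck coefficient
print(f"S = {S:.1f} uV/K (positive sign indicates p-type behavior)")
```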
The errors of the electrical conductivity and the Seebeck coefficient were 5% and 10%, respectively. The thermoelectric power factor was calculated using the electrical conductivity and the Seebeck coefficient in the in-plane direction. Finally, the thermal conductivity was measured by the photoacoustic method [2,19,[31][32][33][34][35]. The photoacoustic (PA) technique was calibrated, as explained in the supporting information of some of the mentioned works. This method involves the periodic illumination of the sample surface, once it has been coated with an 80 nm titanium layer, with a modulated laser (in our case, a fiber-coupled laser from Alphalas with a wavelength of 980 nm and a maximum intensity of 260 mW). The absorption of the light produces periodic heating in the sample, creating acoustic waves due to the heating of the surrounding air. These acoustic waves were detected by a microphone (40 BL 1/4 CCP pressure type, with a 26 CB 1/4 preamplifier, both from G.R.A.S. Sound & Vibration). The layer's thermal diffusivity was calculated using a multilayer model developed by Hu et al. [36], which uses the phase shift between the incident light and the recorded sound. A reference sample consisting of a quartz substrate was also used. The thermal conductivity, k, can then be calculated using the equation k = α·ρ·Cp, where α is the thermal diffusivity, ρ is the theoretical density, and Cp is the specific heat of the layer of interest. In this case, the density was 1.47 g/cm³ [37], and the specific heat was 0.95 J/g·K [38]. The error associated with the thermal conductivity obtained by the photoacoustic technique was approximately 20%. Fabrication of PEDOT Films Cyclic voltammetry (see Figure 1) was performed from the open-circuit potential (OCP) of −0.25 V toward the oxidation state (1.8 V), then to a reduction state (−1.3 V), and finally back to the OCP vs. Ag/AgCl, with a scan rate of 0.01 V/s, to determine the onset oxidation potential and the appropriate potential range for the electropolymerization of PEDOT on the gold substrate to fabricate PEDOT films. The solution used was 0.01 M EDOT and 0.1 M LiClO4, the supporting electrolyte, in acetonitrile. The onset potential was taken at the intersection of the tangents drawn at the baseline current and the oxidation current slope in the cyclic voltammetry. In our case, the onset potential was observed at 1.15 V. Another important feature was the crossover between the forward and reverse scans. This is called the "nucleation loop" and is attributed to the initial stage of nucleation processes for conductive polymer films. In our case, this peak was found at ~1.3 V. The oxidation region went from 1.15 V to 1.4 V.
In this region, the EDOT monomer was oxidized, becoming a polymer through a diffusion process. The EDOT monomers were oxidized to radical cations, which then dimerized and deprotonated. After this step, the dimer was oxidized, and oligomeric radical cation species formed. These oligomers bound with other EDOT•+ radicals, forming the PEDOT polymer [12,14]. The oxidation of EDOT is an irreversible process up to 2 V [39]; therefore, in this study, the cyclic voltammetry was performed up to 1.8 V. In addition, a reduction peak was observed at −0.5 V (see Figure 1), showing that the redox process of the electropolymerized PEDOT was reversible. The selection of the potential at which to perform the electropolymerization is important in the growth of PEDOT films: it can affect the structural and morphological properties of the films and, consequently, their thermoelectric properties. According to the cyclic voltammetry, the potential region of interest ranged from 1.15 V vs. Ag/AgCl to 1.5 V vs. Ag/AgCl. Therefore, potentials of 1.3 V vs. Ag/AgCl and 1.4 V vs. Ag/AgCl, corresponding to the first stage of the nucleation processes and the final stage of the oxidation region, respectively, were selected as the electropolymerization potentials to grow and fine-tune the structural, morphological, and thermoelectric properties of the PEDOT films. In addition, three electropolymerization times (5, 10, and 15 min) were also studied to determine the influence of the film thicknesses on the thermoelectric properties. Additionally, when applying lower potentials, such as 1.2 V vs. Ag/AgCl, no film was obtained, and for higher potentials, such as 1.5 V vs. Ag/AgCl, the transport properties were too low to be considered of interest. Morphological Characterization and Chemical Structure of PEDOT Films The influence of the oxidation potential and the polymerization time on the morphological properties of the films was analyzed using FE-SEM. Figure 2 shows the top-view FE-SEM images for the different electropolymerization conditions; a top view at higher magnification is inset in all FE-SEM images. A comparison of the PEDOT films obtained with the two applied potentials (1.3 V and 1.4 V) shows that their morphologies are different. The films grown at 1.3 V present a cauliflower-like structure, and the size of these cauliflowers increases with deposition time: when the electropolymerization time is increased from 5 min to 15 min, the cauliflower size increases from 1 µm to 2 µm.
The morphologies observed in the films deposited at 1.4 V for 5 min and 15 min are similar, with the respective feature sizes increasing as the deposition time increases. In this case, the morphology resembles a mesh of small cauliflowers, whereas the films grown at 1.4 V for 10 min have a slightly different, network-like morphology. This morphology is similar to that observed in previous works [11,12] where PEDOT films were grown by electropolymerization. The film thickness is an important parameter for the electrical conductivity and was measured using a profilometer to obtain exact values. The thicknesses of the films deposited at 1.3 V were found to be 4.1, 6.0, and 7.3 µm for electropolymerization times of 5, 10, and 15 min, respectively. For the films electropolymerized at 1.4 V, the thicknesses were 4.3, 10.4, and 23.2 µm for deposition times of 5, 10, and 15 min, respectively. As expected, the films became thicker as the electropolymerization time increased. The vibrational modes of the PEDOT films were studied using Raman spectroscopy. Figure 3 shows the Raman spectra of the films grown under applied potentials of 1.3 V and 1.4 V and different electropolymerization times of 5, 10, and 15 min. As reported in [40], all the peaks in all the films can be identified with the vibrational modes of PEDOT. An assignment of the active Raman bands is shown in Figure 3, according to [38]. The Raman modes found at 440, 579, 855, and 989 cm−1 were due to oxyethylene ring deformation. Symmetric C-S-C deformation was found at 706 cm−1, C-O-C deformation was observed at 1129 cm−1, the peak at 1262 cm−1 was identified as Cα-Cα (inter-ring) deformation, Cβ-Cβ stretching was located at 1367 cm−1, symmetric Cα=Cβ(-O) stretching was found at 1445 cm−1, and asymmetric stretching of C=C appeared at 1505 and 1571 cm−1. To compare the different spongy PEDOT films grown in this study, the Raman spectra were normalized to the most intense band, the one corresponding to the symmetric Cα=Cβ(-O) stretching at 1445 cm−1. The position, intensity, and full width at half maximum (FWHM) of the different Raman vibration modes of the PEDOT films were determined by fitting Lorentzian functions, and they are shown in Table 2. As shown in Figure 3 and Table 2, similar Raman spectra were obtained for all the PEDOT films, regardless of the electropolymerization potential and growth time, and all the Raman bands associated with PEDOT were identified. However, changes in position, FWHM, and relative intensity were identified depending on the electropolymerization conditions. In general, with respect to the position of the bands, a blue shift is observed for most Raman modes as the electropolymerization time increases (see Figure 3 and Table 2), independent of the applied potential. This behavior may be associated with changes in the chemical bond lengths of the molecules [41] due to structural changes induced by the growth conditions. The greatest Raman shift toward lower wavenumbers is observed in the symmetric Cα=Cβ(-O) stretching band of the film prepared at 1.4 V for 5 min. According to Mengistie et al. [5], the red shift of this vibrational mode could indicate that the chain in the resonant structure of PEDOT changes from a benzoid to a quinoid structure [42]. This change in the PEDOT structure increases the electrical conductivity, but in our case, the change into the quinoid form was not enough to modify the whole polymer structure (the Raman shift is very small).
Therefore, the electrical conductivity should not be affected by this structural change. Concerning the FWHM of the Raman bands, no clear trend was noted; the variations may relate to modifications of the structures and to structural defects generated during the electropolymerization. The strongest changes in the Raman spectra of the PEDOT films appeared in the relative intensity of the Raman bands between 1300 and 1600 cm⁻¹ (Figure 3). The double bonds of PEDOT are found in this region, and for thicker films (structures prepared at longer electropolymerization times), a higher intensity of these Raman modes was observed. This is because the neutral PEDOT segments are more Raman-active under the green laser than the doped segments [11].

Thermoelectric Characterization of PEDOT Films

The figure of merit of a thermoelectric material depends on the Seebeck coefficient (S), the electrical conductivity (σ), the thermal conductivity (κ), and the absolute temperature (T), as shown in Equation (1):

zT = S²σT/κ. (1)

The power factor is given by the square of the Seebeck coefficient times the electrical conductivity, PF = S²σ. Figure 4 depicts the transport properties (S and σ) of the electropolymerized PEDOT films at the two applied potentials and the different deposition times. For the films grown at 1.3 V, the electrical conductivity increased as the electropolymerization time increased (see Figure 4A); the values obtained were 109 ± 6, 158 ± 8, and 174 ± 9 S/cm for 5, 10, and 15 min, respectively. For the films grown at 1.4 V, the conductivity was about half of that obtained at 1.3 V and decreased further with deposition time; the values were 57 ± 3, 49 ± 3, and 28 ± 1 S/cm for 5, 10, and 15 min, respectively. The difference in electrical conductivity between the films obtained at 1.3 V and at 1.4 V can be explained by the red shift observed in the Raman spectrum. As explained above, the red shift of the symmetric Cα=Cβ(-O) stretching Raman band indicates a change in the PEDOT structure from a benzoid to a quinoid structure [42] (see Figure 5). The highest electrical conductivity was measured on the PEDOT film electropolymerized at 1.3 V for 15 min, where the benzoid structure was maximized. For comparison, the electrical conductivity reported in the literature for films grown in acetonitrile with lithium perchlorate is 200 S/cm [11,12], and that of films electropolymerized using Na:PSS in water is 150 S/cm [10]. The maximum electrical conductivity obtained in this study is therefore of the same order of magnitude as the literature values.
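For concreteness, the following sketch evaluates Equation (1) and the power factor using the values reported in this work for the film grown at 1.3 V for 10 min (σ = 158 S/cm and S = 33 µV/K, given below, and κ = 0.13 W/m·K, reported later in the text); the room temperature T = 300 K is an assumed value, not one quoted by the authors.

```python
# Minimal sketch of Equation (1) and the power factor, using the values
# reported in the text for the film grown at 1.3 V for 10 min.
# T = 300 K is an assumed room temperature; it is not stated explicitly.

def power_factor(seebeck_v_per_k: float, sigma_s_per_m: float) -> float:
    """PF = S^2 * sigma, in W/(m*K^2)."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m

def figure_of_merit(seebeck, sigma, kappa, temperature):
    """zT = S^2 * sigma * T / kappa (dimensionless)."""
    return power_factor(seebeck, sigma) * temperature / kappa

S = 33e-6        # Seebeck coefficient, V/K
sigma = 158e2    # electrical conductivity: 158 S/cm = 15800 S/m
kappa = 0.13     # thermal conductivity, W/(m*K)
T = 300.0        # assumed absolute temperature, K

pf = power_factor(S, sigma)
print(f"PF = {pf * 1e6:.1f} uW/(K^2*m)")                 # ~17, as reported
print(f"zT = {figure_of_merit(S, sigma, kappa, T):.3f}")  # ~0.04
```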
The Seebeck coefficients were positive in all the films measured, showing that PEDOT is a p-type semiconductor (see Figure 4B). This magnitude decreased as the deposition time increased, except for the film grown at 1.4 V for 5 min. The Seebeck coefficients of the films grown at 1.3 V were 31 ± 2, 33 ± 1, and 28 ± 1 µV/K for 5, 10, and 15 min, respectively, while those of the films deposited at 1.4 V were 27 ± 2, 21 ± 2, and 13 ± 4 µV/K for 5, 10, and 15 min, respectively. Overall, the Seebeck coefficient was higher for the films deposited at 1.3 V, with the highest value found in the film deposited for 10 min. The Seebeck coefficients reported in the literature for films grown under similar conditions (electropolymerization in acetonitrile with lithium perchlorate) are 9 µV/K [11] and 18.9 µV/K [12]; the best film grown in this work thus exhibits a Seebeck coefficient 1.8 times higher than the highest reported value. From the measured electrical conductivity and Seebeck coefficient, the highest power factor, 17 ± 2 µW/(K²·m), was calculated for the film grown at 1.3 V for 10 min. This power factor is more than two times higher than those obtained in previous studies [11,12]. Thus, by optimizing only the electropolymerization parameters, it was possible to improve the power factor of PEDOT films without resorting to any post-growth treatment. According to the SEM images, the spongy PEDOT films present a porous structure, which must be considered when studying the thermal conductivity. In this work, the photoacoustic (PA) technique was calibrated and used to extract the thermal conductivity of the films at room temperature, as in our previous works [2,19,31-35]. The PA measurements are fitted well by a film composed of 90% polymer and 10% air. The thermal conductivity values of the spongy films grown at 1.3 V are 0.19 ± 0.04, 0.13 ± 0.03, and 0.15 ± 0.03 W/m·K for deposition times of 5, 10, and 15 min, respectively. Taking into account the experimental error of the technique, all of these values are comparable. Compared with the literature values, 0.16 W/m·K for PEDOT films grown by oxidative chemical vapor deposition (oCVD) [43] and 0.19 W/m·K for electropolymerized PEDOT films using BTFMSI as the counter-ion [11], our values are similar, or even lower for the spongy film.
The thermal conductivity values measured in this study cannot be compared with those of other PEDOT films electropolymerized using LiClO₄ as a counter-ion, because they have not been measured until now. Regarding the zT calculation, according to Cappai et al. [44], the anisotropy of the thermal conductivity depends on the PEDOT chain length. Because the chain length is a difficult parameter to determine experimentally (the measurements must be conducted in the liquid state, and PEDOT films cannot be dissolved in any solvent), it is unclear whether zT can be calculated rigorously. If the figure of merit at room temperature is nevertheless estimated in this particular case, it would be (1.6 ± 0.6) × 10⁻², (3.9 ± 1.0) × 10⁻², and (2.8 ± 0.8) × 10⁻² for the films grown at 1.3 V for deposition times of 5, 10, and 15 min, respectively. The maximum figure of merit calculated under these assumptions is (3.9 ± 1.0) × 10⁻². This value is lower than the value (0.22) obtained by Culebras et al. [24] when BTFMSI was used as a counter-ion, but in our case the process has a much lower cost. The figure of merit obtained in this study likewise cannot be compared with that of other PEDOT films electropolymerized using LiClO₄ as a counter-ion, because it had not been measured until now; this is the first time that the figure of merit of PEDOT films electropolymerized in acetonitrile using lithium perchlorate as a counter-ion has been obtained.

3D-PEDOT Nanonetworks

To determine whether better-controlled nanostructuration and a controlled porosity improve the thermoelectric properties of PEDOT, the material was electropolymerized inside a 3D AAO template, prepared according to our previous work [29,30,34,45-51]. The electropolymerization potential of 1.3 V was chosen because it yielded the maximum power factor in the films. An electropolymerization time of 2 h was used, resulting in a thickness of 3.5 µm, as observed in the FE-SEM images of Figure 6A. The number of transversal channels was 15. Once the 3D PEDOT nanostructures had been prepared, their thermoelectric properties, electrical conductivity and Seebeck coefficient, were measured. The electrical conductivity was calculated from the sheet resistance (R_sheet), the number of transversal channels (N), and the height of these transversal channels (l) (see Figure 6B), following the approach used for the 3D bismuth telluride nanonetworks [20], according to Equation (2):

σ = 1/(R_sheet · N · l). (2)

The sheet resistance, the number of channels, and the transversal channel height were 1.2 × 10² Ω/sq, 15, and 45 nm, respectively, yielding an electrical conductivity of 124 ± 6 S/cm. The Seebeck coefficient was found to be 19 ± 2 µV/K.
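As a quick consistency check, the reconstruction of Equation (2) above, σ = 1/(R_sheet·N·l), reproduces the reported conductivity from the quoted sheet resistance and channel geometry; the power-factor line additionally assumes PF = S²σ.

```python
# Sketch of Equation (2) for the 3D nanonetwork, using the values quoted in
# the text. sigma = 1/(R_sheet * N * l) reproduces the reported conductivity;
# the power factor check uses PF = S^2 * sigma.

R_sheet = 1.2e2   # sheet resistance, ohm/sq
N = 15            # number of transversal channels
l = 45e-9         # transversal channel height, m

sigma = 1.0 / (R_sheet * N * l)            # S/m
print(f"sigma = {sigma / 100:.0f} S/cm")   # ~124 S/cm, as reported

S = 19e-6                                  # Seebeck coefficient, V/K
pf = S ** 2 * sigma
print(f"PF = {pf * 1e6:.1f} uW/(K^2*m)")   # ~4.5, consistent with 4.3 +/- 0.9
```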
The main advantage of growing these nanostructures is that they can be measured as normal films, without dissolving the AAO template. The power factor of the 3D-PEDOT nanostructures calculated from the electrical conductivity and Seebeck coefficient was 4.3 ± 0.9 µW/(K²·m). On the basis of these findings, we can conclude that controlling the air gaps of the spongy PEDOT material yields no improvement in its thermoelectric performance. These results differ from what is observed in inorganic materials such as 3D-Bi₂Te₃, where a strong improvement in thermoelectric performance has been measured [19,20].

Conclusions

PEDOT films were grown using a low-cost technique based on EDOT electropolymerization in acetonitrile with lithium perchlorate as a counter-ion. The effect of the electropolymerization conditions (applied potential and deposition time) on the chemical structure and morphological properties of the films was investigated. It was shown that the vibrational modes of the PEDOT films were not affected by the applied potential or the electropolymerization time, given that all the obtained films presented the typical vibrational modes of PEDOT. The morphology of the PEDOT films depends on the applied potential: at 1.3 V, a cauliflower structure was observed, with the cauliflower size growing as the deposition time increased, whereas the PEDOT films grown at 1.4 V showed a network-like morphology. With regard to the positions of the Raman bands, a blue shift was observed for most Raman modes as the electropolymerization time increased, independently of the applied potential. This behavior may be associated with changes in chemical bond lengths due to structural changes induced by the growth conditions. The greatest Raman shift was observed in the symmetric Cα=Cβ(-O) stretching band; the red shift of this vibrational mode may indicate that the resonant structure of the PEDOT chain changed from a benzoid to a quinoid structure. The highest values of the electrical conductivity, Seebeck coefficient, and power factor were 158 ± 8 S/cm, 33 ± 1 µV/K, and 17 ± 2 µW/(K²·m), respectively, for the PEDOT film grown at 1.3 V for 10 min. This power factor is more than two times higher than the best values reported in the literature for PEDOT films electropolymerized in acetonitrile using lithium perchlorate as a counter-ion. The thermal conductivity of the film grown at 1.3 V for 10 min was estimated to be 0.13 ± 0.03 W/m·K.
Assuming the PEDOT chains are not too long, these values result in a figure of merit at room temperature of zT = (3.9 ± 1.0) × 10⁻². In addition, 3D PEDOT nanostructures were electropolymerized inside a 3D AAO template, and their thermoelectric properties were measured; these turned out to be of the same order of magnitude as those of the PEDOT films. No further improvement was observed upon controlling the 3D nanostructuration, in contrast to what has been found in inorganic 3D networks. Overall, an alternative approach to improving the thermoelectric performance of flexible and scalable PEDOT films was reported.
7,960.4
2022-12-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Neutron Dark Matter Decays and Correlation Coefficients of Neutron Beta Decays

As we have pointed out in (arXiv:1806.10107 [hep-ph]), the existence of neutron dark matter decay modes n → χ + anything, where χ is a dark matter fermion, as a solution of the neutron lifetime problem changes priorities and demands a description of the neutron lifetime τ_n = 888.0(2.0) s, measured in beam experiments and defined by the decay modes n → p + anything, within the Standard Model (SM). The latter requires the axial coupling constant λ to be equal to λ = −1.2690 (arXiv:1806.10107 [hep-ph]). Since such an axial coupling constant is excluded by the experimental data reported by the PERKEO II and UCNA Collaborations, the neutron lifetime τ_n = 888.0(2.0) s can be explained only by virtue of interactions beyond the SM, namely, by a Fierz interference term of order b ~ −10⁻² dependent on the scalar and tensor coupling constants. We give a complete analysis of all correlation coefficients of the neutron beta decays with a polarized neutron, taking into account the contributions of scalar and tensor interactions beyond the SM with the Fierz interference term b ~ −10⁻². We show that the obtained results agree well with contemporary experimental data and do not prevent the neutron, with the rate of the decay modes n → p + anything measured in beam experiments, from having dark matter decay modes n → χ + anything.

I. INTRODUCTION

Recently Fornal and Grinstein [1] have proposed to explain the neutron lifetime anomaly, related to a discrepancy between the experimental values of the neutron lifetime measured in bottle and beam experiments, through contributions of the neutron dark matter decay modes n → χ + γ and n → χ + γ* → χ + e⁻ + e⁺, where χ is a dark matter fermion, γ and γ* are real and virtual photons, and (e⁻e⁺) is an electron-positron pair. However, according to recent experimental data [2,3], the decay modes n → χ + γ and n → χ + e⁻ + e⁺ are suppressed. In [4] the experimental data on the decay mode n → χ + e⁻ + e⁺ in [3] have been interpreted as follows. The unobservability of the decay mode n → χ + e⁻ + e⁺, which is not mediated by a virtual photon, may also mean that the production of the electron-positron pair in such a decay is below the reaction threshold, i.e. the mass m_χ of the dark matter fermion χ obeys the constraint m_χ > m_n − 2m_e, where m_n and m_e are the masses of the neutron and electron (positron), respectively. Then, we have proposed that the neutron lifetime anomaly can be explained by the decay mode n → χ + ν_e + ν̄_e, where (ν_e ν̄_e) is a neutrino-antineutrino pair [4]. Since the neutrino ν_e and the electron e⁻ belong to the same doublet in the Standard Electroweak Model (SEM) [5], neutrino-antineutrino (ν_e ν̄_e) pairs couple to the neutron-dark matter current with the same strength as electron-positron (e⁻e⁺) pairs [4]. For the UV completion of the effective interaction (nχℓℓ̄), where ℓ (ℓ̄) is an electron (positron) or neutrino (antineutrino), we have proposed a gauge-invariant quantum field theory model with SU_L(2) × U_R(1) × U'_R(1) × U''_L(1) gauge symmetry. Such a quantum field theory model contains the sector of the SEM (or the Standard Model (SM) sector) [5] with SU_L(2) × U_R(1) gauge symmetry and the dark matter sector with U'_R(1) × U''_L(1) gauge symmetry.
In the physical phase, the dark matter sectors with U'_R(1) and U''_L(1) symmetries are responsible for the UV completion of the effective interaction (nχℓℓ̄) [4] and for the influence of dark matter on the dynamics of neutron stars [6-9], respectively. The dark matter sector with U''_L(1) gauge symmetry was constructed in analogy with the scenario proposed by Cline and Cornell [9]. This means that dark matter fermions with mass m_χ < m_n couple to a very light dark matter spin-1 boson Z'', providing the necessary repulsion between dark matter fermions to give neutron stars the possibility of reaching masses of about 2M⊙ [10], where M⊙ is the mass of the Sun [5]. We have shown that in the physical phase the predictions of the dark matter sector with U'_R(1) gauge symmetry do not contradict constraints on i) dark matter production in ATLAS experiments at the LHC, ii) the cross section for low-energy dark matter fermion-electron scattering (χ + e⁻ → χ + e⁻) [11], and iii) the cross section for low-energy dark matter fermion-antifermion annihilation into electron-positron pairs (χ + χ̄ → e⁻ + e⁺) [12]. We have also proposed that the reactions n → χ + ν_e + ν̄_e, n + n → χ + χ, n + n → χ + χ + ν_e + ν̄_e and χ + χ → n + n, allowed in our model, can serve as URCA processes for neutron star cooling [13-15]. Having assumed that the experimental data [2,3] can also be interpreted as the production of electron-positron pairs below the reaction threshold of the decay mode n → χ + e⁻ + e⁺, we have proposed to search for traces of dark matter fermions, induced by the nχe⁻e⁺ interaction, in low-energy inelastic electron-neutron scattering e⁻ + n → χ + e⁻. Such a reaction can be compared experimentally with low-energy elastic electron-neutron scattering e⁻ + n → n + e⁻ [16]-[20]. The differential cross section for the reaction e⁻ + n → χ + e⁻ possesses the following properties: i) it is inversely proportional to the velocity of the incoming electrons, ii) it is isotropic with respect to the outgoing electrons, and iii) the momenta of the outgoing electrons are much larger than the momenta of the incoming electrons. Because of these properties, the differential cross section for the reaction e⁻ + n → χ + e⁻ can in principle be distinguished above the background of elastic electron-neutron scattering e⁻ + n → n + e⁻. In order to have more processes with particles of the SM in the initial and final states that allow searches for dark matter in terrestrial laboratories, we have proposed to search for dark matter fermions by means of the electrodisintegration of the deuteron into dark matter fermions and protons, e⁻ + d → χ + p + e⁻, close to threshold [21], induced by the electron-neutron inelastic scattering e⁻ + n → χ + e⁻ with energies of the incoming electrons larger than the deuteron binding energy, which is about |ε_d| ∼ 2 MeV. We have calculated the triple-differential cross section for the reaction e⁻ + d → χ + p + e⁻ close to threshold and proposed to detect dark matter fermions from the electrodisintegration of the deuteron e⁻ + d → χ + p + e⁻ above the background e⁻ + d → n + p + e⁻ by detecting the outgoing electrons, protons and neutrons in coincidence. The absence of a neutron signal in coincidence with detected proton and outgoing-electron signals would testify to the observation of dark matter fermions in the final state of the electrodisintegration of the deuteron close to threshold.
As has been pointed out in [4], the acceptance of the existence of the neutron dark matter decay modes n → χ + anything is not innocent and demands the following price. Indeed, the neutron lifetime τ_n = 879.6(1.1) s, calculated in the SM [22] for the axial coupling constant λ = −1.2750(9) [31] by taking into account the complete set of corrections of order 10⁻³ caused by the weak magnetism and proton recoil, taken to next-to-leading order in the large nucleon mass M expansion, and the radiative corrections of order O(α/π), where α is the fine-structure constant [5], agrees well with the world-averaged lifetime of the neutron τ_n = 880.1(1.0) s [5] and with the neutron lifetime τ_n = 879.6(6) s averaged over the experimental values measured in bottle experiments [24]-[29] included in the Particle Data Group (PDG) compilation [5]. It also agrees with the value τ_n = 879.4(6) s and the axial coupling constant λ = −1.2755(11) obtained by Czarnecki et al. [30] by means of a global analysis of the experimental data on the neutron lifetime and axial coupling constant. At first glance, such an agreement fully rules out any dark matter decay mode n → χ + anything of the neutron. For the neutron to have any dark matter decay mode n → χ + anything, the SM should explain the neutron lifetime τ_n = 888.0(2.0) s measured in beam experiments, instead of the neutron lifetime τ_n = 879.6(6) s measured in bottle ones. As has been shown in [4], using the analytical expression for the neutron lifetime (see Eqs. (41) and (42) of Ref. [22]), the value τ_n = 888.0(2.0) s can be fitted by an axial coupling constant equal to λ = −1.2690. Since such a value of the axial coupling constant is ruled out by recent experiments [31]-[34] and by the global analysis of Czarnecki et al. [30], the hypothesis of the existence of the neutron dark matter decay modes must assert that the SM, including the complete set of corrections of order 10⁻³ caused by the weak magnetism, proton recoil and radiative corrections [35] (see also [22]), is not able to describe correctly the rate and correlation coefficients of the neutron decay modes n → p + anything. Hence, the theoretical description of the neutron lifetime measured in beam experiments should go beyond the SM. Indeed, keeping the value of the axial coupling constant equal to λ = −1.2750 or so [31]-[34] and having accepted the existence of the dark matter decay modes n → χ + anything, we also have to accept a sufficiently large contribution of the Fierz interference term b [36], dependent on the scalar and tensor coupling constants of interactions beyond the SM [37]-[50] (see also [35] and [22]). Using the results obtained in [22], the neutron lifetime τ_n = 888.0 s can be fitted with the axial coupling constant λ = −1.2750, the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_ud = 0.97420 [5] and the Fierz interference term b ∼ −0.014, calculated neglecting the quadratic contributions of the scalar and tensor coupling constants of interactions beyond the SM [4]. Thus, in order to confirm the possibility for the neutron to have any dark matter decay modes n → χ + anything, we have to show that a tangible influence of the Fierz interference term b ∼ −0.014 is restricted to the rate 1/τ_n = 1/888.0 s⁻¹ = 1.126 × 10⁻³ s⁻¹ of the neutron decay modes n → p + anything, measured in beam experiments, and that such a term does not affect the correlation coefficients of the electron-energy and angular distributions of the neutron β⁻-decay.
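As a rough numerical sketch of this fit, one can estimate the required Fierz term directly from the ratio of the two lifetimes, assuming the standard linear modification of the total rate, 1/τ ∝ (1 + b⟨m_e/E_e⟩), with the phase-space average ⟨m_e/E_e⟩ ≈ 0.655 for free-neutron decay; the latter is an assumed textbook input, not a value quoted in this paper.

```python
# Hedged sketch: estimate the Fierz term b needed to reconcile the beam
# lifetime with the SM (bottle) lifetime, assuming the linear modification
#     1/tau_beam = (1/tau_SM) * (1 + b * <m_e/E_e>).
# <m_e/E_e> ~ 0.655 is the standard phase-space average for neutron decay
# (an assumed input here, not quoted from this paper).

tau_SM = 879.6      # s, SM lifetime for lambda = -1.2750
tau_beam = 888.0    # s, beam measurement
me_over_Ee = 0.655  # phase-space average of m_e/E_e

b = (tau_SM / tau_beam - 1.0) / me_over_Ee
print(f"b ~ {b:.4f}")   # ~ -0.0144, matching b = -1.44e-2 in the text
```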
This paper is addressed to the analysis of the contributions of scalar and tensor interactions beyond the SM to the rate of the neutron decay modes n → p + anything, measured in beam experiments, and to the correlation coefficients of the neutron β⁻-decay with polarized neutron, polarized electron and unpolarized proton. We take into account the contributions of the SM, including the complete set of corrections of order 10⁻³ caused by the weak magnetism and proton recoil, calculated to next-to-leading order in the large nucleon mass M expansion [51,52] (see also [35] and [22,53]), and the radiative corrections of order O(α/π), calculated to leading order in the large nucleon mass expansion [54]-[57] (see also [35] and [22,53]). We search for any solution for the values of the scalar and tensor coupling constants of interactions beyond the SM that fits the rate 1/τ_n = 1.126 × 10⁻³ s⁻¹ of the neutron decay modes n → p + anything, measured in beam experiments, and the experimental data on the correlation coefficients of the neutron β⁻-decay under consideration. The existence of such a solution for the values of the scalar and tensor coupling constants of interactions beyond the SM would imply an allowance for the neutron to have the dark matter decay modes n → χ + anything. The paper is organized as follows. In section II we give the electron-energy and angular distribution of the neutron β⁻-decay with polarized neutron, polarized electron and unpolarized proton. We write down the expressions for the correlation coefficients, including the contributions of the SM corrections of order 10⁻³ caused by the weak magnetism and proton recoil to next-to-leading order in the large nucleon mass expansion, the radiative corrections of order O(α/π), and the contributions of scalar and tensor interactions beyond the SM, calculated to leading order in the large nucleon mass M expansion. In section III we analyse the rate 1/τ_n = 1.126 × 10⁻³ s⁻¹ of the neutron decay modes n → p + anything, measured in beam experiments, and fit it by the contribution of the Fierz interference term, taking into account the quadratic contributions of scalar and tensor interactions beyond the SM. In section IV we define the correlation coefficients for real scalar and tensor coupling constants, setting C̄_S = −C_S and C̄_T = −C_T. In section V we propose the solution C_S = −C̄_S = −C_T = C̄_T = −8.79 × 10⁻³ for the real scalar and tensor coupling constants and define the correlation coefficients in terms of the coupling constant C_S and the Fierz interference term b = −1.44 × 10⁻². We show that the contributions of the quadratic terms C_S² are of the standard order 10⁻⁴, whereas the contributions of the linear terms are of order 10⁻²-10⁻³. In section VI we analyse the contributions of the Fierz interference term to the electron and antineutrino asymmetries, defined by the correlations of the neutron spin with the electron and antineutrino 3-momenta, respectively, and to the asymmetry caused by the correlations of the electron and antineutrino 3-momenta. We show that the contribution of the Fierz interference term b = −1.44 × 10⁻² does not contradict the experimental data on the measurements of the correlation coefficients A₀, B₀ and also a₀, defined to leading order in the large nucleon mass M expansion [23,60]. In section VII we analyse the averaged values of the correlation coefficients N(E_e) and R(E_e). The correlation coefficient N(E_e) defines the neutron-electron spin-spin ξ_n · ξ_e correlations.
In turn, the correlation coefficient R(E_e) is caused by the P-odd and T-odd correlations defined by ξ_n · (k_e × ξ_e), where ξ_n and ξ_e are the unit vectors of the neutron and electron polarizations and k_e is the electron 3-momentum. We show that the averaged value of the correlation coefficient ⟨N(E_e)⟩ obtained in this paper agrees with the experimental value within two standard deviations. In turn, the averaged value of the correlation coefficient ⟨R(E_e)⟩ acquires relative contributions of order 10⁻⁴ caused by interactions beyond the SM. However, the solution C_S = −C̄_S = −C_T = C̄_T = −8.79 × 10⁻³, being valid for the neutron β⁻-decay, fails when applied to the analysis of the superallowed 0⁺ → 0⁺ transitions. Indeed, as has been found by Hardy and Towner [58] and by González-Alonso et al. [59], the scalar coupling constant should obey the constraints |C_S| ≃ 0.0014(13) and |C_S| ≃ 0.0014(12), respectively. Since in the superallowed 0⁺ → 0⁺ transitions the scalar coupling constant is commensurable with zero, in section VIII we propose the solution C̄_S = −C_S = 0 and C_T = −C̄_T = 1.11 × 10⁻² with the Fierz interference term b = −1.44 × 10⁻². The Fierz interference term has the same value for the two different solutions, since it is well defined in the linear approximation. We show that the correlation coefficients and asymmetries of the neutron β⁻-decay, calculated for the solution C̄_S = −C_S = 0 and C_T = −C̄_T = 1.11 × 10⁻² with the Fierz interference term b = −1.44 × 10⁻², do not contradict contemporary experimental data. In section IX we discuss the obtained results. We argue that the obtained agreement between the theoretical values of the correlation coefficients, defined by the contributions of the SM to order 10⁻³, the Fierz interference term b = −1.44 × 10⁻² and the other linear and quadratic contributions of the scalar and tensor coupling constants of interactions beyond the SM, implies an allowance for the neutron to have dark matter decay modes n → χ + anything.

II. ELECTRON-ENERGY AND ANGULAR DISTRIBUTION OF NEUTRON β⁻-DECAY WITH POLARIZED NEUTRON, POLARIZED ELECTRON AND UNPOLARIZED PROTON

The electron-energy and angular distribution of the neutron β⁻-decay with polarized neutron and electron and unpolarized proton takes a form in which G_F = 1.1664 × 10⁻¹¹ MeV⁻² and V_ud = 0.97420(21) are the Fermi weak coupling constant and the Cabibbo-Kobayashi-Maskawa (CKM) matrix element [5], respectively; λ = −1.2750(9) is the real axial coupling constant [23]; E₀ = 1.2927 MeV is the end-point energy of the electron spectrum, calculated for m_n = 939.5654 MeV, m_p = 938.2720 MeV and m_e = 0.5110 MeV [5]; ξ_n and ξ_e are the unit polarization vectors of the neutron and electron, respectively; and F(E_e, Z = 1) is the relativistic Fermi function [40,61-63], which depends on r_p, the electric radius of the proton. In the numerical calculations we use r_p = 0.841 fm [64]. The Fermi function F(E_e, Z = 1) describes the final-state Coulomb proton-electron interaction. Then, b is the Fierz interference term [36,38]-[50] (see also [35] and [22]). The infinitesimal solid angles dΩ_e = sin ϑ_e dϑ_e dϕ_e and dΩ_ν = sin ϑ_ν dϑ_ν dϕ_ν are defined relative to the 3-momenta k_e and k_ν of the decay electron and antineutrino, respectively.
The correlation coefficients ζ(E_e), a(E_e) and so on, taking into account the contributions of the SM, of the scalar and tensor interactions beyond the SM [22,50], and of the Fierz interference term b, are given by Eq. (4), where the correlation coefficients X^(SM)(E_e) for X = ζ, a, A and so on are calculated within the SM, including the complete set of corrections caused by the weak magnetism and proton recoil of order O(E_e/M) and the radiative corrections of order O(α/π) [22,53]. In turn, the correlation coefficients b_F and X^(BSM) are defined in [22,50]. For the subsequent analysis it is convenient to rewrite the correlation coefficients of Eq. (4) in terms of the real and imaginary parts of the scalar and tensor coupling constants. Now we may proceed to the analysis of the contributions of the scalar and tensor interactions beyond the SM to the rate of the neutron decay modes n → p + anything, measured in beam experiments, and to the correlation coefficients under consideration.

IV. CORRELATION COEFFICIENTS

The aim of this paper is to find any plausible solution for the scalar and tensor coupling constants of interactions beyond the SM that is compatible with the present-day accuracy of the determination of the correlation coefficients of the neutron β⁻-decay. As a first step of the analysis of the contributions of the scalar and tensor interactions beyond the SM, we follow [44] and [50] and set i) the scalar and tensor coupling constants to be real and ii) C̄_j = −C_j for j = S, T. This yields the total correlation coefficients of Eq. (10). Now we may proceed to analysing possible solutions for the scalar and tensor coupling constants.

V. CORRELATION COEFFICIENTS AND SCALAR AND TENSOR COUPLING CONSTANTS. SOLUTION 1

The simplest solution, which immediately suggests itself, is C_S = −C_T. Setting C_S = −C_T, we transcribe the correlation coefficients of Eq. (10) into the form of Eq. (11). At C_S = −C_T, Eq. (8) reduces to a quadratic algebraic equation, of which we keep only the solution obeying the constraint |C_S| ≪ 1; in the linear approximation this gives Eq. (13). Plugging Eq. (13) into Eq. (11), we obtain the correlation coefficients corrected by the contributions of the scalar and tensor interactions beyond the SM (Eq. (14)), where we have used b_N = −b_F ≃ −b. Thus, we have shown that there exists the solution C_S = −C̄_S = −C_T = C̄_T = −8.79 × 10⁻³ for real scalar and tensor coupling constants, which determines reasonable contributions of interactions beyond the SM: of order 10⁻⁴ to the correlation coefficients a(E_e) and A(E_e) [65] and of order 10⁻³ to the correlation coefficients G(E_e), N(E_e) and Q_e(E_e). In turn, the correlation coefficient B(E_e) ∼ 1 acquires a correction of order 10⁻². Now we have to compare the obtained results with the experimental data. For this aim we have to analyse the asymmetries of the neutron β⁻-decay and the averaged values of the correlation coefficients. The most sensitive asymmetry of the neutron β⁻-decay is the electron asymmetry, caused by the correlations of the neutron spin ξ_n and the electron 3-momentum k_e, described by the scalar product ξ_n · k_e.
The experimental electron asymmetry A_exp(E_e) of electrons emitted forward and backward with respect to the neutron spin ξ_n into the solid angle ∆Ω₁₂ = 2π(cos θ₁ − cos θ₂), with 0 ≤ ϕ ≤ 2π and θ₁ ≤ θ_e ≤ θ₂, is equal to [22,50]

A_exp(E_e) = (1/2) β A(E_e) P_n (cos θ₁ + cos θ₂), (15)

where P_n = |ξ_n| ≤ 1 is the neutron spin polarization and the correlation coefficient A(E_e) is given in [22,50]. Its SM part contains the function f_n(E_e), which defines the radiative corrections calculated by Shann [55], and the function A_W(E_e), which has been calculated by Bilen'kii et al. [51] and by Wilkinson [52] taking into account the contributions of order O(E_e/M) caused by the weak magnetism and proton recoil (see also [22]). Following [31], we plot in Fig. 1 the function −(1/2)βA(E_e) in the electron energy region m_e ≤ E_e ≤ E₀, where the electron asymmetry calculated in the SM at b = 0 is given by the blue curve, whereas the electron asymmetry calculated with the contributions of interactions beyond the SM at b = −1.44 × 10⁻² is presented by the red curve. One may see that these two theoretical curves can practically not be distinguished in experiments. The results presented by the theoretical curves in Fig. 1 can also be confirmed by the following estimates. We estimate the correlation coefficient A(E_e) to leading order in the large nucleon mass M expansion, neglecting the radiative corrections; the result agrees with [34]. An analogous estimate can be made for the correlation coefficient a(E_e), describing the electron-antineutrino 3-momentum correlations defined by the scalar product k_e · k_ν, to leading order in the large nucleon mass M expansion. The antineutrino asymmetry of the neutron β⁻-decay is caused by the correlations of the neutron spin ξ_n and the antineutrino 3-momentum k_ν, defined by the scalar product ξ_n · k_ν. Following [66], we define the antineutrino asymmetry B_exp(E_e) as in [22]. This expression determines the asymmetry of the emission of the antineutrinos into the forward and backward hemispheres with respect to the neutron spin, where N∓∓(E_e) is the number of events of the emission of electron-proton pairs as a function of the electron energy E_e. The signs (++) and (−−) indicate that the electron-proton pairs were emitted parallel (++) and antiparallel (−−) to the direction of the neutron spin; this means that the antineutrinos were emitted antiparallel (++) and parallel (−−) to the direction of the neutron spin. The numbers of events N−−(E_e) and N++(E_e) are defined by the electron-energy and angular distribution of the neutron β⁻-decay, integrated over the forward and backward hemispheres relative to the neutron spin, respectively [22]. The analytical expression for B_exp(E_e) is given in [22] by Eq. (21) for r ≤ 1 and by Eq. (22) for r ≥ 1, where r = k_e/E_ν = k_e/(E₀ − E_e) [66] (see also [22]). In the neutron β⁻-decay with polarized neutron and electron and unpolarized proton, the averaged values have been measured only for the correlation coefficients N(E_e) and R(E_e): N_exp = ⟨N(E_e)⟩ = 0.067 ± 0.011 ± 0.004 and R_exp = ⟨R(E_e)⟩ = 0.004 ± 0.012 ± 0.005 [67]. Using Eq. (14) and the results obtained in [50,53], we get ⟨N(E_e)⟩ = 0.07685 and ⟨R(E_e)⟩ = 0.00089. The averaged value ⟨N(E_e)⟩ = 0.07685 agrees with the experimental one within one standard deviation.
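A minimal sketch of the experimental electron asymmetry of Eq. (15) above is given below, using the standard leading-order SM expression A₀ = −2λ(1 + λ)/(1 + 3λ²) and ignoring weak magnetism, proton recoil, radiative corrections and the Fierz term; this is an illustrative estimate, not the full A(E_e) of [22,50].

```python
import math

def beta(E_e_mev: float, m_e: float = 0.5110) -> float:
    """Electron velocity in units of c: beta = k_e / E_e."""
    return math.sqrt(E_e_mev**2 - m_e**2) / E_e_mev

def A0(lam: float) -> float:
    """Leading-order SM electron asymmetry coefficient."""
    return -2.0 * lam * (1.0 + lam) / (1.0 + 3.0 * lam**2)

def A_exp(E_e, lam, P_n, theta1_deg, theta2_deg):
    """Eq. (15): A_exp = (1/2) * beta * A * P_n * (cos th1 + cos th2)."""
    c1 = math.cos(math.radians(theta1_deg))
    c2 = math.cos(math.radians(theta2_deg))
    return 0.5 * beta(E_e) * A0(lam) * P_n * (c1 + c2)

lam = -1.2750
print(f"A0 = {A0(lam):.4f}")                                   # ~ -0.119
print(f"A_exp(0.75 MeV) = {A_exp(0.75, lam, 1.0, 0.0, 45.0):.4f}")
```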
In turn, as follows from Eq. (14), the averaged value of the correlation coefficient R(E_e) can acquire only a relative correction of order 10⁻⁴, caused by the scalar and tensor interactions beyond the SM. This does not contradict the experimental data of [67].

VIII. CORRELATION COEFFICIENTS AND SCALAR AND TENSOR COUPLING CONSTANTS. SOLUTION 2

The solution implying C_S = −C̄_S = −C_T = C̄_T = −8.79 × 10⁻³ and giving the Fierz interference term b = −1.44 × 10⁻² contradicts the constraints on the scalar coupling constant C_S extracted from the superallowed 0⁺ → 0⁺ transitions [58,59]. According to [58,59], the scalar coupling constant should obey the constraints |C_S| ≃ 0.0014(13) [58] and |C_S| ≃ 0.0014(12) [59], respectively. Since the analysis of the superallowed 0⁺ → 0⁺ transitions implies that the real scalar coupling constant C_S should be commensurable with zero, in this section we propose another solution, setting C̄_S = −C_S = 0 and C_T = −C̄_T. In this case the correlation coefficients in Eq. (10) and the terms b_F, b_E and b_N take a simplified form. At C_S = 0 the algebraic equation Eq. (8) has a solution, of which we keep only the one obeying the constraint |C_T| ≪ 1; in the linear approximation this gives the correlation coefficients of Eq. (27). It is obvious that the correlation coefficients of Eq. (27) and the corresponding asymmetries of the neutron β⁻-decay do not contradict contemporary experimental data. Our results obtained above for the asymmetries of the neutron β⁻-decay with polarized neutron and unpolarized proton and electron are practically unchanged. Better agreement is obtained for the averaged value of the correlation coefficient N(E_e) of the neutron-electron spin-spin correlations in the neutron β⁻-decay with polarized neutron and electron and unpolarized proton: we get ⟨N(E_e)⟩ = 0.07185, which agrees well with the experimental value N_exp = ⟨N(E_e)⟩ = 0.067 ± 0.011 ± 0.004 [67].

IX. DISCUSSION

The main aim of this paper is to show that the fit of the rate of the neutron decay modes n → p + anything, measured in beam experiments, by the Fierz interference term b = −1.44 × 10⁻² does not contradict contemporary experimental data on the values of the correlation coefficients of the neutron β⁻-decay with polarized neutron, polarized electron and unpolarized proton. We have found that there exist at least two solutions of interest for the real scalar and tensor coupling constants: 1) C_S = −C̄_S = −C_T = C̄_T = −8.79 × 10⁻³ and 2) C̄_S = −C_S = 0 and C_T = −C̄_T = 1.11 × 10⁻². These solutions define the Fierz interference term b = −1.44 × 10⁻², and their contributions of order 10⁻⁴-10⁻² to the correlation coefficients do not contradict contemporary experimental data on the correlation coefficients and asymmetries of the neutron β⁻-decay with polarized neutron, polarized electron and unpolarized proton.
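As a consistency sketch for the two solutions, the standard Jackson-Treiman-type linear expression for the Fierz term, b ≈ 2(C_S + 3λC_T)/(1 + 3λ²) with C_V = 1 and C_A = λ, reproduces b = −1.44 × 10⁻² for both sets of coupling constants; note that this textbook form is an assumption here, not the paper's own definition of b_F in Eq. (4).

```python
# Consistency sketch for the two proposed solutions, assuming the standard
# linear (Jackson-Treiman-type) Fierz term with C_V = 1, C_A = lambda:
#     b ~ 2 * (C_S + 3 * lambda * C_T) / (1 + 3 * lambda**2).
# This is an assumed textbook form, not the paper's Eq. (4).

lam = -1.2750

def fierz_b(C_S: float, C_T: float, lam: float) -> float:
    return 2.0 * (C_S + 3.0 * lam * C_T) / (1.0 + 3.0 * lam**2)

# Solution 1: C_S = -C_T = -8.79e-3; solution 2: C_S = 0, C_T = 1.11e-2.
for name, C_S, C_T in [("solution 1", -8.79e-3, 8.79e-3),
                       ("solution 2", 0.0, 1.11e-2)]:
    print(f"{name}: b = {fierz_b(C_S, C_T, lam):.4f}")   # both ~ -0.0144
```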
The results of such an analysis of the rate of the neutron decay modes n → p + anything, measured in beam experiments, and of the correlation coefficients of the neutron β⁻-decay can be interpreted as an allowance for the neutron to have the dark matter decay modes n → χ + anything, which have been analysed in [4] in the physical phase of a quantum field theory model with SU_L(2) × U_R(1) × U'_R(1) × U''_L(1) gauge symmetry. The traces of the dark matter fermions χ with mass m_χ < m_n can be searched for in terrestrial laboratories through measurements of the differential cross section for low-energy inelastic electron-neutron scattering e⁻ + n → χ + e⁻ [4] and of the triple-differential cross section for the electrodisintegration of the deuteron into dark matter fermions and protons e⁻ + d → χ + p + e⁻ close to threshold [21]. Our analysis of the rate of the neutron decay modes n → p + anything, measured in beam experiments, and of the correlation coefficients of the neutron β⁻-decay, taking into account the complete set of corrections of order 10⁻³ caused by the weak magnetism and proton recoil of order O(E_e/M), the radiative corrections of order O(α/π), and the Fierz interference term b = −1.44 × 10⁻², does not diminish the important role of the SM corrections of order 10⁻⁵, which has been pointed out in [50,53,69,70]. As has been shown in [53], the SM corrections of order 10⁻⁵ concern i) Wilkinson's corrections [52], i.e. the higher-order corrections caused by 1) the proton recoil in the Coulomb electron-proton final-state interaction, 2) the finite proton radius, 3) the proton-lepton convolution and 4) the higher-order outer radiative corrections, and ii) the higher-order corrections defined by 1) the radiative corrections of order O(α²/π²), calculated to leading order in the large nucleon mass expansion, 2) the radiative corrections of order O(αE_e/M), calculated to next-to-leading order in the large nucleon mass expansion, which depend strongly on the contributions of the hadronic structure of the nucleon [69,70], and 3) the corrections of the weak magnetism and proton recoil of order O(E_e²/M²), calculated to next-to-next-to-leading order in the large nucleon mass expansion [50,69]. These theoretical corrections should provide, for the analysis of the experimental data of "discovery" experiments, the required 5σ level of experimental uncertainties of a few parts in 10⁵ [53]. The important role of strong low-energy interactions and of the contributions of the hadronic structure of the nucleon for a correct gauge-invariant calculation of the radiative corrections of order O(αE_e/M) and O(α²/π²) as functions of the electron energy E_e has been pointed out in [50,69]. This agrees well with Weinberg's assertion about the important role of strong low-energy interactions in decay processes [71]. A procedure for the calculation of these radiative corrections to the neutron β⁻-decays, with a consistent account of the contributions of strong low-energy interactions leading to gauge-invariant observable expressions dependent on the electron energy E_e and determined at the confidence level of Sirlin's radiative corrections [54], has been proposed in [69,70].
The calculation of the SM corrections of order 10⁻⁵ should also give a theoretical background for the experimental analysis of the corrections caused by the second-class currents [72], which have been analysed in the neutron β⁻-decay with polarized neutron and unpolarized proton and electron by Gardner and Zhang [48] and by Gardner and Plaster [49], and in the neutron β⁻-decay with polarized neutron and electron and unpolarized proton by Ivanov et al. [50]. Finalizing our discussion, we would like to emphasize that the prospects for developing investigations of the neutron β⁻-decays with the contribution of the Fierz interference term b = −1.44 × 10⁻² become real only in the case of a discovery of the neutron dark matter decay modes n → χ + anything in terrestrial laboratories, by measuring the differential cross sections for inelastic low-energy electron-neutron scattering e⁻ + n → χ + e⁻ and for the electrodisintegration of the deuteron into dark matter fermions and protons e⁻ + d → χ + p + e⁻ close to threshold. These processes are induced by the same interaction nχe⁻e⁺, which has the strength of the interaction nχν_eν̄_e responsible for the dark matter decay mode n → χ + ν_e + ν̄_e that allows one to explain the neutron lifetime anomaly. Of course, indirect confirmations of the existence of the neutron dark matter decay mode n → χ + ν_e + ν̄_e through the evolution of neutron stars and neutron star cooling would also call for a revision of the experimental data on the neutron β⁻-decays, with substantially improved accuracies of the measurements of the correlation coefficients and asymmetries, sufficient to be sensitive to the contribution of the Fierz interference term b = −1.44 × 10⁻². Such a value of the Fierz interference term is required by the necessity to fit the rate of the neutron decay modes n → p + anything measured in beam experiments. It is obvious that, in this connection, the relative accuracy of the measurements of the rate of the neutron decay modes n → p + anything (or of the neutron lifetime τ_n = 888.0(2.0) s) in beam experiments should be improved to a few parts in 10⁴ or even better.
8,197
2018-08-26T00:00:00.000
[ "Physics" ]
NAP1L1: A Novel Human Colorectal Cancer Biomarker Derived From Animal Models of Apc Inactivation

Introduction: Colorectal cancer (CRC) is the second leading cause of cancer death worldwide, and most deaths result from metastases. We have analyzed animal models in which Apc, a gene that is frequently mutated during the early stages of colorectal carcinogenesis, was inactivated, together with human samples, to try to identify novel potential biomarkers for CRC. Materials and Methods: We initially compared the proteomic and transcriptomic profiles of the small intestinal epithelium of transgenic mice in which Apc and/or Myc had been inactivated. We then studied the mRNA and immunohistochemical expression of one protein that we identified as showing altered expression following Apc inactivation, nucleosome assembly protein 1-like 1 (NAP1L1), in human CRC samples and performed a prognostic correlation between biomarker expression and survival in CRC patients. Results: Nap1l1 mRNA expression was increased in the mouse small intestine following Apc deletion in a Myc-dependent manner and was also increased in human CRC samples. Immunohistochemical NAP1L1 expression was decreased in human CRC samples relative to matched adjacent normal colonic tissue. In a separate cohort of 75 CRC patients, we found a strong correlation between NAP1L1 nuclear expression and overall survival in those patients who had stage III and IV cancers. Conclusion: NAP1L1 expression is increased in the mouse small intestine following Apc inactivation, and its expression is also altered in human CRC. Immunohistochemical NAP1L1 nuclear expression correlated with overall survival in a cohort of CRC patients. Further studies are now required to clarify the role of this protein in CRC.

INTRODUCTION

Colorectal cancer (CRC) is the third most common cancer type and the second leading cause of cancer death worldwide (1). Deaths usually result from recurrent and metastatic disease. Most international guidelines recommend chemotherapy to reduce the risk of recurrence in stage III tumors and to prolong survival in stage IV cancers (2). Conversely, chemotherapy is generally not used in stage I and most stage II tumors. However, some patients with low-risk stage III disease may respond well following courses of chemotherapy that are shorter than the 6-month standard schedule, although the definition of "low-risk" has not been well established in this scenario (3,4). Additionally, almost 20% of patients who have stage II tumors, and who are therefore considered to have low-risk disease, relapse and eventually die from cancer progression (5). There is currently no accurate biomarker to better define prognosis within stage groups.
Therefore, biomarkers for prognostic stratification in CRC have the potential to improve the treatment decision-making process (4,6). We hypothesized that studying molecular mechanisms that are known to be involved in CRC development might yield promising novel biomarkers for this disease. Adenomatous polyposis coli (APC) inactivating mutations are the earliest and most common genetic alterations in the adenoma-carcinoma sequence that leads to CRC (7). Such mutations result in the accumulation of β-catenin within cells and activation of the Wnt signaling pathway (8). Animal models of Apc inactivation demonstrate epithelial transformation and tumor formation mirroring cancer development (9-11). One of these models is the AhCre+ Apc fl/fl mouse, an animal bearing loxP-flanked Apc alleles and a Cre-recombinase transgene. When this animal is exposed to β-naphthoflavone, Cre-recombinase transcription is activated, resulting in the deletion of the loxP-flanked Apc alleles specifically in the small intestinal epithelium and thus causing an acute activation of the Wnt pathway (9). The Apc Min/+ mouse has a germline mutation in one Apc allele, simulating a familial adenomatous polyposis (FAP) patient, and spontaneously develops multiple intestinal neoplasms during its life-span (12,13). The Myc gene is a Wnt target that plays an essential role in the development of malignant phenotypes upon Apc inactivation (14,15). We hypothesized that the analysis of mouse models of Apc and Apc/Myc deletion could lead to the discovery of genes or proteins with potential clinical use as human CRC biomarkers. Our group has previously published a proteomic evaluation of an animal model of CRC based on acute Apc deletion (the AhCre+ Apc fl/fl mouse) (16). A 4-plex iTRAQ labeling analysis identified 126 proteins that were significantly altered upon Apc deletion (76 up- and 50 down-regulated) and which were proposed as Wnt targets. We have now performed an additional analysis of this dataset by comparing the protein list with the transcriptomic data generated using Affymetrix arrays and intestinal tissues from the same mice (14). This study used Apc-deficient (AhCre+ Apc fl/fl Myc +/+) and double-mutant Apc-Myc-deficient (AhCre+ Apc fl/fl Myc fl/fl) mice to identify Myc-dependent Wnt pathway genes following Apc inactivation. We investigated whether there were any genes/proteins that showed congruent findings in both analyses according to strict criteria. One protein, nucleosome assembly protein 1-like 1 (NAP1L1), was identified that met our criteria. We therefore analyzed the expression of Nap1l1 in Apc-deficient (AhCre+ Apc fl/fl Myc +/+) and double-mutant Apc-Myc-deficient (AhCre+ Apc fl/fl Myc fl/fl) mice to assess whether its expression was Myc-dependent and therefore a potential biomarker of Wnt activation. Following confirmation of our findings, we investigated the mRNA and protein expression of NAP1L1 in tumor and adjacent normal mucosa samples from patients with CRC, as well as in colonic tissues from unaffected individuals.

Animal Samples

Mouse experiments were performed with UK Home Office approval under personal and project licenses (30/2737 and 30/3279), according to ARRIVE guidelines, and following local ethical review by the Cardiff University Animal Welfare Ethical Review Panel. The transgenic mice that were used in this study were generated and maintained at the University of Cardiff as previously described (14).
Small intestinal epithelial cell extracts were generated from these mice following injection of β-naphthoflavone, as described by Hammoudi et al. (16).

qPCR RT-PCR

For the mouse small intestinal tissue samples and the human CRC samples obtained in the United Kingdom (Wales cohorts 1 and 2), total RNA was used to synthesize first-strand cDNA using a Verso cDNA Kit (Thermo Scientific) and anchored oligo-dT primers (Thermo Scientific) according to the manufacturer's instructions. Single-stranded cDNA samples were amplified in a polymerase chain reaction (PCR) using sequence-specific primers (Eurogentec) and probes from the Universal Probe Library (Roche), designed using the Universal ProbeLibrary Assay Design Centre, with PCR Master Mix (Roche) and a LightCycler 480 (Roche) (see Supplementary Table 2 for the primers and probes used). For the human CRC samples recruited in Brazil, RNA was extracted using the RNeasy Mini Kit (Qiagen, Hilden, Germany). cDNA was produced using the TaqMan Reverse Transcription Reagents kit (Applied Biosystems, Carlsbad, CA, United States) according to the manufacturer's protocol. Quantitative PCR reactions were performed using the 7500 Fast Real-Time PCR System (Applied Biosystems, Foster City, CA, United States) (see Supplementary Table 3 for expression assay specifications).

Proteomic and Microarray Comparison Analysis

We set out to determine the subset of genes/proteins which demonstrated upregulation of both protein and mRNA following Apc deletion, but no increase in expression in AhCre+ Apc fl/fl Myc fl/fl mice. The MAXD/View-Program data files from the microarray analyses were used to calculate gene expression fold changes in the intestinal tissues from the various transgenic mice. These data were then combined with our previously published proteomic profile data (14), in which we identified proteins showing a greater than 1.2-fold increase in expression following Apc deletion, using iTRAQ analysis. The following features from the DNA microarray analysis were used to identify candidate Myc-dependent Wnt pathway proteins (as sketched in the code after this section): (i) a statistically significant (p < 0.05) greater than 2-fold increase in expression in Apc-deficient (AhCre+ Apc fl/fl) mice compared to wild-type mice (APC:WT); (ii) no significant increase in expression in the double-mutant Apc-Myc-deficient (AhCre+ Apc fl/fl Myc fl/fl) mice compared to the wild-type mice (a ratio value of 1:1, with boundaries of 0.75 and 1.25) (APCMYC:WT); and (iii) an AhCre+ Apc fl/fl Myc fl/fl to AhCre+ Apc fl/fl ratio < 0.5 with p < 0.05 (APCMYC:APC), with the findings confirmed by at least three Affymetrix probes corresponding to the protein.

United Kingdom Cohorts

Total RNA samples from CRC tumor tissues and adjacent uninvolved colonic mucosa (18 patients) were obtained from the Wales Cancer Bank (Wales cohort 1), with the ethical approval and informed consent associated with this tissue bank, and these samples were used in the initial gene expression studies. There were 9 male and 9 female patients, with 6 samples from stage I, 6 from stage II and 6 from stage III CRC, and the mean patient age was 69.3 years. Another set of 30 matched sample pairs was subsequently obtained from the same tissue bank (Wales cohort 2), and these were analyzed separately for validation of the findings. Wales cohort 2 had 9 samples with stage I, 8 samples with stage II and 13 samples with stage III CRC, and the mean patient age was 69.4 years.
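A minimal sketch of the selection criteria (i)-(iii) described above is given below, assuming a per-probe table of fold changes and p-values; the file name and column names are hypothetical, not those of the MAXD/View-Program files.

```python
import pandas as pd

# Sketch of the candidate-selection criteria, assuming a table with one row
# per Affymetrix probe. Column names (probe, gene, ratios, p-values) are
# hypothetical; the published analysis used MAXD/View data files.
df = pd.read_csv("microarray_ratios.csv")  # hypothetical input file

candidates = df[
    (df["apc_vs_wt_fold"] > 2.0) & (df["apc_vs_wt_p"] < 0.05)             # (i)
    & df["apcmyc_vs_wt_fold"].between(0.75, 1.25)                         # (ii)
    & (df["apcmyc_vs_apc_ratio"] < 0.5) & (df["apcmyc_vs_apc_p"] < 0.05)  # (iii)
]

# Require support from at least three probes per gene.
probe_counts = candidates.groupby("gene")["probe_id"].nunique()
genes = probe_counts[probe_counts >= 3].index.tolist()
print(genes)
```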
A tissue microarray (TMA) was constructed using samples from 19 CRC patients recruited at the Countess of Chester Hospital NHS Foundation Trust (Chester, United Kingdom), again with informed consent and local ethics committee approval (12/NW/0011). Cancer tissues were available for all cases (5 cases with stage I, 3 cases with stage II, 5 cases with stage III and 6 cases with stage IV CRC), whilst normal adjacent mucosa was only obtained for 8 of them. The analysis of this cohort was part of the immunohistochemistry (IHC) validation study, and the findings are presented in Supplementary Figure 2.

Brazil Cohort

Fresh frozen tissues from tumors removed from 25 CRC patients and normal colonic tissues from 10 normal individuals (who had a normal colonoscopy on a bowel cancer screening program) were collected, after informed consent was obtained, and were analyzed in the gene expression studies. The CRC samples were from 17 males and 8 females, with 7 stage I, 3 stage II, 8 stage III and 7 stage IV CRC, and the mean patient age was 55.9 years. The normal samples were from 3 males and 7 females, and the patients had a mean age of 54.7 years. For the initial Brazilian IHC study, samples from 32 patients were prospectively collected in Cuiaba, Brazil, between January 2013 and August 2015. Informed consent was obtained. Fragments from the tumor and from the normal adjacent mucosa (at least 10 cm from the tumor) were fixed in 10% buffered formalin for 24 h and then processed into paraffin blocks. Patient characteristics are described in the results section below. For the IHC prognostic study, samples were obtained for 75 CRC patients from the archives of two pathology laboratories in Cuiaba, Brazil (Laboratorio São Nicolau and the Julio Muller University Hospital Pathology Lab). Inclusion criteria were: (i) 4 or more years since diagnosis, (ii) presence of tumor tissue in the paraffin block, (iii) traceable patient survival information, and (iv) survival for at least 30 days after surgery. We tracked the patient's current health service to obtain mortality information. Alternatively, if no clinical information was available, we checked the Brazilian electronic death database "Sistema de Informacao de Mortalidade," which records all deaths and their causes. Overall survival was recorded as the interval between diagnosis and death from any cause (when death had occurred) or the date when the database was last checked (when death had not occurred).

Immunohistochemistry in the Validation Study

Four µm sections from paraffin blocks were subjected to standard protocols for IHC. Tris-buffered saline-Tween 0.1% (TBS-T) was used for all washes. In summary, the main steps were: dewaxing in xylene twice; endogenous peroxidase block using 3% hydrogen peroxide in methanol; rehydration in decreasing concentrations of ethanol solutions and, finally, distilled water; heat-induced epitope retrieval using 10 mM citrate buffer pH 6.0 in a microwave oven at 800 W for 20 min; blocking using 10% normal goat serum (Dako, Ely, United Kingdom) in TBS-T for 45 min at room temperature; primary rabbit anti-NAP1L1 antibody (cat.
number ab33076, Abcam, Cambridge, United Kingdom) 1:4,000 in 10% normal goat serum in TBS-T overnight at 4 °C in a humid chamber; biotinylated secondary goat anti-rabbit antibody (Dako, Ely, United Kingdom) solution 1:200 in 10% normal goat serum for 30 min at room temperature; avidin-biotin-peroxidase complex (Vectastain Elite ABC kit, Peterborough, United Kingdom) for 30 min at room temperature; visualization using 3,3′-diaminobenzidine (Sigmafast DAB tablets; Sigma, Gillingham, United Kingdom) substrate solution; application of hematoxylin counterstain; dehydration in increasing concentrations of ethanol; clearing in xylene; and mounting using DPX mounting medium (Sigma) and glass coverslips. Stained sections were viewed and photographed using a microscope and camera set (Leica Biosystems, Milton Keynes, United Kingdom). The same staining protocol was used for the additional United Kingdom patient cohort described in Supplementary Figure 2.

Immunohistochemistry in the Prognostic Study

The protocol adopted in this part of the research was the routine technique used in the São Nicolau Laboratory (Cuiaba, Brazil), a pathology lab with extensive expertise in IHC. All branded solutions and buffers were purchased from Cell-Marque/Sigma-Aldrich (Rocklin, CA, United States). Four µm tissue sections were dewaxed in xylene and rehydrated as previously described. After a wash step in distilled water, slides were immersed in Trilogy pre-treatment solution and incubated at 96 °C for 30 min for epitope retrieval. After this, the slides were washed in phosphate-buffered saline (PBS), Peroxide Block solution was added, and samples were incubated for 20 min. After another PBS wash, the primary antibody solution (same concentration as described above) was placed onto the samples and incubated for 20 min at room temperature. After washes in PBS, HiDef Detection amplifier (secondary antibody solution) was applied to the slides for 15 min. After a further PBS wash, the former step was repeated using HiDef Detection detector (a horseradish peroxidase polymer solution). Finally, color development was performed by incubating the slides with DAB substrate chromogen. Stained slides were counterstained, dehydrated and mounted as described above. Slides were photographed using an Axio Scope.A1 microscope coupled with an AxioCam HRc camera (Zeiss, Oberkochen, Germany). Some samples were stained using both the protocol described here and that in the previous section, and these confirmed that the staining patterns were similar (see Supplementary Figure 1).

Immunohistochemistry Scoring System

Scoring was performed electronically using the software ImageJ (publicly available at rsbweb.nih.gov/ij/). The images were initially edited in ImageJ to exclude non-epithelial/non-cancerous/stromal tissues. For cytoplasmic assessment, the plugin IHC Profiler was used (17). Based on the readings produced by this tool, we calculated a modified H-score, with final scores ranging from 0 to 300. For nuclear scoring, the plugin ImmunoRatio was used (18). It assesses the percentage of positive nuclei based on a threshold setting. Two microscopy fields (×400) containing at least 100 stained epithelial (in control cases) or cancer cells each were analyzed per sample.

Statistical Analysis

Comparisons of continuous variables were carried out using the Mann-Whitney U test or the Kruskal-Wallis test followed by the Dunn-Bonferroni test for post hoc comparison.
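For orientation, a classical H-score weights the percentages of strongly, moderately and weakly stained cells by 3, 2 and 1, giving the 0-300 range quoted above. The exact "modified" formula derived from the IHC Profiler readings is not reproduced in the text, so the sketch below assumes the standard weighting.

```python
def modified_h_score(pct_strong, pct_moderate, pct_weak):
    """Combine staining-zone percentages into a 0-300 H-score.

    pct_* are the percentages of pixels/cells scored as strongly,
    moderately and weakly positive (negative areas contribute weight 0).
    The standard H-score weighting is assumed here.
    """
    return 3 * pct_strong + 2 * pct_moderate + 1 * pct_weak

# Example: 20% strong, 30% moderate, 25% weak, 25% negative -> 145
print(modified_h_score(20, 30, 25))  # 145
```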
Categorical data were compared using the Chi-square test (or Fisher's exact test in case of fewer than five expected counts per cell in the contingency table). For the survival analysis, groups were assessed using the Kaplan-Meier method, and survival curves were compared by log-rank tests. When significant differences were observed, the Cox proportional hazards model was used for multivariate analysis. Two-sided p-values <0.05 were accepted as significant in the entire study. All statistical analyses were performed using the software IBM SPSS Statistics version 22 and R packages.

Combination of Proteomic and Transcriptomic Datasets

Our previously published proteomic data (16) was integrated with the DNA microarray data from the double-mutant Apc-Myc-deficient mice.

Evaluation of Nap1l1 mRNA Expression in Mouse Small Intestine

In order to validate whether Nap1l1 mRNA was upregulated following conditional Apc deletion in the mouse intestinal epithelium, qRT-PCR was carried out using mRNA extracted from small intestinal epithelial cell extracts from AhCre + Apc fl/fl Myc fl/fl, AhCre + Apc fl/fl, and AhCre + Myc fl/fl mice, 4 days post induction. We compared relative mRNA expression in these transgenic mouse cohorts with the mRNA expression levels in AhCre + Apc +/+ Myc +/+ (wild-type) mice. Results are shown as fold change relative to the wild-type mice (Figure 1). This analysis confirmed that Nap1l1 mRNA expression was significantly increased following Apc deletion in a Myc-dependent manner.

Evaluation of NAP1L1 mRNA Expression in Human Colorectal Cancer Samples

NAP1L1 was then analyzed in three cohorts of human CRC samples. We assessed the expression of NAP1L1 firstly in mRNA from CRC tissues and matched normal mucosa from 18 patients supplied by the Wales Cancer Bank. NAP1L1 demonstrated statistically significantly elevated mRNA levels in CRC samples (Figure 2, Wales cohort 1). We next performed a confirmatory study using a different set of 30 samples from the same Tissue Bank, and we observed consistent results (Figure 2, Wales cohort 2). In order to further validate the findings, we repeated the experiment using a different platform (in terms of equipment and reagents) and compared a cohort of 10 normal colonic samples (individuals without any colonoscopic evidence of colorectal neoplasia) and 25 CRC samples from Brazil (Figure 2, Brazil cohort). Once more, significantly increased levels of NAP1L1 mRNA were observed in CRC specimens, and this time the comparison was with normal colonic tissue from patients who had no evidence of colorectal neoplasia.

FIGURE 2 | qPCR analysis of NAP1L1 expression in CRC tumors presented as fold-change relative to normal tissues in different cohorts. Each column represents the relative quantification (fold-change) of NAP1L1 mRNA expression in CRC compared to the respective normal control (normal control expression mean = 1). Wales cohort 1, mean fold-change = 2.7; p < 0.05 (18 paired colorectal cancer and adjacent normal colonic tissue samples). Wales cohort 2, mean fold-change = 5.8; p < 0.001 (30 paired colorectal cancer and adjacent normal colonic tissue samples). Brazil cohort, mean fold-change = 7.9; p < 0.001 (10 normal samples from healthy individuals without colorectal neoplasia and 25 colorectal cancer samples). Mann-Whitney U test. Error bars: ±1 SE.
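The fold changes in Figure 2 can be reproduced, under stated assumptions, with a few lines of analysis code. The sketch below assumes the common 2^-ddCt quantification (the paper does not spell out its formula) and applies the Mann-Whitney U test used throughout the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def nap1l1_fold_change(dct_tumor, dct_normal):
    """Relative expression by the 2^-ddCt method (an assumption), rescaled
    so that the normal-control mean equals 1 as in Figure 2, then compared
    with a two-sided Mann-Whitney U test.

    Inputs are housekeeping-normalized dCt values per sample.
    """
    dct_tumor = np.asarray(dct_tumor, dtype=float)
    dct_normal = np.asarray(dct_normal, dtype=float)
    rq_tumor = 2.0 ** -(dct_tumor - dct_normal.mean())
    rq_normal = 2.0 ** -(dct_normal - dct_normal.mean())
    scale = rq_normal.mean()            # force normal-control mean = 1
    rq_tumor, rq_normal = rq_tumor / scale, rq_normal / scale
    _, p = mannwhitneyu(rq_tumor, rq_normal, alternative="two-sided")
    return rq_tumor.mean(), p           # mean fold change, p-value
```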
Confirmation of Differential Immuno-Expression of NAP1L1 in Human Colorectal Tissues

Immunohistochemistry for NAP1L1 was then performed in colorectal tissue samples from a different cohort of 32 patients, as described in section "Materials and Methods." Cancer tissues and the matched unaffected mucosa collected 10 cm from the primary lesion were analyzed. Scoring was performed electronically using the software ImageJ (publicly available at rsbweb.nih.gov/ij/) and the plugins IHC Profiler for cytoplasmic scoring (resulting in a modified H-score ranging from 0 to 300) (17) and ImmunoRatio for nuclear scoring (ranging from 0 to 100%) (18). The samples were subdivided into two groups with early stage (11 samples encompassing stages I and II) and late stage (21 samples including stages III and IV) CRC. We initially assessed the expression of β-catenin to confirm Wnt pathway activation and to validate the electronic scoring methods (Figure 3). A clear and statistically significant increase in both nuclear and cytoplasmic localization of β-catenin was observed in cancer tissues compared to the adjacent mucosa, as expected based on previous literature (19)(20)(21), thus validating our scoring system. Using the same scoring methods, we observed an opposite staining pattern for NAP1L1. A clear and statistically significant decrease in both the nuclear and cytoplasmic expression of NAP1L1 was seen in CRC tissues relative to the adjacent mucosa (Figure 4). No difference was detected between the early and late stage tumor groups for either marker. We also performed a confirmatory analysis using a different cohort of 19 patients from the Countess of Chester Hospital NHS Foundation Trust (Chester, United Kingdom). This analysis used a slightly different manual scoring method, as described in Supplementary Figure 2.

NAP1L1 Nuclear Expression Is a Strong Predictor of Survival in Late Stage CRC

Having demonstrated decreased NAP1L1 immunohistochemical expression in CRC samples, we investigated whether the expression pattern had any effect on patient outcome. We analyzed a further cohort of 75 CRC cases diagnosed between 2004 and 2012. Median follow-up was 84.7 months (range 48-153 months). Given the relatively small number of cases, cancer stages were again combined into two groups: early stage (stages I and II) and late stage (stages III and IV). Immunohistochemistry was conducted as described in section "Materials and Methods." Table 1 describes the characteristics of the patients included in this analysis. Initially, using mortality status as the binary event of interest, ROC curves were generated. The area under the curve (AUC) was 0.58 for the nuclear score and 0.60 for the cytoplasmic score. Cut-offs were determined either by manually assessing the best balance between sensitivity and specificity or electronically by the use of the software Cutoff Finder (22) and X-Tile (23). All methods suggested the same cut-off for nuclear staining: 32% (of positive nuclei). Use of this threshold resulted in a sensitivity of 61% and a specificity of 67.5% for discriminating mortality status. For cytoplasmic staining, sensitivity/specificity optimization suggested a cut-off of 135 (in a range from 0 to 300), yielding a sensitivity of 57% and a specificity of 54%. The electronic tools suggested higher cut-offs (167 and 168). Although these resulted in a higher specificity (92%), the sensitivity was low (29%).
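One way to reproduce this kind of cut-off selection is to maximize Youden's J along the ROC curve. This is only a plausible stand-in for the manual balancing and for Cutoff Finder/X-Tile, whose internal procedures differ.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def choose_cutoff(score, died):
    """Pick an IHC cut-off by maximizing Youden's J (sens + spec - 1).

    `score` must be oriented so that higher values predict the event;
    since low NAP1L1 predicts death here, pass e.g. -nuclear_score.
    Returns (threshold, sensitivity, specificity, AUC).
    """
    fpr, tpr, thresholds = roc_curve(died, score)
    j = tpr - fpr                      # Youden's J at each threshold
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best], roc_auc_score(died, score)
```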
Despite these differences, the prognostic results were similar, so the cytoplasmic cut-off of 135 was selected to describe the results. The prognostic cohort was therefore divided into two groups with low expression and high expression of NAP1L1. Groups were similar in terms of age, gender, stage and grade. Table 2 shows the clinicopathological characteristics of the groups according to the nuclear expression of NAP1L1. A similarly balanced distribution was also observed for cytoplasmic expression. Using the Kaplan-Meier method, cumulative survivals for the two groups (high and low nuclear expression) were compared. Initially, groups were assessed as a whole, regardless of disease stage (Figure 5A). A clear difference in cumulative survival was observed according to nuclear NAP1L1 staining (p = 0.012, log-rank test). In the multivariate analysis including age, gender, stage and grade (Cox proportional hazards model), the nuclear score was independently associated with cumulative survival. The high nuclear expression group exhibited a hazard ratio (HR) of 0.39 (95% CI: 0.17-0.87; p = 0.02), denoting a 61% reduction in cumulative mortality in this group. As a result, the estimated 5-year survival was 44.4% in the low expression group and 75% in the high expression group. Median duration of survival was 32 months in the low expression group, whilst it was not reached for the high expression cohort. The only additional variable also associated with survival was tumor stage (HR: 2.55; 95% CI: 1.01-6.43; p = 0.047), an expected finding since stage is a known prognostic factor in CRC. These results strongly suggest an association between NAP1L1 nuclear staining and survival in CRC patients. Conversely, cytoplasmic NAP1L1 staining was not associated with survival (Figure 5B) or with any other clinicopathological variable. We then analyzed survival according to NAP1L1 nuclear expression in different stage groups (Figures 5C,D). For early stage disease, no significant difference in survival was found. By contrast, a highly significant difference in survival was observed for the cohort containing stage III and IV tumors. Multivariate analysis once again demonstrated that the NAP1L1 nuclear score was an independent prognostic factor in CRC patients. The calculated HR (0.28; 95% CI: 0.11-0.71; p = 0.008) was even more notable than that observed for the entire cohort, now suggesting a 72% reduction in cumulative mortality. The 5-year survival advantage for high expression tumors was also greater: 70%, versus 34% for low expression cancers. Median survival was only 23 months in the low expression group and, again, was not reached in the high expression cohort.

DISCUSSION

The discovery of novel CRC biomarkers to assist in early diagnosis, prognostic stratification and prediction of response to treatment remains an unmet medical need. We hypothesized that the study of animal models of CRC based on transgenic Apc gene inactivation could lead to the discovery of novel useful CRC biomarkers in humans. By combining transcriptomic and proteomic analyses of small intestinal tissue from transgenic mice in which Apc and/or Myc had been specifically deleted, we identified NAP1L1 as the only gene/protein that showed significantly altered expression in an Apc- and Myc-dependent manner in all analyses. We confirmed these findings using qPCR in mouse small intestine and additionally demonstrated that NAP1L1 mRNA expression was increased in human CRC.
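The survival workflow described here (Kaplan-Meier groups compared by log-rank test, then a multivariate Cox model) can be sketched with the lifelines package; the data frame and its column names below are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def nap1l1_survival(df: pd.DataFrame):
    """Log-rank comparison plus multivariate Cox model, mirroring the
    analysis described in the text. `df` is a hypothetical frame with:
    time (months), event (1 = death), high_nuclear (1 if nuclear
    NAP1L1 >= 32%), age, male, late_stage, high_grade."""
    hi = df[df.high_nuclear == 1]
    lo = df[df.high_nuclear == 0]
    lr = logrank_test(hi.time, lo.time,
                      event_observed_A=hi.event, event_observed_B=lo.event)
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "high_nuclear", "age",
                "male", "late_stage", "high_grade"]],
            duration_col="time", event_col="event")
    # hazard_ratios_ reports exp(coef); ~0.39 would match the paper's HR
    return lr.p_value, cph.hazard_ratios_["high_nuclear"]
```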
It was unfortunately not possible to study whether there was any altered NAP1L1 expression in the colon of the AhCre mouse model, as there is no Cre-mediated recombination in the colon of these mice following injection of β-naphthoflavone and they have no colonic phenotype (24). NAP1L1 is a highly conserved histone chaperone protein which is one of five NAP1-like proteins in mammals (21,22). It has been suggested to play a role in mediating nucleosome formation and regulation of the H2A-H2B complex as well as nucleosome assembly (25), cell cycle progression, and cell proliferation (26). It has also been linked to embryogenesis and tissue differentiation (23)(24)(25). Few researchers have previously studied NAP1L1 expression in cancer cell lines or tissues. Drozdov et al. compared small intestinal neuroendocrine tumors (NETs) and normal enterochromaffin cell preparations, and showed a 13.7-fold increase in NAP1L1 expression in tumor tissues (27). However, no analysis of the adjacent mucosa was performed. Kidd et al. also suggested that NAP1L1 was increased in NETs but not in CRCs (28). Line et al. evaluated NAP1L1 mRNA expression in CRC and adjacent tissues as a secondary endpoint in a study primarily aimed at finding sero-reactive biomarkers (29). They showed that, among 15 cases of CRC, seven exhibited moderate increases in NAP1L1 expression (ranging from 2.9- to 9.3-fold) and eight cases showed expression levels similar to those in the corresponding adjacent mucosa. A recent paper has also demonstrated that NAP1L1 is a prognostic biomarker and contributes to doxorubicin chemotherapy resistance in hepatocellular carcinoma (30). Immunohistochemistry is used in routine clinical practice to assess the expression of proteins with prognostic or predictive value in other types of cancer, such as breast (31) and lung carcinomas (32), soft tissue sarcomas (33), and lymphomas (34). Given the absence of a standard scoring method for NAP1L1, we initially decided to assess both the nuclear and the cytoplasmic expression of the protein in our samples using electronic tools. Our results showed that NAP1L1 expression was decreased both in the nucleus and the cytoplasm of CRC tissues when compared to the normal adjacent mucosa. This was a somewhat unexpected finding, given the increased expression of NAP1L1 mRNA in animal models and in human tissues. Such discrepancy between mRNA and protein expression has, however, previously been demonstrated for other cancer markers (35)(36)(37). Several processes could be responsible, such as post-transcriptional modifications, protein degradation, secretion via exocytosis or alterations in subcellular protein localization. For example, a recent paper has reported that NAP1L1 undergoes alternative cleavage and polyadenylation in the more advanced stages of CRC (38). The full-length isoform of NAP1L1 was overrepresented in the cytoplasmic fraction of a CRC cell line which had a more metastatic phenotype. This may therefore represent one mechanism to explain the altered NAP1L1 subcellular localization that is reported in CRC specimens in our current manuscript. Counterintuitively, our finding of increased gene expression in the initial screen may have been a response to reduced protein content and not the primary event. Further research is required in order to clarify this issue. Qiao et al. (39) demonstrated that knockdown of NAP1L1 increased cellular proliferation, disrupted normal cell development and distribution, and caused global deregulation of gene expression.
These are classical hallmarks of cancer and of activated Wnt signaling. Qiao et al. also demonstrated that Nap1l1 knockdown resulted in reduced Rassf10 expression, low expression of which has been associated with poor survival in CRC patients in another study (40). Thus, reduced NAP1L1 protein expression could be mechanistically associated with tumor progression. We also assessed whether NAP1L1 expression was associated with prognosis. We retrospectively retrieved blocks from a cohort of CRC patients with more than 4 years of follow-up. Cut-offs for nuclear and cytoplasmic expression of NAP1L1 were established, and a survival analysis was performed. Nuclear expression of NAP1L1 correlated with overall survival in CRC. High nuclear expression was independently associated with an increase in median survival and 5-year survival estimates. Subgroup analysis, however, showed that the survival correlation was limited to late stage tumors (stages III and IV). No association between NAP1L1 nuclear expression and clinicopathologic variables (age, gender, stage and grade) was observed. These findings suggest that the expression of NAP1L1 could potentially help in discriminating low- and high-risk disease in stage III CRC cases and also in determining the aggressiveness of the disease in stage IV cancers. In both cases, this information could help to better define the best treatment approach. We acknowledge some limitations in this study. Although positive and relevant findings were observed, the use of small clinical sample sizes may have limited our observations. This also meant that it was necessary to evaluate cancer stages as combined groups rather than individually. Better prognostic stratification in stage II disease is urgently needed to improve the treatment decision-making process, and our data did not permit this. Moreover, analyzing larger cohorts of stage III and stage IV disease separately would also be desirable, as these stages are associated with markedly different clinical outcomes. This study provides proof of concept that the analysis of animal models of Wnt pathway activation may yield potentially useful CRC biomarkers in humans. We undertook a comprehensive assessment of NAP1L1 expression in animal models and clinical samples, and our findings suggest that it could be a prognostic biomarker for CRC. Confirmatory research studying larger sample cohorts and a better assessment of the role of this protein in CRC carcinogenesis is now recommended before this marker can be introduced routinely into clinical practice.

DATA AVAILABILITY STATEMENT

The datasets generated for this study can be found in the Broad Institute Gene Set Enrichment Analysis (GSEA) M1755, M1756, M1757, and M1578.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the respective institutions: the Brazilian National Commission for Research Ethics (CONEP), the Wales Cancer Bank and the Countess of Chester Hospital NHS Foundation Trust. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by the Cardiff University Animal Welfare Ethical Review Panel and the UK Home Office.

AUTHOR CONTRIBUTIONS

JJ, DP, KR, and AC designed the research project. JJ, FM, and DP supervised CQ's Ph.D. studies. JJ and DV supervised NA-K's MRes studies. KR managed the mouse inter-crosses and collection of murine samples. FS performed the qPCR of human samples.
CQ and NA-K performed the human sample collection, qPCR, immunohistochemistry staining and scoring, and data analysis. CQ, DP, and JJ drafted the manuscript, and all other authors critically reviewed the manuscript and approved the final version submitted. All authors contributed to the article and approved the submitted version.

ACKNOWLEDGMENTS

We acknowledge the contribution made by Dr. Abeer Hammoudi to the initial proteomic experiments. We would like to acknowledge the Laboratorio São Nicolau (Dr. Ivana Menezes, Cuiaba, Brazil) for helping with the collection of clinical samples and with the performance of immunohistochemistry for the prognostic study. We thank Elaine Taylor for assistance with mouse husbandry, and Mark Bishop and Matthew Zverev for technical support and genotyping of murine samples. Some of the content of this manuscript formed part of CQ's doctoral thesis to obtain a Ph.D. degree at the University of Liverpool (41).
Plasmon electron energy-gain spectroscopy

We explore multiple energy losses and gains undergone by swift electrons interacting with resonant evanescent light fields. We predict remarkably high gain probabilities in the range of a percent when the electrons pass near a resonant plasmonic structure under continuous-wave illumination conditions with moderate laser intensities ∼10⁸ W m⁻². Additionally, we observe fine structure in the dependence of the gain and loss probabilities on the light wavelength, which reveals a complex interplay between multiple plasmon-electron interactions. These results constitute a solid basis for the development of a new spectroscopy technique based upon the analysis of energy gains, capable of rendering information on the optical properties of the sampled resonant nanostructures. We illustrate this concept for plasmon-supporting noble metal nanoparticles.

Introduction

Since Feynman pointed out the suitability of electron microscopes to fulfill the need for better imaging down to the nanoscale [1], electron microscopy has undergone a tremendous 100-fold improvement in spatial resolution, down to ∼1 Å [2], just a factor of ∼40 larger than the de Broglie wavelength at typical electron-beam energies ∼200 keV. The analysis of energy exchanges between the electrons and the sample adds further information on the chemical composition and electronic structure of the specimen with similar spatial resolution. In particular, electron energy-loss spectroscopy (EELS) [3,4] has proved to be extremely useful to resolve chemical species by sampling differences in the electronic environment [5]. The spatial resolution of electron microscopes is well suited to study plasmons, the collective excitations of conduction electrons in metals [6]. Actually, plasmons were first revealed as energy-loss features in the spectra of electrons reflected from metal surfaces [7]. Since then, electron beams have become an important tool to yield information on plasmons [8,9]. More recently, EELS has been extensively used to map plasmons in metallic nanostructures [10,11], and it is thus helping to develop new applications of these collective modes to biomedicine [12], photovoltaics [13] and quantum optics [14]. For example, plasmons have recently been mapped in silver nanowires with an impressive energy resolution <0.1 eV, relying on a new generation of transmission electron microscopes (TEMs) that are equipped with electron monochromators [15], although they involve a compromise between signal intensity and energy resolution. However, the latter is still limited by the width of the zero-loss peak (i.e. the peak of electrons that have not undergone inelastic scattering). An alternative method that combines the spatial resolution of electron beams and the energy resolution of optical probes has been suggested, based on the analysis of energy gains experienced by the electrons [16,17]. In this so-called electron energy-gain spectroscopy (EEGS), electrons that have absorbed energy from an external light source appear on the negative side of the energy-loss spectrum, and the area under an energy-gain peak reflects the response of the sample at the illuminating frequency, thus increasing the energy resolution, which is no longer limited by the width of the electron zero-loss peak. In this context, the observation of multiple energy gains and losses in pulsed electrons in coincidence with pulsed laser irradiation constitutes an important step toward the experimental realization of EEGS [18,19].
The multiple energy transfers between the electrons and the samples are well understood [20][21][22], and in particular, the electrons have been shown to undergo a complex evolution involving multiple energy exchanges on the sub-femtosecond time scale [20]. The interesting possibility of interference between losses stimulated by external illumination and inelastic losses has been recently discussed [23]. In this paper, we extend our previous results and show that these multiphoton gains and losses can be useful not only for imaging but also for performing time- and space-resolved spectroscopy, particularly in plasmonic structures. This can be achieved by varying the external light frequency, and due to the field enhancement produced by the surface plasmons, it can be performed at low light intensities. Additionally, we develop a simple model to describe EELS, EEGS and cathodoluminescence (CL) in a unified quantum treatment that provides further intuition into the physical mechanisms underlying these processes.

Outline of the theory

In a TEM, the electrons can be described as a coherent superposition of plane waves [11]. The degrees of freedom associated with momentum components perpendicular to the beam direction z can be neglected for a swift electron, which can then be accurately described as a plane wave moving along ẑ. We intend to obtain the probability of gain and loss processes by solving the quantum-mechanical evolution of the electron wavefunction in the presence of a semi-classical coupling to the external laser field. Following the method derived in a previous paper [20], the unperturbed electron wavefunction ψ₀(z, t) is a Gaussian electron pulse of temporal duration ∼2Δₑ. Here, |N_k|² = ((π/2)^{1/2} Δₑ v_k)⁻¹ is a normalization constant, ħε_k = c(ħ²k² + mₑ²c²)^{1/2} is the relativistic electron energy and v_k = ∂ε_k/∂k is the pulse group velocity (v ≈ 0.7c for the 200 keV electrons considered here). We describe multiphoton energy gains and losses undergone by the electron using a semiclassical model in which the quantum-mechanical evolution of the electron is solved including its interaction with the evanescent light field. The electron-photon coupling Hamiltonian consists of two terms, one proportional to the absorption and the other to the emission of one photon, where ω is the central light frequency and the electric field parallel to the electron beam is described by a temporal Gaussian wave packet, with ∼2Δ_p the light pulse duration and τ the delay between the arrivals of the photon and the electron pulses at the position of the sample. The self-consistent electron wavefunction is readily given by the Lippmann-Schwinger equation [24], ψ(z, t) = ψ₀(z, t) + (G · Hψ)(z, t), where G is the one-dimensional electron Green function corresponding to propagation along ẑ. We solve this equation by writing the wavefunction as a sum over different perturbation orders [20] (equation (1)), where the dot expresses the convolution operator. In equation (1), N represents the order of scattering, which is also the number of emission and absorption events experienced by the electron. At an order of scattering N, we find electrons that have gained or lost |L| ≤ N photons. Equation (1) can be solved by recursion. From the resulting self-consistent wavefunction, we can readily calculate the probability that the electron ends up with momentum around k_L = k₀ + Lω/v, so that it has emitted (L < 0) or gained (L > 0) an amount of energy corresponding to |L| photons; this yields equation (2), where the C_N^L are constants [20].
Remarkably, we find that the multiphoton probabilities depend on the pulse durations and delay only through the ratios τ/Δ_p and Δₑ/Δ_p. Clearly, a delay in the arrival of photon and electron pulses translates into a decrease in the effective interaction strength, and therefore it acts as an eraser of the multiphoton probabilities.

Numerical results and discussion

We focus on the light-frequency dependence of the multiphoton exchange probabilities and discuss the interaction with a plasmonic sample, as we intend to analyze the suitability of EEGS to yield information on its optical response. In particular, we consider nanoshells consisting of a silica core (ε = 2) coated with either 5 nm of gold or 4 nm of silver. The full diameter of the particle is 100 nm in all cases. The choice of metal thickness is made to feature a spectrally isolated dipole plasmon around 700-800 nm light wavelength. Upon illumination with the laser external field E_ext, we approximate the induced electric field in that spectral region by its dominant dipolar component E = [k²p + (p·∇)∇] e^{ikr}/r, where k = ω/c is the light wavevector and p = αE_ext is the induced dipole moment. Here, we incorporate retardation effects in the polarizability α by expressing it in terms of the dipolar Mie coefficient as α = 3t₁^E/2k³, where t₁^E finds a closed-form analytical expression for spherical shells [25]. We use a measured frequency-dependent dielectric function for silver and gold [26] to represent the response of the metallic coating of the nanoshells. Figure 1 depicts a sketch of the system, for which we assume co-parallel electron and laser beams. Under this configuration, the electron is only sensitive to the induced field, as the incident field is normal to the electron velocity and losses/gains are mediated by the electric field along the beam direction. Similar results are obtained for other light incidence directions and polarization conditions, and although the particle-mediated light-electron coupling strength depends on these parameters, our qualitative conclusions remain unchanged. The coupling strength can be intuitively understood by examining the z component of the electric field produced by the induced dipole. The extinction cross-section σ_ext of the nanoshells (figure 1(b), solid curves, obtained from σ_ext = 4πk Im{α}) shows a prominent near-infrared plasmon that is isolated from other modes of the system (cf solid and dashed curves, with the latter obtained with inclusion of all multipoles [25]; notice that both calculations are in excellent agreement, except for the ∼580 nm quadrupolar plasmon of silver, which is obviously absent from the dipolar results). It is important to realize that only the component of the electric field along z contributes to the photon-electron coupling (i.e. only the induced field contributes). Figure 1(c) shows the intensity of the induced field for illumination at the peak plasmon frequency, which exhibits a clear enhancement with respect to the incident field. Silver nanoshells have larger on-resonance extinction and induced field, which translate into higher multiphoton probabilities (see below). Interestingly, the occupation probability of the electron states changes dramatically when varying the frequency of the incoming light. When the frequency approaches the dipole plasmon resonance of the particle, the near-field intensity is enhanced, and therefore the interaction with the electron is stronger.
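For readers who want to reproduce the qualitative shape of figure 1(b), the sketch below uses the quasistatic coated-sphere polarizability (the Bohren-Huffman form) instead of the fully retarded dipolar Mie coefficient t₁^E used in the paper; it captures the dipole resonance but not the retardation shifts.

```python
import numpy as np

def coated_sphere_extinction(eps_core, eps_shell, r_core, r_shell, wavelength):
    """Extinction cross-section of a core-shell sphere in the quasistatic
    dipole limit (an assumption; the paper uses the retarded Mie t1^E).
    Lengths in nm, host medium is vacuum (eps_m = 1); returns sigma in nm^2."""
    eps_m = 1.0
    f = (r_core / r_shell) ** 3                      # core volume fraction
    num = ((eps_shell - eps_m) * (eps_core + 2 * eps_shell)
           + f * (eps_core - eps_shell) * (eps_m + 2 * eps_shell))
    den = ((eps_shell + 2 * eps_m) * (eps_core + 2 * eps_shell)
           + f * (2 * eps_shell - 2 * eps_m) * (eps_core - eps_shell))
    alpha = r_shell ** 3 * num / den                 # dipolar polarizability
    k = 2 * np.pi / wavelength                       # light wavevector k = w/c
    return 4 * np.pi * k * np.imag(alpha)            # sigma_ext = 4*pi*k*Im{alpha}

# Example: silica core (eps = 2) with a metallic shell (eps from tables [26])
print(coated_sphere_extinction(2.0, -20 + 1.0j, 45.0, 50.0, 750.0))
```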
We show in figure 2(a) that the electron is mostly in the elastic or zero-loss channel for low intensities. When the intensity increases, this channel is increasingly depleted and shows a dip at the plasmon frequency. This depletion is accompanied by a complex dynamics that results in a sizable population of electron inelastic channels, as shown in figure 2(b). We must stress that the intensity of the external field needed to produce these effects is orders of magnitude lower than that reported in previous works [18,20], thanks to the mediation of the particle plasmons, which act as optical amplifiers. In particular, figure 2(a) shows small depletion of the zero-loss channel even for peak laser intensities as low as 10⁸ W m⁻². Incidentally, there is a small shift between near- and far-field resonance frequencies, as previously reported for light scattering from nanoparticles [27,28]. This is clear from figure 2, where the gray dotted line, representing the plasmon frequency as obtained from the extinction cross-section (far field), is blue-shifted a few nanometers with respect to the maximum depletion of the elastic electron component. This is a manifestation of the fact that the coupling between photons and electrons is mediated by the near field, which is dominated by evanescent components (i.e. the localized plasmons die away from the particle, and they mainly consist of non-propagating fields involving wavevectors outside the light cone, where coupling to the electromagnetic field of the electron is possible). The occupation probability depends on the ratio between the photon and the electron pulse durations (see figure 3). In the limit of continuous-wave (cw) illumination (i.e. when the electron pulse is much shorter than the light pulse), the elastic signal is depleted at low intensities compared with the depletion for pulsed illumination, which can be intuitively understood from the stronger interaction associated with continuous plasmon excitation (cf e.g. the higher depopulation of the elastic channel for continuous and pulsed illumination in figure 3). Compared with gold, the silver nanoshell produces higher depletion of the elastic channel at lower intensities, compatible with cw illumination without damaging the samples. Multiphoton events can be observed for intensities as low as ∼10⁹ W m⁻². Although we have focused on nanoshells because of the tutorial character of the dipolar model with which they can be described, similar conclusions can also be drawn for metallic nanorods, the plasmons of which can be tuned by changing their aspect ratio. In particular, the lowest-order dipolar mode of a rod is expected to also produce significant field enhancement that can yield even stronger EEGS signals under cw illumination conditions.

Unified analytical quantum model for electron energy-loss spectroscopy (EELS), electron energy-gain spectroscopy (EEGS) and cathodoluminescence (CL)

In order to place the above EEGS probabilities in perspective, it is useful to compare them with those of more traditional electron spectroscopies: EELS and CL. We formulate a simple quantum model in this section that unifies the description of all three spectroscopies and provides further insight into the mechanisms that underlie the exchanges between photons, plasmons and fast electrons. For simplicity, we consider a sample consisting of a plasmon-supporting small particle.
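The zero-loss depletion and sideband build-up can be illustrated with the familiar limiting case of a monochromatic evanescent field, in which the sideband populations reduce to squared Bessel functions of the coupling strength. This is a simplification of the paper's recursive multiple-scattering solution, not the model actually used above.

```python
import numpy as np
from scipy.special import jv

def sideband_populations(beta, l_max=5):
    """Population of the L-th energy sideband for a dimensionless coupling
    beta, using the monochromatic-field limit P_L = J_L(2|beta|)^2 (the
    standard photon-induced near-field result); the populations sum to 1."""
    ls = np.arange(-l_max, l_max + 1)
    p = jv(ls, 2 * abs(beta)) ** 2
    return dict(zip(ls.tolist(), p))

# Zero-loss depletion grows with coupling, i.e. with near-field intensity:
for b in (0.1, 0.5, 1.0):
    print(b, sideband_populations(b)[0])  # P_0 = J_0(2b)^2
```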
We describe photons and plasmons in terms of their annihilation (creation) operators a_i and b_l (a_i† and b_l†), where i and l label different modes of frequencies ω_i and ω̃_pl, respectively. We consider degenerate plasmons and include their inelastic decay rate Γ_pl as an imaginary part in ω̃_pl = ω_pl − iΓ_pl/2. Likewise, c_k† (c_k) creates (annihilates) a fast electron of energy ħε_k and momentum ħk ẑ. We neglect the dynamics of the electron along directions perpendicular to ẑ, which is a safe assumption for typical electron beams with small divergence angles. The Hamiltonian of this system is the sum of a non-interacting part and an interaction Hamiltonian consisting of just two components, H_int = H_ph−pl + H_e−pl, because electrons and photons do not directly couple in free space. The photon-plasmon coupling Hamiltonian is expressed in terms of the particle dipole operator and the quantized electromagnetic field, where V is the mode quantization volume, ε̂_i is the polarization vector and the particle is assumed to be at r = 0. We adopt the rotating-wave approximation, which allows us to write it with real coupling coefficients g_il. The electron-plasmon interaction Hamiltonian involves complex coupling constants g_{kk′l} given in appendix A, where γ = 1/√(1 − v²/c²) is the Lorentz factor, L is the electron quantization length along the beam direction, K₀ and K₁ are modified Bessel functions and R₀ is the impact parameter of the beam relative to the particle (see figure 1(a)). The states of the system consist of the tensor product of electron, photon and plasmon states. In the initial state, the electron has energy ε_{k₀}, the light mode i of the illuminating laser has a population of N ≫ 1 photons, and no plasmons are excited: |k₀ N_i 0_l⟩ = |k₀⟩ ⊗ |N_i⟩ ⊗ |0_l⟩. For low irradiation intensities, the interactions can be treated as perturbations, and we solve the evolution up to second order. In this picture, higher-order processes involving multiple plasmon excitation produce stimulated emission effects, which are discussed later in this section. At first order in perturbation theory (see figure 4(a)), two processes are possible: extinction of light by the nanoshell and electron energy loss. At second order, the processes that involve the creation of only one plasmon are CL and EEGS. For the system sketched in figure 1(a), the probabilities per unit of transferred energy for EELS, EEGS and CL are given by equations (11)-(13), which agree with previous results obtained from dielectric theory [11,17]. In these equations, α(ω) = (d²/3ħ) 1/(ω̃_pl − ω) is the particle polarizability (see appendix B.1.1), ω_i is the incident light frequency and I₀ = (c/2π)|E_ext|² is the light intensity, which, as expected, only appears in the photon-assisted process (EEGS). In the above, we discuss rather elementary processes in a diagrammatic fashion. This basic academic approach can nonetheless be applied to actual experiments, as we can regard the particle and its plasmons as an intermediate coupler between the incident photons and the electron, so that a factor proportional to the large number of incident photons N_i (i.e., I₀) pops up in equation (12). Alternatively, we could have described the laser by a coherent photon state, which excites a coherent plasmon state.
In both of these approaches, the inverse process of stimulated photon emission into state i (i.e., the electron loses energy ħω_i and a photon is emitted in this mode), mediated by particle plasmons, has exactly the same probability, but now multiplied by N_i + 1 instead of N_i. For large N_i, the stimulated EELS (SEELS) probability is approximately given by equation (12) (i.e., N_i + 1 ≈ N_i). Incidentally, EEGS, SEELS and CL are intimately related to the Einstein coefficients for absorption, stimulated emission and spontaneous emission, respectively. For the single plasmon mode under consideration, the absorption and stimulated emission coefficients are identical, and so are the EEGS and EELS matrix elements in the limit of large photon numbers. Figure 4(b) shows calculated energy-loss spectra for a silver nanoshell under the conditions of figure 1(a) based upon equations (11)-(13), taking into account the SEELS contribution just discussed. The δ function in the latter is slightly broadened for clarity (the actual width of this peak will be essentially limited by both the laser width and the resolution of the energy analyser). Several photon energies around the plasmon peak have been considered, giving rise to substantial contributions comparable to the regular EELS intensity when the light is tuned to the plasmon energy. Finally, we compare in figure 5 calculated EEGS and EELS spectra for the same silver nanoshell as in figure 4. The probability of exciting one plasmon in EELS is comparable to the probability of gaining/losing one photon in EEGS when the nanoshell is illuminated with intensities as low as 10⁸ W m⁻², below the damage threshold of the materials involved, thus indicating that this effect is measurable using cw illumination. Incidentally, figure 5(b) shows excellent agreement between the full numerical results of equation (2) and the analytical expression resulting from considering only single-photon absorption (i.e. after integrating the delta function in equation (12)), with only small deviations at high energies originating in nonlinear multiphoton inelastic scattering.

Conclusions

In summary, the interaction between swift electrons and intense induced light fields mediated by plasmon-supporting nanostructures provides useful information on the sample, with great potential to combine unprecedented space, energy and time resolutions in a single spectral microscopy technique. Remarkably, we find the plasmonic enhancement of the induced field to lead to large energy-gain (and stimulated-loss) probabilities using moderate levels of incident light intensity, compatible with cw illumination without damaging the samples. The electron undergoes a complex temporal evolution in its interaction with the particles, which takes place over a time scale in the sub-femtosecond domain, thus opening a window to ultrafast phenomena that could eventually be explored by resorting to laser pulse shaping.

B.1. First order processes

B.1.1. Light extinction. This process is mediated by H_ph−pl, and the final state corresponds to one lost photon and one excited plasmon in mode l: |k₀ (N − 1)_i 1_l⟩. The transition rate from the initial state to all possible final states with an excited plasmon is evaluated assuming a spherically symmetric particle (d_l = d and Σ_l (ε̂_i · x̂_l)² = 1) and implicitly defining δ(ω_pl − ω_i) = (1/π) Im{1/(ω̃_pl − ω_i)}, where the imaginary part comes from ω̃_pl through the plasmon width Γ_pl.
Now, computing the light intensity as [29] I₀ = (c/2π)⟨E⁻E⁺⟩ = cħω_i N/V, we find the extinction cross-section, which corresponds to an effective polarizability. This expression is used in the following sections.

B.1.2. Electron energy loss. This process is mediated by H_e−pl, and the final state corresponds to an electron that has lost energy ħ(ε_{k₀} − ε_k) > 0 to excite the plasmon mode l: |k N_i 1_l⟩. The probability of losing the energy of one plasmon is obtained by multiplying the transition rate by the interaction time L/v, again considering a spherical particle. From here, we find the EELS probability per unit of energy loss ħω. Finally, using the prescription Σ_k → (L/2π)∫dk and working in the non-recoil approximation (i.e. ε_k − ε_{k₀} ≈ (k − k₀)v), we obtain equation (11) for the EELS probability. Noticing that ⟨k (N − 1)_i 0_l|H_e−pl|k₀ (N − 1)_i 1_l⟩ = g_{kk₀l} and ⟨k₀ (N − 1)_i 1_l|H_ph−pl|k₀ N_i 0_l⟩ = −i√N g_il, and considering the light incidence and polarization conditions (see figure 1(a)), we find the EEGS transition probability P_EEGS. Finally, equation (12) directly follows from evaluating the EEGS probability per unit of transferred energy, Γ_EEGS(ω) = ħ⁻¹ Σ_k P_EEGS δ(ε_k − ε_{k₀} − ω). Proceeding as in section B.2.1, with ⟨k N_i 1_j 0_l|H_ph−pl|k N_i 1_l⟩ = i g_jl and ⟨k N_i 1_l|H_e−pl|k₀ N_i 0_l⟩ = g_{kk₀l}, we obtain

P_CL = (2πL/ħ⁴v) Σ_{j,l} |g_jl g_{kk₀l}/(ε_{k₀} − ε_k − ω̃_pl)|² δ(ε_k − ε_{k₀} + ω_j).
A Distributed Strategy for Cooperative Autonomous Robots Using Pedestrian Behavior for Multi-Target Search in the Unknown Environment

Searching for multiple targets with swarm robots is a realistic and significant problem. The goal is to find the targets in the minimum time while avoiding collisions with other robots. In this paper, inspired by pedestrian behavior, swarm robotic pedestrian behavior (SRPB) was proposed. It considered many realistic constraints in the multi-target search problem, including limited communication range, limited working time, unknown sources, unknown extrema, arbitrary initial locations of robots, non-oriented search, and no central coordination. The performance of different cooperative strategies was evaluated in terms of the average time to find the first, the half, and the last source, the number of located sources and the collision rate. Several experiments with different target signals, fixed initial locations, arbitrary initial locations, different population sizes, and different numbers of targets were implemented. Numerous experiments demonstrated that SRPB had excellent stability, quick source seeking, a high number of located sources, and a low collision rate in comparison with various search strategies.

Introduction

Steering a group of autonomous robots to search for targets is a well-studied problem due to its numerous important applications, including search and rescue in hazardous environments [1], environmental monitoring [2], battlefield perception [3], locating gas leakage, odor source detection, etc. In these scenarios, the robots can sense the environment, collect and exchange measurements of the targets, and exploit this information to guide their movements. The goal is to find the targets in the minimum time while avoiding collisions with other robots. There are many algorithms to complete the task of a multi-target search. According to the richness of target information, the algorithms of the target search can be divided into three categories. The first is information-lack. In this category, the environment is much larger than the range of communication and sensing, and there is no information about the targets. The goal is to maximize environmental coverage while minimizing overlaps. Search patterns, random walks, search maps, digital pheromones [4], the Glasius bio-inspired neural network (GBNN) [5,6], and optimization algorithms are the typical approaches. Search patterns, such as zigzag and spiral [7], can effectively cover a given domain in less time. In glowworm swarm optimization (GSO), r_d^i(t) is the decision radius of glowworm i at time t; a glowworm i moves toward a neighbor j chosen with probability proportional to the luciferin difference,

p_ij(t) = (l_j(t) − l_i(t)) / Σ_{k∈U_i(t)} (l_k(t) − l_i(t)), (6)

the position is updated by

x_i(t + 1) = x_i(t) + s (x_j(t) − x_i(t)) / ||x_j(t) − x_i(t)||, (7)

and, finally, the decision radius is updated by

r_d^i(t + 1) = min{r_s, max{0, r_d^i(t) + β(n_t − N_i(t))}}, (8)

where n_t is the maximum size of a group and N_i(t) is the number of neighbors in U_i(t). In this paper, a random component was introduced into GSO to help robots explore the area when there is no neighbor.

FA: firefly algorithm [36]. It is similar to GSO. A robot in FA is influenced by all of the neighbors that are superior to it. The attractiveness is proportional to fitness, and it decreases as the distance between robots increases. If no neighbor is better than itself, the robot moves randomly. The movement of a firefly i attracted by a firefly j is described as

x_i(t + 1) = x_i(t) + β₀ e^{−γ d_ij²} (x_j(t) − x_i(t)) + ξ_i, (9)

where ξ_i is a random vector.
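The GSO rules just described (probabilistic attraction within the decision range followed by the adaptive radius update of equation (8)) can be condensed into a few lines. The sketch below follows the standard GSO formulation; the step size s and the other parameter values are assumptions.

```python
import numpy as np

def gso_step(pos, lucif, r_d, i, s=0.5, beta=0.08, n_t=5, r_s=3.0):
    """One GSO move for glowworm i.

    pos: (N, 2) positions; lucif: (N,) luciferin/fitness; r_d: (N,)
    decision radii. Returns the new position and decision radius of i.
    """
    d = np.linalg.norm(pos - pos[i], axis=1)
    nbrs = np.where((d < r_d[i]) & (lucif > lucif[i]))[0]   # brighter neighbors
    if len(nbrs) == 0:
        return pos[i], r_d[i]        # the paper adds a random move in this case
    w = lucif[nbrs] - lucif[i]
    j = np.random.choice(nbrs, p=w / w.sum())               # Eq. (6)
    step = s * (pos[j] - pos[i]) / np.linalg.norm(pos[j] - pos[i])  # Eq. (7)
    new_rd = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))  # Eq. (8)
    return pos[i] + step, new_rd
```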
Paper [28] shows that the Levy flight distribution is more effective than the Gaussian distribution in global searching, so in this paper ξ_i was a Levy flight random vector.

Random Walk Strategies

In an information-lack environment, the random walk is the most flexible strategy. Brownian motion and Levy flight search are commonly used. Brownian motion is efficient when the area is small or the number of robots is large. Levy flight search is used when the distribution of targets is sparse and the area is large. Since the width and length of the environment are larger than the maximum speed, Levy flight search is used for comparison with the other algorithms. LFS: Levy flight search [9]. In this model, the speed vector of the robots obeys a power-law distribution, and it can be obtained from Equation (10), where a ~ N(0, σ²) and b ~ N(0, 1) are two independent random variables with normal Gaussian distributions.

Problem Description

Consider N_t (N_t ≥ 1) sources distributed in a W × L environment. These sources emit some kind of measurable signal: an electromagnetic signal, a light signal [37], a thermal signal, an acoustic signal [38], even an odor signal [39], and so on. The positions of the sources, the spatial distribution of the signal field in the space, and the number of sources are unknown to the robots, but each robot can measure the strength of the signal emitted by the sources. Besides, the signal strength is maximal at the location of the sources, and robots can measure the signal strength at their own positions [40]. The goal is for a group of N_r (N_r ≫ N_t) autonomous robots to seek the sources simultaneously [41]. In this paper, some assumptions are made about this problem.

Assumption 1: the boundary of the environment is known. There are N_t (N_t ≥ 1) sources distributed randomly in the environment. Q = {q_1, q_2, …, q_n}, q_i ∈ R^{n×2}, is the set of position vectors of the sources. Different sources are represented as τ_Q = {1, 2, …, N_t}. Besides, the distance between two adjacent sources is more than 2R_c, where R_c is the communication range of the robots. It can be described as:

min ||q_i − q_j|| > 2R_c, ∀i, j ∈ τ_Q, i ≠ j (11)

In this paper, there was no method to distinguish different sources except the received signal strength. Combining the maximum strengths of the signal and the positions of the sources could help robots distinguish different sources. If the maximum strengths of the sources are known, different sources can be recognized by the maximum strengths and different locations. In reality, the extrema of the sources are unknown. For example, the power of the sources is unknown in sea rescue and battlefield awareness. Therefore, robots can only distinguish different sources by the signal strength taken at the robots' positions and the information of neighboring robots. Since a robot in swarm intelligence is attracted by neighbors who have high strengths, a source with low power is ignored when the distance between two sources is too close. If the maximum signal strengths of the two sources are the same, robots will oscillate between the two sources. The communication range of robots is always regarded as the attractive range in multi-target search. If the distance between two sources is greater than 2R_c, the above situations are avoided, and robots can seek different sources simultaneously. Assumption 2: in this paper, two signal distributions were considered.
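Equation (10), built from a ~ N(0, σ²) and b ~ N(0, 1), matches the usual Mantegna construction of Levy-flight steps; the sketch below assumes that construction, since the exact expression is not reproduced here.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, dim=2):
    """Draw one Levy-flight step via Mantegna's algorithm (an assumption
    about the form of Equation (10)). beta is the power-law exponent."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    a = np.random.normal(0.0, sigma, size=dim)   # a ~ N(0, sigma^2)
    b = np.random.normal(0.0, 1.0, size=dim)     # b ~ N(0, 1)
    return a / np.abs(b) ** (1 / beta)           # heavy-tailed step vector
```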
One is the isotropic signal (equation (12)), and the other is the anisotropic signal (equation (13)); in these expressions, R is the effective range of radiation, l_i^{x,y} is the signal strength of source i at position (x, y), Θ_{π/4} represents the π/4 rotation matrix, and r_i is the position vector. The signal distribution in an environment is the combination of the signals of all sources.

Figure 1. The locations of the four sources are at positions q1 (35, 25), q2 (25, 80), q3 (70, 80) and q4 (85, 35), respectively. In Figure 1a, R is equal to 10 m.

Assumption 3: there are N_r (N_r ≫ N_t) robots, represented as τ_R = {1, 2, …, N_r}. The positions of the robots are P = {p_1, p_2, …, p_m}, p_j ∈ R^{m×2}.

Assumption 4: that source k is located by robot i can be defined as:

∃i ∈ τ_R, ∀k ∈ τ_Q: ||q_k − p_i|| ≤ r_s (14)

Assumption 5: the communication and sensing ranges are smaller than the environment, and there is no method for long-range communication. It can be represented as W, L ≫ R_c > R_s > r_s, where W and L are the width and length of the environment, respectively, R_c is the radius of communication and R_s is the radius of sensing. In reality, there are many obstacles, both static and dynamic. In this paper, each robot considers the other robots within its sensing range as moving obstacles. The repulsive effect in Section 4 works within the sensing range. When a source is within the range of radius r_s of a robot, the source is defined as "located". Under these assumptions, the problem is to design a strategy by which robots can locate multiple sources and autonomously construct groups. The goal is to find the sources in the minimum time while avoiding collisions with other robots.

Proposed Algorithm

Pedestrian behavior is self-organized behavior. It supports efficient motion in subway/railway stations. Pedestrians exhibit different behaviors in the same environment at the same time: they try to reach the desired destination, they keep a limited distance from the other pedestrians, and they are also propelled toward their destination by the other pedestrians. Sometimes a large group of pedestrians divides into small groups [42]. Pedestrian behavior is used in distributed autonomous robotic systems because there are some similarities between pedestrians and swarm robots.
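Assumptions 1 and 4 translate directly into two geometric checks, sketched below: pairwise source separation above 2R_c (equation (11)) and the "located" predicate of equation (14).

```python
import numpy as np

def sources_well_separated(Q, R_c):
    """Eq. (11): every pair of sources is more than 2*R_c apart, so robot
    groups cannot be pulled across two sources at once."""
    Q = np.asarray(Q, dtype=float)
    for i in range(len(Q)):
        for j in range(i + 1, len(Q)):
            if np.linalg.norm(Q[i] - Q[j]) <= 2 * R_c:
                return False
    return True

def located_sources(Q, P, r_s):
    """Eq. (14): source k counts as 'located' once some robot is within r_s.
    Returns the indices of located sources."""
    Q, P = np.asarray(Q, float), np.asarray(P, float)
    d = np.linalg.norm(Q[:, None, :] - P[None, :, :], axis=2)  # N_t x N_r
    return np.where(d.min(axis=1) <= r_s)[0]
```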
Proposed Algorithm

Pedestrian behavior is self-organized, and it supports efficient motion in subway and railway stations. Pedestrians exhibit different behaviors in the same environment at the same time: they try to reach a desired destination, they keep a limited distance from other pedestrians, and they are also propelled toward their destination by other pedestrians. Sometimes a large group of pedestrians divides into small groups [42]. Pedestrian behavior is useful in distributed autonomous robotic systems because there are several similarities between pedestrians and swarm robots. Firstly, both swarm robots and pedestrians decide their next movement with limited observation and computation. Secondly, the behavior of pedestrians in subway/railway stations is similar to robot navigation [43] and target search; for example, in a search scenario, swarm robots do not know the specific location of the destination, but they must reach it with limited information. Thirdly, the information available, including limited visual information and partial information about the targets, is similar. Therefore, pedestrian behavior offers a promising way to solve the multi-target search problem.

There are many models of pedestrian behavior, such as decision field theory [44] and the social force model [45]. The social force model resembles swarm intelligence algorithms in that it introduces several forces to describe pedestrian behavior. Based on the social force model, swarm robotic pedestrian behavior (SRPB) is presented. In this paper, four rules determined the movement of robots: (a) the robot can exploit information about the sources and the environment; (b) the movement of the robot is influenced by other robots; (c) robots are attracted by other robots; (d) a large group of robots divides into small groups. In the following, the main effects of swarm robotic pedestrian behavior are introduced in detail.

Main Effects of Swarm Robotic Pedestrian Behavior

1. The robot can exploit information about the sources and the environment. In multi-source seeking, little information about the sources and environment is available, but two classes of information can be exploited. The first is the environmental size. When there is no measurable signal at a robot's position, or the robot is alone, the boundary of the environment can help the robot cover the given area. Virtual match points are introduced to help robots explore the environment; their number N_m is computed from the width W and length L of the environment and the communication range R_c, using the rounding operator round(), as sketched below. Each robot maintains its own set of virtual match points. When a robot is alone, the position of the attractive effect is the nearest point in that robot's virtual match point set; if the robot reaches this point without finding a neighbor, the point is excluded, and the robot chooses the next virtual match point and moves toward it. Virtual match points prevent revisiting already covered areas. In addition, when the repulsion radius is larger than the decision radius, the virtual match points within 2R_c of the robot are excluded, to avoid revisiting already located sources.
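The following is a minimal sketch of virtual match point management under the stated assumptions; the grid spacing of one point per communication-range cell and the helper names (`make_match_points`, `nearest_match_point`) are assumptions, since the defining equations are not reproduced in the text.

```python
import numpy as np

def make_match_points(W, L, R_c):
    """Lay out roughly round(W/R_c) x round(L/R_c) virtual match points,
    one per communication-range cell, so visiting them all covers the area."""
    nx, ny = round(W / R_c), round(L / R_c)
    xs = (np.arange(nx) + 0.5) * W / nx
    ys = (np.arange(ny) + 0.5) * L / ny
    return [np.array([x, y]) for x in xs for y in ys]

def nearest_match_point(p, points):
    """Return (index, point) of the nearest remaining virtual match point."""
    dists = [np.linalg.norm(p - m) for m in points]
    i = int(np.argmin(dists))
    return i, points[i]

# Each robot keeps its own copy; a point is removed once reached, and points
# within 2*R_c of the robot are dropped when it leaves a located source.
points = make_match_points(W=100.0, L=100.0, R_c=10.0)
i, target = nearest_match_point(np.array([5.0, 5.0]), points)
points.pop(i)
```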
The second class of information is the set of already visited positions and the corresponding signal strengths. When there is no neighbor with which to cooperatively estimate the gradient of the source, the individual history effect is introduced to help a robot move toward the strong-signal area. The individual history effect is defined with respect to p_i^pbest(t), the position with the best fitness value found by robot i up to time t; it provides a little gradient information about the target when the group is small. The individual history coefficient is expressed by Equation (19), where h_i(t) is the cognitive coefficient of robot i at time t, K_h^i(t) is the individual history coefficient of robot i at time t, l_i(t) is the fitness of robot i at time t, l_i^min and l_i^max are the minimum and maximum fitness of robot i during the motion, and γ_0 is the maximum cognitive coefficient, which balances local and global searching. N_d^i(t) represents the number of robots within robot i's decision radius at time t; these robots are defined as ψ_d^i(t) = {j : d_ij(t) < R_d^i(t); j ∈ τ_R}, where d_ij(t) is the distance between robot i and robot j at time t. ε is dimensionless.

When there is no measurable signal, robots cover the given area using the virtual match points, which prevent revisiting already covered areas. The update of the virtual match point set is distributed and independent: if one robot has visited an area, the other robots can still visit it. When a robot is alone, it explores the given area with the virtual match points and approaches the strong-signal area through the individual history effect. The individual history effect also helps robots form a group quickly, because all robots move toward a strong-signal area.

2. The movement of a robot is influenced by other robots. Unlike typical swarm intelligence algorithms, pedestrian behavior emphasizes the repulsive force. In subway/railway stations, a pedestrian keeps a safe distance from others when the crowd flow is small; when the flow is large, the pedestrian is propelled forward. This is captured by the repulsive effect: a robot keeps a safe distance from other robots, and it can also be propelled toward the destination. The pairwise repulsive force involves K_c^ij(t), the influence coefficient between robots i and j at time t, and d_ij(t), their distance at time t; ψ_r^i(t) is the set of robots within the repulsion range of robot i at time t, and R_r^i(t) is the repulsion radius of robot i at time t. The robots within the repulsion range influence the behavior of the focal robot: the closer a robot is to the focal robot, the larger the absolute value of the influence coefficient. The repulsive effect e_r^i(t) is the weighted sum of all repulsive forces, described by Equation (22), where r_i(t) is the weighted sum of the repulsive forces and p_j(t) is the position of robot j at time t. In traditional swarm intelligence algorithms, collision avoidance only keeps a safe distance between robots; in this paper, the repulsive effect both keeps the robot at a certain distance from other robots and propels it toward the destination.
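Since Equation (22) is not reproduced in the text, which states only that closer robots have a larger influence coefficient, the inverse-distance weighting in the following sketch is an assumption chosen to illustrate the repulsive effect.

```python
import numpy as np

def repulsive_effect(p_i, neighbor_positions, R_r, eps=1e-6):
    """Sketch of the repulsive effect: a weighted sum of pairwise repulsive
    forces from robots within the repulsion radius R_r. The (R_r - d)/d
    weighting is an assumption; it only reflects the stated property that
    influence grows as the distance shrinks."""
    e_r = np.zeros_like(p_i)
    for p_j in neighbor_positions:
        diff = p_i - p_j                       # points away from robot j
        d = np.linalg.norm(diff)
        if eps < d < R_r:
            e_r += (diff / d) * (R_r - d) / d  # larger when d is small
    return e_r

# Example: two close neighbors push the focal robot away from both.
p = np.array([0.0, 0.0])
others = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
print(repulsive_effect(p, others, R_r=5.0))
```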
Keeping a safe distance from other robots or being propelled toward the destination depends on the priority of the focal robot within a group. When robots form a group, each robot in the group calculates its priority coefficient: the maximum value means the robot has no neighbors, and the minimum value means it has many neighbors. In this paper, a robot considered as neighbors those robots located within its decision radius that have a higher fitness value than its own. The repulsive effect and the repulsion radius change elastically according to the priority coefficient. If the priority coefficient is large, the repulsive effect and repulsion radius become large, the swarm propels this robot toward the source, and the attractive effect becomes small; if the priority coefficient is small, the attractive effect becomes large and the repulsive effect becomes small. In this method, the priority coefficient is the criterion for changing the repulsion radius; it improves the efficiency of source seeking and avoids collisions between robots. In the priority coefficient, ρ is the maximum priority coefficient; it helps robots keep a safe distance from each other and propels them forward. K_r^i(t) is the collision coefficient of robot i at time t, l^gbest(t) and l^gworst(t) are the best and worst fitness within the local group, l_i(t) is the fitness of robot i at time t, and ε is dimensionless. Finally, the repulsion radius is updated between R_r^min and R_r^max, the minimum and maximum repulsion radii, respectively.

3. Robots are attracted by other robots. A pedestrian in an unknown environment tends to follow a person who has a specific objective or more information about the destination: when a person wants to reach an unknown place, the best way is to follow people who know it. In swarm intelligence algorithms, the attractive force determines the convergence of the algorithm. The attractive force is exerted by neighbors, and the decision radius limits the range of neighbors. A robot considers as neighbors those robots located within its decision radius that have a higher fitness value than its own, and it selects one neighbor through a probabilistic mechanism and moves toward it, as sketched below. The set of neighbors can be expressed as ψ_n^i(t) = {j : d_ij(t) < R_d^i(t); l_i(t) < l_j(t); j ∈ τ_R}: when a robot has higher fitness than robot i and its distance to robot i is less than R_d^i(t), it is a neighbor of robot i. The probability pc_ij(t) of robot i moving toward neighbor j depends on the fitness l_j(t) of that neighbor. Once a robot k is selected, the attractive effect is directed toward it. The attractive coefficient is influenced by the individual history coefficient and the priority coefficient. When the individual history coefficient is large, the robot is alone and the attractive effect is small; when the individual history coefficient becomes small, the robot has joined a group and the attractive effect becomes large, because it is attracted by other robots. When the priority coefficient becomes large, the repulsive effect becomes large and the attractive effect becomes small; when the priority coefficient becomes small, the attractive effect becomes large and the repulsive effect becomes small.
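The paper states only that a neighbor is chosen by a probabilistic mechanism based on fitness; the fitness-proportional (roulette-wheel) form below is therefore an assumption, since the expression for pc_ij(t) is not reproduced in the text.

```python
import numpy as np

def select_neighbor(l_i, neighbor_fitness, rng=None):
    """Roulette-wheel selection of one neighbor, with probability
    proportional to the neighbor's fitness (an assumed form of pc_ij)."""
    rng = rng or np.random.default_rng()
    f = np.asarray(neighbor_fitness, dtype=float)
    # Neighbors are, by definition, fitter than the focal robot.
    assert np.all(f > l_i)
    p = f / f.sum()
    return rng.choice(len(f), p=p)

# Example: the fitter neighbor (index 1) is selected more often.
k = select_neighbor(l_i=0.2, neighbor_fitness=[0.5, 0.9])
```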
4. A large group of pedestrians divides into small groups. At the exit of a subway/railway station, pedestrians tend to move toward the exits with fewer people. In a distributed system, however, robots cannot recognize the proper size of a group because of the limited communication and sensing ranges, and even if the group size were clear, robots could not decide who should drop out of the current group, because the movement of robots is independent. Therefore, the self-tuning decision radius is introduced; combining the decision radius and the repulsion radius adjusts the group size.

The decision radius R_d^i(t) of robot i at time t is updated according to N_n^i(t), the number of neighbors of robot i at time t, and N_max, the maximum number of neighbors; the change rate of the decision radius is governed by β. When N_n^i(t) is less than N_max, robot i is alone or has the best fitness in the group, so the decision radius increases. When N_n^i(t) is more than N_max, robot i has the lowest fitness value in the group, so the decision radius decreases sharply. Once the repulsion radius is larger than the decision radius, the virtual match points within 2R_c of the robot are excluded; robot i then becomes alone, and its decision radius increases slowly. In this way, the worst robot drops out of the group, and the chain effect between robots helps the group keep a suitable size.

The Equation of Velocity and Position

The velocity is updated by Equation (29) and the position by Equation (30), where w is the inertia coefficient; inertia provides reference information for the velocity and smooths the trajectories of the robots. v_i(t) is the velocity of robot i at time t, which never exceeds the maximum velocity v_m, and p_i(t) is the position of robot i at time t.
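Equations (29) and (30) are not reproduced above, so the additive combination of inertia with the weighted effects in the following sketch is an assumption; only the inertia term, the speed cap v_m, and the position integration are taken from the text.

```python
import numpy as np

def update_robot(p, v, effects, w=0.95, v_m=2.0, dt=1.0):
    """Sketch of the velocity/position update in the spirit of Equations
    (29)-(30). `effects` is the sum of the attractive, repulsive, and
    individual history effects, each already weighted by its coefficient."""
    v_new = w * v + effects        # inertia plus the combined effects
    speed = np.linalg.norm(v_new)
    if speed > v_m:                # the velocity never exceeds v_m
        v_new *= v_m / speed
    p_new = p + v_new * dt         # integrate the position
    return p_new, v_new

# Example: one step with a net effect pulling toward +x.
p, v = np.array([0.0, 0.0]), np.array([0.5, 0.0])
p, v = update_robot(p, v, effects=np.array([3.0, 0.0]))
```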
In Figure 2, we illustrate the velocity-updating process of a robot in SRPB to explain the four rules. The small solid-line circles in Figure 2 represent robots; the red dotted circle is the repulsion radius, the green dotted circle is the decision radius, the purple crosses are the virtual match points, and the sources are represented by green asterisks. Arrows of different colors indicate different effects. Note that the attractive effect takes two forms, depicted in different colors: in Figure 2a,d it is directed toward virtual match points, while in Figure 2b,c it is exerted by neighbors. In Figure 2a, when the robot is alone, three effects — inertia, the attractive effect, and the history effect — determine its motion. The robot selects the nearest point in its virtual match point set as the attractive point, and the history effect helps it approach the strong-signal area; there is no repulsive effect because no robots are within the focal robot's sensing range. When robots form a group, they calculate their priority coefficients according to their fitness within the group, and the repulsion radius is proportional to the priority coefficient. As shown in Figure 2b, the robot is influenced by all four effects: since it has the maximum fitness value in the group, its priority coefficient is large and its repulsion radius becomes large, the attractive effect becomes small, and the history effect is small because there are many robots within the sensing range; the other robots in the group propel the robot toward the destination. In Figure 2c, there is no repulsive effect, because the repulsion radius is small and no robots lie within it; the robot is attracted by one of its neighbors. In Figure 2d, the decision radius is tuned by the number of neighbors; when the repulsion radius becomes larger than the decision radius, the robot drops out of the group and searches for other sources. In addition, when robots leave a source, the virtual match points within 2R_c of the robot are excluded, which helps robots avoid revisiting already located sources.

The Pseudo-Code of SRPB

The proposed SRPB algorithm is shown in Algorithm 1; all rules of SRPB were implemented according to this pseudo-code.

Simulations and Analysis

This section discusses the proposed algorithm in several parts. First, we analyzed the effect of the parameters and performed experiments; the resulting parameter values were used in the comparison experiments. Second, swarm exploration behavior with different signals is shown. Third, several groups of experiments were carried out with different population sizes, different numbers of sources, and different distributions of initial positions. The performance of the different cooperative strategies — SRPB, PSO, RPSO, A-RPSO, GSO, FA, and LFS — was evaluated in terms of the average time to find the first, the half, and the last source, the number of located sources, and the collision rate. Finally, an analysis of how to implement this strategy in reality is given. The criteria were evaluated by the mean and standard deviation over many experiments, denoted by mI and dI, respectively; mI indicates the search efficiency of a strategy, while dI reflects its stability. All experiments were implemented with MATLAB R2017a on Windows 10. A collision is defined as follows: at some moment, the distance between any two robots is less than half of the minimum repulsion radius. The collision rate is the ratio of the number of collisions to the given time, and the average discovery rate is the ratio of located sources to total sources.
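To make these two metrics concrete, here is a minimal sketch of how they can be computed from a run's trajectory data; the helper names and the per-time-step accounting are assumptions, since the paper gives only the verbal definitions above.

```python
import numpy as np
from itertools import combinations

def collision_count(positions, R_r_min):
    """Count robot pairs closer than half the minimum repulsion radius
    at one time step (the paper's definition of a collision)."""
    return sum(1 for a, b in combinations(positions, 2)
               if np.linalg.norm(a - b) < R_r_min / 2)

def run_metrics(position_history, sources, r_s, T, R_r_min=2.0):
    """Collision rate and discovery rate for one run, where
    position_history is a list (one entry per time step) of robot positions."""
    collisions = sum(collision_count(ps, R_r_min) for ps in position_history)
    located = {k for k, q in enumerate(sources)
               for ps in position_history for p in ps
               if np.linalg.norm(q - p) <= r_s}
    return collisions / T, len(located) / len(sources)
```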
Parameter Analysis for the SRPB Strategy

Four parameters influence the search efficiency of the SRPB strategy: w, γ_0, ρ, and β. Each parameter was analyzed separately and sequentially, with the other three fixed. The experiments were implemented in a fixed scenario, shown in Figure 1a, with 20 robots and 4 sources.

First, the inertia weight w was analyzed with γ_0 = 0.48, ρ = 0.8, and β = 0.3. Inertia provides reference information for the velocity and smooths the trajectories of robots. In most swarm intelligence algorithms the inertia weight lies within (0, 1) and needs to be large, so values within (0.5, 0.98) were used to analyze the algorithm. Before analyzing the inertia weight, the other parameters were already set to appropriate values. γ_0 balances local and global searching: exploring as many sources as possible is better when the number of sources is unknown, but the efficiency of source seeking depends on local searching, so γ_0 = 0.48 was taken. ρ provides a repulsive force that propels a robot forward and keeps a safe distance, so it must be large; when the repulsive effect acts as a thrust, the attractive effect is small because there are no neighbors, and it plays little role in the velocity, so ρ = 0.8 was taken. Finally, β governs the tuning of the decision radius, which determines which robot should drop out of a group: when the number of neighbors exceeds a threshold, the decision radius decreases sharply and then increases slowly. If β is too large, the decision radius increases quickly and the robot cannot drop out of the group, because it remains attracted by the neighbors within its decision radius; β also cannot be too small, or robots move far away from nearby sources because they cannot cooperate with other robots. So β = 0.3 was taken.

As can be seen from Figure 3, as w increases, the average number of located sources increases, the collision rate becomes smaller, and the average time to locate sources decreases. This shows that the inertia weight should be large and that inertia plays an important role in the motion, providing reference information for the velocity, especially when there is no information about the environment. Moreover, when w is greater than 0.9, the collision rate, the average number of located sources, and the average time to locate sources remain unchanged. We conclude that w should be large.
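The one-at-a-time protocol used above can be expressed compactly; the following sketch is an illustration only, with `run_once` standing in for a hypothetical SRPB simulator that returns the criteria of a single run (none of these names come from the paper).

```python
import numpy as np

defaults = dict(w=0.95, gamma0=0.48, rho=0.8, beta=0.3)

def sweep(run_once, param, values, runs=400):
    """One-at-a-time sweep: vary `param`, keep the other three fixed, and
    report the mean (mI) and standard deviation (dI) over repeated runs.
    `run_once(**cfg)` is any simulator returning a tuple of criteria, e.g.
    (located_sources, collision_rate, time_to_last_source)."""
    results = {}
    for val in values:
        cfg = dict(defaults, **{param: val})
        stats = np.array([run_once(**cfg) for _ in range(runs)])
        results[val] = (stats.mean(axis=0), stats.std(axis=0))  # mI, dI
    return results

# Example with a trivial stand-in simulator:
fake = lambda **cfg: (np.random.poisson(3), np.random.rand(), 30 + np.random.randn())
table = sweep(fake, "w", np.arange(0.5, 0.99, 0.05), runs=10)
```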
Second, the parameter β, which varies within (0.1, 0.9), was analyzed with γ_0 = 0.48, ρ = 0.8, and w = 0.95 in the same scenario. The performance with different β is shown in Figure 4.
The change rate of the decision radius is governed by β. When the number of neighbors exceeds a threshold, the decision radius decreases sharply; once the repulsion radius is larger than the decision radius, the robot becomes alone and moves to other areas. If β is too large, the robots within the decision radius of the focal robot attract it all the time, and the group size cannot be adjusted effectively. As can be seen from Figure 4, the average number of located sources, the collision rate, and the average time to find the sources all worsen as β increases.

Third, the parameter γ_0 was analyzed with β = 0.3, ρ = 0.8, and w = 0.95. γ_0 balances local and global searching: when γ_0 is greater than 0.5, robots perform global searching first, while local searching takes priority when γ_0 is less than 0.5. Figure 5 shows that a large γ_0 performs well, because the history effect helps robots approach the strong-signal area; the history effect plays an important role in seeking a source, especially when a robot is alone. There are two situations in which robots become alone. At the initial placement, some robots may be alone because of their arbitrary locations, and they need to approach a source quickly, which requires a large history effect. When a robot drops out of a group, it tends to explore other sources; a small history effect helps the robot drop out of the group, since otherwise it would always stay with that group. A small history coefficient helps robots divide into several small groups, which also lets them find as many sources as possible, because the number of located sources is related to the number of groups. In reality, since the number of sources is unknown, γ_0 is set smaller than 0.5 to explore more sources.
Finally, the parameter ρ was analyzed with β = 0.3, γ_0 = 0.48, and w = 0.95. ρ determines the role of the repulsive effect, which plays two roles: keeping a safe distance from other robots and acting as a thrust. As shown in Figure 6b, when ρ is small, robots cannot avoid collisions. When the repulsive effect acts as a thrust, it should be greater than the attractive effect, because the two effects are sometimes contradictory; therefore, in this paper ρ was greater than 0.5. When the repulsive effect strongly influences a robot, the attractive effect is small, and the repulsive effect propels the robot forward.
Algorithms for Comparison

In this part, the parameters of the comparison algorithms are given. Considering fuel consumption, robots can only work for a limited time, determined by the maximum speed and the width and length of the environment. For example, if the environment is 100 m × 100 m and the maximum speed of the robots is 2 m/s, each robot can work for 100 s; in this way, a single robot cannot visit the complete environment. The goal is to minimize the average time to find the sources and to maximize the number of located sources. In all experiments, the communication radius is 10 m, the minimum repulsion radius is 2 m, the maximum repulsion radius is half of the communication radius, and the maximum speed is 2 m/s. All algorithms and their parameter configurations are as follows.

PSO: multi-target particle swarm optimization. In [32], multi-target search was considered. The parameters are: inertia weight w = 0.9, cognition coefficient c_1 = 1.0, social coefficient c_2 = 1.0.

RPSO: robotic particle swarm optimization. This method has been used for single-target search, and it can be applied to multi-target search when the gbest in RPSO is taken as the location of the best robot within the local swarm. All parameters were tuned under the same experimental conditions as in part 5.1: inertia weight w = 0.95, cognition coefficient c_1 = 1.0, social coefficient c_2 = 2.0, obstacle avoidance coefficient c_3 = 2.0.

GSO: glowworm swarm optimization. This algorithm was used for multi-target search in paper [29]. The parameters are: luciferin enhancement constant γ = 0.6, maximum group size n_t = 4, and β = 0.08.

Swarm Exploration Behavior with Different Signals

In this part, the swarm exploration behavior with different signals is shown. These experiments were implemented with four sources and 20 robots in a 100 m × 100 m environment. The four sources are at positions q_1 (35, 25), q_2 (25, 80), q_3 (70, 80), and q_4 (85, 35), respectively; their distribution is shown in Figure 1. We give the robots' trajectories from their initial locations to the extrema, the arbitrary initial distribution of the robots, and their final locations; the robots' trajectories to the different sources are also shown separately. The limited working time is 100 s, and the initial locations of the robots are arbitrary. In Figures 7 and 8, the purple crosses are virtual match points, the sources are represented by green asterisks, a small circle marks the final location of a robot, a pentagram marks its initial location, and the dotted lines are the robots' trajectories, with different colors for different robots. First, with the isotropic signals shown in Figure 1a, the times to find the first, the half, and the last source are 10 s, 23 s, and 23 s, respectively; the collision rate is 0.26, and the number of located sources is 4. The robots' trajectories are shown in Figure 7a. With the anisotropic signals shown in Figure 1b, the times to find the first, the half, and the last source are 7 s, 20 s, and 26 s, respectively; the collision rate is 0.29, and the number of located sources is 4. The robots' trajectories are shown in Figure 8a.
Stability between Different Algorithms

In source seeking, different population sizes, different numbers of sources, and the size of the environment influence the performance of swarm intelligence algorithms. Different initial position distributions of the robots and the random effects within the algorithms also affect the stability of source seeking. Some random parameters in a swarm intelligence algorithm can maintain a diversity of solutions, but an algorithm with too many random effects is inefficient and unstable in source seeking. In practice, stability requires that a strategy can work from arbitrary initial locations and finds approximately the same number of targets from a fixed initial location. In this part, source-seeking experiments with a fixed initial location and the same sources were implemented in the environment shown in Figure 1. First, 20 robots were randomly placed in the environment, and then experiments with that same initial placement were repeated 400 times. The mean (mI) and standard deviation (dI) of the experiments under the different criteria are used to evaluate the stability of the algorithms.

Figure 9 gives the error histograms for the different criteria; an algorithm with a high standard deviation shows different performance across repeated experiments under the same conditions. As shown in Figure 9b, the average number of located sources (mI) is similar for SRPB, PSO, RPSO, A-RPSO, and FA, but SRPB has a slight advantage over the other algorithms. Moreover, SRPB has the lowest standard deviation (dI) of all algorithms, whereas PSO, RPSO, A-RPSO, FA, GSO, and LFS have high standard deviations, meaning that these algorithms are unstable and strongly influenced by random effects. According to the stability and the average number of located sources, shown in Figure 9a, the strategies can be sorted as SRPB > RPSO > PSO ≈ FA > A-RPSO > LFS > GSO; SRPB is more stable than the other algorithms. In Figure 9b, according to the collision rate, SRPB is better than all strategies except LFS, and the ordering is LFS > SRPB > A-RPSO ≈ RPSO > PSO ≈ FA > GSO.
In Figure 9c, the performance of SRPB, PSO, RPSO, and A-RPSO in terms of the time to find the first and the half sources is similar. SRPB is better than the other strategies in terms of the time to find the last source, and it shows great stability; the other strategies have high standard deviations, so the algorithms can be sorted as SRPB > RPSO > PSO > A-RPSO > FA > GSO > LFS. In conclusion, SRPB is more stable than the other algorithms under the same conditions and performs better overall.

Different Population Sizes

In this part, experiments with different population sizes and different initial position distributions of the robots were implemented in the environment shown in Figure 1a. Eight tests were carried out with 12, 15, 18, 20, 25, 30, 40, and 50 robots in turn, with a working time of 200 s. Each test was implemented 400 times, and the initial positions of the robots were updated each time. The performance of SRPB was compared with PSO, RPSO, A-RPSO, GSO, FA, and LFS; dI is again the standard deviation over the experiments.

Figure 10 shows contrast curves of the collision rate of the search strategies. Figure 10a shows that the collision rate of all algorithms grows with increasing population size. When the population exceeds 20 robots, the collision rates of PSO, RPSO, A-RPSO, FA, and GSO are greater than 80%. SRPB also shows obvious growth, but its collision rate is lower than those of PSO, RPSO, A-RPSO, FA, and GSO; the collision rate of LFS is the lowest, owing to its lack of cooperation. Moreover, PSO, RPSO, A-RPSO, FA, and GSO have large standard deviations when the population is below 30 robots, while the standard deviation of the collision rate of SRPB remains unchanged; SRPB is therefore more stable than the other strategies. According to the collision rate, the strategies can be sorted as LFS > SRPB > GSO > A-RPSO ≈ RPSO > FA > PSO.
As can be seen from Figure 11, SRPB is superior to the other algorithms when the population size is below 30. When the population exceeds 30, the algorithms find approximately the same number of sources. Figure 11b also shows that SRPB is only slightly influenced by the different initial positions of the robots and is more stable than the other strategies; hence the ordering SRPB > RPSO > FA > PSO ≈ A-RPSO. Incidentally, LFS is superior to GSO when the population is below 30; once it exceeds 30, GSO outperforms LFS, because GSO cannot move without neighbors and is suited to large populations.
According to the time to find the last source, shown in Figure 12, SRPB is superior to the other algorithms, which can be sorted as SRPB > RPSO > PSO ≈ A-RPSO > FA > GSO > LFS. In conclusion, the proposed SRPB algorithm performs better than the other algorithms and has excellent stability. For all algorithms, as the number of robots increases, the time to find the last source decreases, while the number of located sources and the collision rate gradually increase.

Different Numbers of Targets

In this part, the search efficiency of the comparison algorithms with various numbers of targets is investigated. Six tests were carried out with 4, 6, 8, 10, 12, and 15 targets in turn, in a 300 m × 300 m environment with 50 robots, each able to work for 300 s. Experiments with different initial robot positions were implemented 400 times in every test.

In Figure 13, the collision rate of the algorithms remains essentially unchanged across different numbers of targets. The results in part 5.5 show that the collision rate of SRPB is 83% for 50 robots in a 100 m × 100 m environment, while in part 5.6 it is 56% for 50 robots in a 300 m × 300 m environment. We infer that the collision rate is determined by the environment and the population size: once these are fixed, the collision rate does not vary with the number of targets. According to the collision rate, the strategies are sorted as LFS > SRPB > GSO > RPSO ≈ A-RPSO ≈ FA > PSO. The contrast curves of the discovery rate of the strategies are given in Figure 14.
In the same environment, the average discovery rate of SRPB is greater than that of the other strategies, with PSO the second best. As the number of targets increases, the average discovery rate decreases gradually; PSO, RPSO, A-RPSO, FA, GSO, and LFS decline more sharply than SRPB. We infer that the number of located sources is related to the number of robots, which suggests the following reasoning. In the ideal case, fifty robots could simultaneously find fifty targets in an oriented search; of course, this only fits the situation in which a robot that has found a target does not search for further targets. In a non-oriented search, the number of located sources is less than the number of robots, because a source is located by a group of robots. In this paper, the maximum group size was four, which means that fifty robots can form at least twelve groups. When targets are abundant and not sparsely distributed, fifty robots running SRPB can therefore find at least twelve targets; as shown in Figure 14, fifty robots in SRPB find an average of 12.5 targets. Finally, according to the discovery rate, the algorithms can be sorted as SRPB > PSO > RPSO ≈ A-RPSO ≈ FA > LFS > GSO. According to the time to find the first target, the algorithms can be sorted as SRPB ≈ PSO > RPSO ≈ A-RPSO > FA > LFS > GSO, and Figure 15c shows SRPB > PSO > RPSO > A-RPSO > FA > LFS ≈ GSO. Furthermore, Figure 15b,d show that SRPB is more stable than the other algorithms because of its low standard deviation.
As the number of targets increases, SRPB gains a further advantage over the other algorithms in terms of the time to find the half targets. Table 1 shows that when the number of targets exceeds a certain value, the robots in some algorithms cannot find the last target; we conclude that the number of located sources is related to the population size of the robots.

All in all, the proposed algorithm finds as many targets as possible, with excellent stability, quick source seeking, and a low collision rate. The overall performance of SRPB is better than that of PSO, RPSO, A-RPSO, GSO, FA, and LFS.

Practical Application Analysis

As shown above, the comparisons reveal that the SRPB strategy outperforms the other algorithms. Here we analyze how the strategy could be implemented on real robots. Multi-source seeking is a significant problem.
In reality, there are many applications of source seeking. For example, in maritime rescue, several people may carry wireless transmitters to call for help. Autonomous unmanned aerial vehicles and unmanned surface vehicles can be used in this scenario to locate them. Since the radio signal is non-oriented, robots can locate a person using the received signal strength taken at the robots' positions. In addition, the limited communication range constrains cooperation, the robots cannot be remotely controlled, fuel consumption limits the working time, and the unknown number of sources and unknown extrema make the task difficult. The method in this paper can be implemented in this situation: each robot updates its velocity and position by Equations (29) and (30) and stores a set of virtual match points. Assume the width and length of the environment are W and L, the working time is T, and the number of robots is N_r. In simulation, the computational complexity of SRPB is O(T · N_r · W · L) and the space complexity is O(N_r · W · L); in reality, each robot determines its motion by itself, so the computational and space complexity per robot are both O(W · L). SRPB can therefore be implemented on a general control processor such as an ARM, and there is enough storage space for the environmental information, because the number of virtual match points is small. Each robot is equipped with a receiving antenna to measure the signal strength; for other types of sources, the corresponding sensors are fitted. Moreover, only the position and the corresponding signal strength need to be exchanged between robots, so the communication load is also small. In conclusion, the strategy can be implemented in practice, because all requirements — processor, communication, sensing, and scenario — are met.

Conclusions

In this paper, we reviewed target search algorithms and gave a classification. Addressing the problem of seeking multiple weak sources with swarm robots in an unknown environment, a model of multiple targets with different signals was given. Inspired by pedestrian behavior in subway/railway stations, a novel cooperative strategy, swarm robotic pedestrian behavior (SRPB), was proposed.
Conclusions In this paper, we reviewed target search algorithms and gave a classification. Aiming at the problem of seeking multiple weak sources with swarm robots in an unknown environment, a model of multiple targets with different signals was given. Inspired by pedestrian behavior in subway/railway stations, a novel cooperative strategy, swarm robotic pedestrian behavior (SRPB), was proposed. It considered many realistic constraints, including limited communication range, limited working time, unknown sources, unknown extrema, arbitrary initial locations of the robots, non-oriented search, and no central coordination. The robots' trajectories from their initial locations to the extrema showed that SRPB can effectively complete the task of multiple-source seeking. The performance of SRPB was evaluated in terms of the average time to find the first, half, and last sources, the number of located sources, and the collision rate. Several experiments showed that SRPB had the highest efficiency and the best stability among all compared strategies, together with a low collision rate and a high number of located sources. Moreover, numerous experiments demonstrated that the collision rate is related to the environment size and the number of robots, and that the number of located sources is related to the number of robots. Finally, an analysis of how to implement this strategy in reality was given to support further research.
16,836.4
2020-03-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Preservation of Methane Hydrates Prepared from Dilute Electrolyte Solutions The anomalous or self-preservation of methane hydrate at atmospheric pressure and temperatures below the ice point was investigated to determine whether this phenomenon might have applications in the storage and transportation of natural gas. Particular attention was paid to the effects of dilute electrolytes, as the presence of impurities in water is unavoidable in commercial transportation processes. The presence of electrolytes had a marked effect on the decomposition kinetics of methane hydrate at temperatures between 243 and 269 K. It was also found that chloride and sulfate ions may exhibit greater effects than do sodium and magnesium ions. Introduction Gas hydrates are crystalline solids that are formed by the association of water and a gas under conditions of relatively high pressure and low temperature. An important practical feature of gas hydrates is that, depending on the gas, a given volume of gas hydrate can contain up to about 150 times the mass of gas present in an equivalent volume of the pure gas in the standard state. The storage and transportation of natural gas in the form of its hydrate has therefore recently been suggested as a practical measure [1][2][3][4]. As a general rule, gas hydrates must be stored under conditions in which they are thermodynamically stable. In practice, however, when methane hydrate is subjected to temperatures between about 243 and 270 K at atmospheric pressure, it becomes metastable and continues to exist for a certain time at temperatures that are 50-80 K above its nominal equilibrium temperature (193 K) [5]; at temperatures above or below this anomalous region of stability, however, methane hydrate decomposes at rates that are orders of magnitude greater than those in the anomalous region. It has been postulated that a thin film of ice may form on the surface of the hydrate during its partial decomposition and that this film serves as a barrier to subsequent gas diffusion [6][7][8][9]; however, Stern et al. speculated that anomalous preservation is not primarily the result of encapsulation by ice [5,10]. Recently, an "ultrastability" of methane and natural gas hydrates prepared from dilute aqueous surfactant solutions was reported by Zhang and Rogers [11], who claimed that only 0.04% of the stored gas in a methane + ethane + propane hydrate was evolved during 256 hours at 268 K and atmospheric pressure. According to these authors, ice shielding is not the primary mechanism of anomalous preservation, and the enhancement of preservation by the use of additives may be a practical possibility. Because waters that contain various amounts of impurities, such as river water, are likely to be used for the commercial production of natural gas hydrates for the purpose of transportation, we examined the effects of dilute electrolytes on the kinetics of decomposition of methane hydrate. Experimental Section In this study, sodium chloride (Wako), sodium sulfate (Wako), and magnesium chloride hexahydrate (Aldrich) were used as electrolytes. These salts were dissolved in distilled water (Wako). Methane hydrate was prepared in a high-pressure cell with internal mixing baffles that moved vertically across the gas-liquid interface. The cell was essentially the same as that reported previously [12]. The walls of the cell were equipped with a circulating water jacket to maintain the cell temperature at 280 K.
Pure water or an aqueous electrolyte solution was introduced into the cell, which was then pressurized with methane. Mixing was started, and the pressure gradually decreased as methane hydrate formed. The pressure was maintained in the range 5.8-6.2 MPa by intermittently supplying methane. The hydrate crystals formed at the gas-liquid interface and on the cell wall. The crystals interfered with the mixing motion, and the motion finally stopped; the consumption of methane slowed greatly at this point. When the consumption of methane had almost ceased, the residual liquid in the cell was discharged through a tap at the bottom of the cell. The cell pressure decreased to around 5.8 MPa, and hence methane was supplied to the cell. The volume of the discharged water was about one sixth of that of the supplied water, and the electrolyte concentration of the discharged water was 120-140% of that of the supplied water. The electrolytes were thus largely excluded from the hydrate crystals, and the remaining electrolytes were probably located on the surfaces and grain boundaries of the final hydrate product [13]. The remaining contents of the cell were kept at the same pressure and temperature for a further 2-3 days before the cell was cooled further to 253 K for 1 day and then depressurized to atmospheric pressure. The methane hydrate produced by this procedure was partly granular and partly compact. No differences were found by visual observation between hydrates prepared from pure water and those prepared from dilute electrolytes, nor were differences found in the amount and rate of methane consumption recorded during sample preparation. The decomposition experiments were conducted under conditions that simulated the transportation of hydrate powder in a bulk cargo carrier. The experimental apparatus is shown in Figure 1. Powdered methane hydrate with an average diameter of about 0.5-1 mm, prepared by grinding the multicrystalline aggregates at 253 K and atmospheric pressure, was used as the initial material in all experiments in order to eliminate any dependence of the decomposition kinetics on grain size. Immediately after grinding, about 2-4 g of the resulting powder was packed into a bag that was placed in a glass vessel. The temperature of the vessel was controlled at the experimental temperature (243-269 K). The bag swelled with the methane that evolved as a result of decomposition of the sample; this caused an identical volume of air to be discharged through a vent in the vessel. The volume of the discharged air was measured by means of a tipping-bucket-type gas meter (Japan Flow Control MGC). To obtain the total gas content of the sample, the residual hydrate was decomposed completely at the end of each isothermal experiment (normally of 336 hours duration) by raising the temperature above the ice point. Results and Discussion Figure 2 shows the temporal variations in the residual fraction of methane in samples kept at various constant temperatures. The residual fraction is based on the total gas occluded at the start of the isothermal decomposition experiments; this eliminates any losses incurred during collection from the high-pressure cell and grinding at 253 K. In all experiments, the decomposition slowed as time elapsed. To check the reproducibility of the experiments, two runs were conducted for the case of 1.0 mol/m³ sodium chloride at 263 K. A significant difference was seen in the decomposition rates, and thus minor fluctuations in the results cannot be interpreted on the basis of the present experiments.
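To make the residual-fraction bookkeeping concrete, the sketch below converts cumulative gas-meter readings into a residual-fraction series, using the final warm-up release to fix the total gas content. The variable names are assumptions consistent with the description above, not the authors' actual data pipeline.

def residual_fraction(v_evolved, v_final):
    """v_evolved: cumulative evolved-gas volumes logged by the tipping-bucket meter;
    v_final: gas released when the residual hydrate is fully decomposed at the end."""
    v_total = v_evolved[-1] + v_final      # total gas occluded at the start of the hold
    return [(v_total - v) / v_total for v in v_evolved]

The sample has evolved 50% of its gas once this series crosses 0.5.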
In the case of methane hydrates prepared from pure water, the rate of decomposition decreased on reducing the temperature from 269 to 258 K (Figure 2(a)). Note that an exceptionally rapid decomposition occurred at 253 K, but this slowed again as the temperature approached 243 K. Such a nonmonotonic temperature dependence of the decomposition kinetics has already been reported by Stern et al. [5], although the details of their results differ from ours. They found two minima in the decomposition rate, at 269 and 249 K, and one broad maximum at around 255 K. Apart from the differing temperature dependencies, the ranges of the mean decomposition rates unexpectedly match ours. The times required for the samples to evolve 50% of their gas content were reported to vary from 3 hours (256 K) to 30 days (269 K), and those in our results were between 33 hours (253 K) and 28 days (258 K), from extrapolation based on the mean rate between 150 and 300 hours (see the sketch after this discussion). We have not yet clarified the cause of this discrepancy, but at present we regard it as arising from differences in experimental conditions, such as the size of the hydrate granules or the purity of the water that was used. Figure 2(b) shows the results for the methane hydrates prepared from 1.0 mol/m³ aqueous sodium chloride. The rate of decomposition decreased with decreasing temperature from 269 to 253 K, but this trend reversed between 253 and 243 K. On the other hand, methane hydrates prepared from 0.50 mol/m³ aqueous magnesium chloride showed monotonic decomposition behavior, with the rate of decomposition increasing with increasing test temperature (Figure 2(c)). A peculiar feature of the results for samples of hydrate prepared from 0.50 mol/m³ aqueous sodium sulfate solution is a very weak temperature dependence (Figure 2(d)). Note that the residual fractions at the various temperatures converge at around 150 hours, except in the case of 258 K, even though the initial rates of decomposition differ from one another. These results clearly show that the decomposition of methane hydrate is significantly affected by the presence of dilute electrolytes, but that this effect is not monotonic. Figure 3(a) shows the fractions of methane preserved for 150 hours at various temperatures in samples prepared from pure water, 1.0 mol/m³ sodium chloride, 0.50 mol/m³ magnesium chloride, and 0.50 mol/m³ sodium sulfate solutions. An error bar corresponding to the case of sodium chloride at 263 K displays the difference between the results of the two runs. The exceptionally rapid decomposition of the methane hydrates from pure water at 253 K was markedly suppressed by the addition of any of the salts. On the other hand, the decomposition at the highest temperature (269 K) was accelerated by the presence of sodium chloride or magnesium chloride. In comparison with the sodium sulfate system, the values for the samples from sodium chloride and magnesium chloride solutions at the same temperature show a passable resemblance to one another; we therefore surmise that chloride and sulfate ions have a greater effect on the decomposition of methane hydrate than do sodium and magnesium ions.
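As an illustration of the extrapolation mentioned above, the following sketch estimates the time to evolve 50% of the gas from the mean decomposition rate between 150 and 300 hours. A constant-rate extrapolation is assumed purely for illustration; it is not necessarily the authors' exact procedure.

def t50_from_mean_rate(f150, f300):
    """f150, f300: residual fractions at 150 h and 300 h (0 < f < 1)."""
    rate = (f150 - f300) / 150.0           # mean loss of residual fraction per hour
    if rate <= 0:
        return float("inf")                # no measurable decomposition in the window
    return 150.0 + (f150 - 0.5) / rate     # hours until the 50% level is crossed

For example, f150 = 0.95 and f300 = 0.92 give a rate of 2e-4 per hour and t50 = 2400 h, i.e., 100 days.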
Figure 3(b) shows the residual methane fraction after 150 hours for samples prepared from three sodium chloride solutions of differing concentrations. The temperature dependencies of the residual fractions for the three concentrations show a marked similarity to one another. Whereas minor reversals are seen for all concentrations, the residual fractions generally increase with decreasing temperature from 269 to 253 K, with a maximum at about 253 K for each of the concentrations. These results suggest that the effects of ion concentration on the decomposition kinetics are rather weak at low concentrations. To summarize, the anomalous longevity of methane hydrates is markedly affected by the presence of low concentrations of ions in the water from which they are formed. These findings have both practical and scientific implications. From the point of view of practical applications, they show that it is possible to enhance the longevity of gas hydrates by preparing them from dilute electrolyte solutions. Another outcome of the study is that an expanded examination of the effects of various additives could give further insights into the mechanism of anomalous preservation. Virtually no changes were found in the properties of the hydrates other than the decomposition kinetics (e.g., phase equilibria, gas content, formation rate, and visual appearance), and thus the author has no trustworthy explanation at present for the mechanism of the effect of dilute electrolytes on anomalous preservation. Conclusion The decomposition behavior of methane hydrates prepared from pure water or dilute electrolyte solutions was studied under isothermal conditions between 243 and 269 K at atmospheric pressure. In the case of samples prepared from pure water, the decomposition rate showed a complicated dependence on temperature, as previously reported. This temperature dependence was altered by the presence of low concentrations of sodium, magnesium, chloride, and sulfate ions, and the effect of these ions was not monotonic. In particular, the temperature dependence was almost eliminated in the case of sodium sulfate, and it is suggested that chloride and sulfate ions may exhibit greater effects than do sodium and magnesium ions. Figure 1: Schematic of the experimental apparatus for the decomposition of methane hydrate. Figure 3: Residual fraction of methane in hydrate samples after an isothermal hold for 150 hours. (a) Samples prepared from pure water, 1.0 mol/m³ aqueous sodium chloride, 0.50 mol/m³ aqueous magnesium chloride, or 0.50 mol/m³ aqueous sodium sulfate. (b) Samples prepared from sodium chloride solutions of various concentrations.
2,773.4
2009-10-21T00:00:00.000
[ "Chemistry" ]
A simulation study of short channel effects with a QET model based on Fermi–Dirac statistics and nonparabolicity for high-mobility MOSFETs In this paper, the quantum confinement and short channel effects of Si, Ge, and In0.53Ga0.47As n-MOSFETs are evaluated. Both bulk and double-gate structures are simulated using a quantum energy transport model based on Fermi–Dirac statistics. Nonparabolic band effects are further considered. The QET model allows us to simulate carrier transport including quantum confinement and hot carrier effects. The charge control by the gate is reduced in the Ge and In0.53Ga0.47As bulk n-MOSFETs due to the low effective mass and high permittivity. This charge-control reduction induces the degradation of short channel effects. In double-gate structures, different improvements of drain-induced barrier lowering (DIBL) and subthreshold slope (SS) are seen. The double-gate structure is effective in the suppression of DIBL for all channel materials. The SS degradation depends on the channel material even in the double-gate structure. Introduction The scaling of the conventional bulk Si-MOSFET is approaching its fundamental limit due to the increase of off-leakage current and short channel effects [1]. Further performance improvements require new channel materials such as Ge and III-V compound semiconductors [2] and new device structures such as FinFETs [3] and nanowire gate-all-around structures [4]. Performance analysis of single- and multi-gate MOSFETs on high-mobility substrates and on Si is an important issue. A number of authors have focused on numerical and theoretical studies of such devices, using self-consistent Poisson/Monte Carlo simulations [5,6], comprehensive semiclassical multisubband Monte Carlo simulations [7], self-consistent solutions of the Schrödinger/Poisson equations [6,8,9], quantum-corrected Monte Carlo simulations [10], and atomistic Schrödinger/Poisson equations in the non-equilibrium Green's function formalism [11]. This paper describes a performance analysis of Si, Ge, and In0.53Ga0.47As n-MOSFETs using a quantum energy transport (QET) model based on Fermi-Dirac statistics and nonparabolicity. The QET model is viewed as one of the hierarchy of quantum hydrodynamic models [12], which allows simulations of carrier transport including quantum confinement and hot carrier effects [13]. The simulation study focuses on the analysis of quantum confinement and short channel effects. Both bulk and double-gate n-MOSFETs are simulated. The paper is organized as follows: In Sect. 2, we describe a four-moments QET model based on Fermi-Dirac statistics and nonparabolicity. Section 3 presents numerical simulations of the QET model. The results are further compared with those calculated by quantum drift-diffusion (QDD) and classical energy transport (ET) models. The analysis of short channel and quantum confinement effects of Si, Ge, and In0.53Ga0.47As n-MOSFETs for bulk and double-gate structures is presented, and the dependence of short channel effects on the channel materials is discussed. Section 4 concludes this paper. 4-moments QET model based on Fermi-Dirac statistics For the simulations of quantum confinement transport with hot carrier effects, we developed a four-moments QET model in [13]. This model is viewed as one of the hierarchy of quantum hydrodynamic models [12]. In classical hydrodynamic simulations, a four-moments energy transport model was proposed in [14] for simulations of thin-body MOSFETs.
In this work, Fermi-Dirac statistics and nonparabolic corrections are further included for the performance analysis of MOSFETs on high-mobility substrates. In fact, high-mobility materials such as III-V compound semiconductors exhibit strong degeneracy, a low density of states, and nonparabolic band structures [15]. The numerical implementation of Fermi-Dirac statistics is discussed in [16] for QDD models and in [17] for QET models. The electron density n is approximated by introducing the band parameter ω_n as n = n_i exp(q(ϕ + γ_n + ω_n − ϕ_n)/(kT_n)), (1) where ϕ, ϕ_n, and T_n are the electrostatic potential, quasi-Fermi level, and electron temperature, respectively, and n_i, q, and k are the intrinsic carrier density, electronic charge, and Boltzmann constant, respectively. The quantum potential γ_n is described as γ_n = 2b_n (∇²√n)/√n, (2) where b_n = ħ²/(12qm), with m and ħ the effective mass and reduced Planck constant. The band parameter ω_n is determined from the density of states in the conduction band, N_c, and the inverse Fermi function of order 1/2, G_{1/2}. The carrier density n including nonparabolic band effects is given in [18] as a function of the normalized Fermi level η = (E_f − E_c)/kT. The parameter α is a coefficient of nonparabolicity that can be calculated from the normalized band gap g = (E_c − E_v)/kT and the free electron rest mass m_0. A simple analytical approximation of the inverse Fermi function is given in [19] for the weakly degenerate case (η < 10). For high η, we apply Sommerfeld's approximation to calculate the inverse Fermi function; the two approximations are linearly interpolated (a numeric sketch is given at the end of this section). By employing the expression (1) in the QET model, we obtain the current density, where μ_n is the electron mobility. From (2), the quantum potential equation (8) is obtained. The root density ρ_n is written as ρ_n = √n = √(n_i) exp(u_n) using the variable u_n = (q/(kT_n)) · (ϕ + γ_n + ω_n − ϕ_n)/2 from (1). As shown in [16], under Fermi-Dirac statistics, (8) is replaced by the equivalent form b_n ∇·(ρ_n ∇u_n) − (kT_n/q) ρ_n u_n = −(ρ_n/2)(ϕ + ω_n − ϕ_n). (9) If the variable u_n is uniformly bounded, the electron density remains positive. This approach provides a numerical advantage for developing an iterative solution method. For electrons, the four-moments QET model based on Fermi-Dirac statistics then follows, where p, ε, and C_imp are the hole density, the permittivity of the semiconductor, and the ionized impurity density, respectively, and T_L and τ are the lattice temperature and energy relaxation time. The ratio μ_n/μ_s selected here is 0.8 [20]. For holes, similar expressions are obtained. Mobility model For the energy dependence of the mobility, we apply the model of Baccarani et al. [21]. In the homogeneous case, this model is equivalent to the Hänsch mobility model [20] for μ_n(T_n), where v_s is the saturation velocity. As mentioned in [22,23], the Hänsch mobility model is consistent with the high-field mobility model with the parameters ξ = 1/2 and β = 2. To account for the mobility reduction due to ionized impurity scattering, we use the formula of Caughey and Thomas [24] for the low-field mobility μ_LF. The model parameter values [25,26] are summarized in Table 1. In this work, numerical simulations are performed with Baccarani's mobility model (16) for the QET model and the high-field mobility model (18) for the QDD model, respectively. The effects of interface traps and surface roughness scattering are not included in this work.
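Since the interpolated inverse Fermi function is central to the Fermi-Dirac implementation above, a small numeric sketch may help. The weak-degeneracy series below is the well-known Joyce-Dixon expansion, used here as a stand-in for the approximation of Ref. [19]; the blending window around η ≈ 10 is an illustrative choice, not the paper's exact scheme.

import math

_JD = (3.53553e-1, -4.95009e-3, 1.48386e-4, -4.42563e-6)  # Joyce-Dixon coefficients

def inv_fermi_half(u):
    """Return eta = (E_f - E_c)/kT such that F_1/2(eta) ~ u, with u = n/N_c."""
    eta_weak = math.log(u) + sum(c * u**(i + 1) for i, c in enumerate(_JD))
    eta_somm = (0.75 * math.sqrt(math.pi) * u) ** (2.0 / 3.0)  # Sommerfeld asymptote
    if eta_weak < 8.0:
        return eta_weak
    if eta_weak > 12.0:
        return eta_somm
    w = (eta_weak - 8.0) / 4.0        # linear interpolation in the overlap window
    return (1.0 - w) * eta_weak + w * eta_somm

For u << 1 the logarithmic term dominates and the Boltzmann limit η = ln(u) is recovered, so the degeneracy correction ω_n vanishes in nondegenerate regions.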
Device structures The schematic views of the simulated devices are shown in Fig. 1. Si, Ge, and In0.53Ga0.47As n-MOSFETs with high-k/metal gates are examined. Selected material parameters are listed in Table 2. The relative dielectric permittivity considered here is 22, a value representative of HfO2. The equivalent oxide thickness (EOT) is 0.6 nm. The threshold of all devices is obtained by adjusting the gate work function, which is selected for each semiconductor material to meet a common threshold voltage of 0.2 V. The threshold voltage is defined as the gate voltage at which the drain current is 10 µA/µm. The channel length of the simulated devices is varied from 35 to 16 nm. The S/D doping is N_SD = …. Figure 2a and b demonstrates the I_D-V_G characteristics of 50 and 20 nm Si bulk n-MOSFETs at V_d = 0.05 V and 0.8 V, calculated by the QET and QDD models. The same work function is used for both models. In the long channel device, the I_D-V_G characteristics calculated by the two models are almost identical, as shown in Fig. 2a. For the ultra-short channel device, the two models yield different I_D-V_G characteristics due to non-local transport effects and the reduction of the quantum confinement effects; the QET model provides a higher drain current. In the subthreshold regime, electrons spread toward the bulk in the channel, and hence, in ultra-short channel devices, a significant difference between the two models arises. The results clearly indicate that the quantum confinement effect in the ultra-short channel is reduced by the enhanced diffusion due to the high electron temperature. Figure 4a and b shows the electron density distributions calculated by the QET and QDD models for the long and short channel devices, plotted at the center of the channel. In the long channel device, the electron density distributions calculated by the two models are almost identical at the surface. For the ultra-short channel device, due to the hot carrier effects, the electron density distribution calculated by the QET model spreads toward the bulk. This results in the reduction of the charge control calculated by the QET model.
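As a reproducibility aid, the following sketch extracts the threshold voltage by the constant-current criterion defined above, together with SS and DIBL, from a pair of simulated I_D-V_G sweeps. The window choices and normalizations are illustrative assumptions, not taken from the paper.

import numpy as np

I_CRIT = 10e-6   # A/um: constant-current threshold criterion from the text

def vt_constant_current(vg, id_):
    # vg, id_: numpy arrays; assumes I_D rises monotonically with V_G.
    return np.interp(I_CRIT, id_, vg)

def subthreshold_slope(vg, id_):
    mask = (id_ > 1e-10) & (id_ < I_CRIT)           # illustrative subthreshold window
    slope = np.polyfit(vg[mask], np.log10(id_[mask]), 1)[0]
    return 1000.0 / slope                            # SS in mV/dec

def dibl(vg, id_low, id_high, vd_low=0.05, vd_high=0.8):
    dvt = vt_constant_current(vg, id_low) - vt_constant_current(vg, id_high)
    return 1000.0 * dvt / (vd_high - vd_low)         # DIBL in mV/V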
Quantum confinement effects The dependence of the quantum confinement effects on the channel materials is investigated in Figs. 5 and 6, where the results under Fermi-Dirac statistics and Boltzmann statistics are compared. In Fig. 5a, the inversion layer electrons in the Ge and In0.53Ga0.47As n-MOSFETs spread into the bulk at the source end of the channel due to the low effective mass and high permittivity. Figure 5b reveals that in all devices the quantum confinement effect is further reduced by the enhanced diffusion due to the high electron temperature. These properties degrade the short channel effects of the Ge and In0.53Ga0.47As devices compared with the Si devices, as discussed later. Since N_c (= 2.64 × 10^17 cm^-3) of In0.53Ga0.47As is low, the inversion layer electron density in the In0.53Ga0.47As n-MOSFET is further decreased because of the strong degeneracy of the material. In Fig. 7a and b, we compare the electron density distributions calculated by the QET model based on Fermi-Dirac statistics for 20 nm Si, Ge, and In0.53Ga0.47As double-gate n-MOSFETs, respectively. Double-gate structures with a film thickness of 8 nm are simulated. The results are plotted at the source and drain ends of the channel, and the devices are simulated at V_g = 0.8 V and V_d = 0.8 V. The inversion layer electrons in the Ge and In0.53Ga0.47As n-MOSFETs spread into the center of the channel due to the low effective mass and high permittivity. In analogy to the results for the bulk n-MOSFETs, the single inversion layer electron density in the In0.53Ga0.47As n-MOSFET is further decreased because of the strong degeneracy of the material. In Fig. 7a, the Si and Ge n-MOSFETs exhibit two inversion layers at the source end of the channel. For the film thickness of 8 nm, the Ge n-MOSFET forms a single inversion layer due to the hot electron effects at the drain end of the channel, as shown in Fig. 7b. Figure 8a shows that the SS values of the Ge and In0.53Ga0.47As bulk n-MOSFETs are larger than that of the Si n-MOSFET due to the smaller electron effective mass and higher permittivity. It is shown in Fig. 8b that the DIBL of the In0.53Ga0.47As n-MOSFET is suppressed because of the low S/D doping concentration. Figures 9a and b show the dependence of SS and DIBL on the channel length for the Si, Ge, and In0.53Ga0.47As double-gate n-MOSFETs. The short channel effects are suppressed in the multi-gate structure, but the improvements in SS and DIBL differ. In all devices, the DIBL effect is reduced in the double-gate n-MOSFETs; in the In0.53Ga0.47As n-MOSFET, it is significantly reduced due to the low S/D doping concentration, because the thin film suppresses the extension of the drain electric field into the channel. The SS improvement depends on the channel material even in the double-gate structure: the SS values of the Ge and In0.53Ga0.47As n-MOSFETs are larger than that of the Si n-MOSFET due to the low effective mass and high permittivity, in line with the results for the bulk n-MOSFETs. The V_T roll-off characteristics of both bulk and double-gate n-MOSFETs are also shown; the V_T roll-off of the In0.53Ga0.47As n-MOSFET is almost the same as that of the Si n-MOSFET in the double-gate structure because of the low S/D doping concentration, while the Ge n-MOSFET shows the worst short channel effects. Conclusion The quantum confinement and short channel effects of Si, Ge, and In0.53Ga0.47As n-MOSFETs have been evaluated using a 4-moments QET model based on Fermi-Dirac statistics and nonparabolicity. The dependence of the quantum confinement effects on the channel materials has been clarified. The charge control by the gate is reduced in the Ge and In0.53Ga0.47As n-MOSFETs due to the low effective mass and high permittivity, which results in the degradation of short channel effects. The double-gate structure is effective in the suppression of DIBL for all channel materials. The SS degradation depends on the channel material even in the double-gate structure.
3,067.4
2016-03-01T00:00:00.000
[ "Engineering", "Physics" ]
Degrees-Of-Freedom in Multi-Cloud Based Sectored Cellular Networks This paper investigates the achievable per-user degrees-of-freedom (DoF) in the uplink of multi-cloud based sectored hexagonal cellular networks (M-CRAN). The network consists of N base stations (BS) and K ≤ N baseband unit pools (BBUP), which function as independent cloud centers. The communication between BSs and BBUPs occurs by means of finite-capacity fronthaul links of capacities C_F = μ_F · (1/2) log(1+P), with P denoting the transmit power. In the system model, the BBUPs have limited processing capacity C_BBU = μ_BBU · (1/2) log(1+P). We propose two different achievability schemes based on dividing the network into non-interfering parallelogram and hexagonal clusters, respectively. The minimum number of users in a cluster is determined by the ratio of BBUPs to BSs, r = K/N. Both the parallelogram and hexagonal schemes are based on practically implementable beamforming and adapt the way of forming clusters to the sectorization of the cells. The proposed coding schemes improve the sum-rate over naive approaches that ignore cell sectorization, both at finite signal-to-noise ratio (SNR) and in the high-SNR limit. We derive a lower bound on the per-user DoF as a function of μ_BBU, μ_F, and r. We show that the cut-set bound is attained in several cases, that the achievability gap between the lower and cut-set bounds decreases with the inverse of the BBUP-BS ratio, 1/r, for μ_F ≤ 2M irrespective of μ_BBU, and that the per-user DoF achieved through hexagonal clustering cannot exceed the per-user DoF of parallelogram clustering for any value of μ_BBU and r as long as μ_F ≤ 2M. Since the achievability gap decreases with the inverse of the BBUP-BS ratio for small and moderate fronthaul capacities, the cut-set bound is almost achieved even for small cluster sizes in this range of fronthaul capacities. For higher fronthaul capacities, the achievability gap is not always tight but decreases with the processing capacity; the cut-set bound, e.g., at 5M/6, can nevertheless be achieved with a moderate clustering size. Introduction Interference is one of the fundamental obstacles to high-data-rate communications in current and future cellular networks because it restricts the overall spectral efficiency in bits/sec/Hz/base station. Sectorization, which has been used in 4G networks, is one solution that alleviates intra-cell interference by using multiple antennas at the base stations (BS), resulting in directional beams that each cover an intended sector. In the literature, sectorization is often combined with hexagonal cell models, and mostly each cell is divided into three sectors [1,2]. Here, we follow the works in [3][4][5][6] that totally ignore the interference between the sectors of the same cell. In real systems, this is not exactly the case, since the side lobes of the radiation pattern cause signals from adjacent cells to be observed. A related work proposes a distributed iterative solution that achieves the performance of the case in which all BSs are connected to a single BBUP. While the aforementioned works assume non-dynamic clustering for each BBUP, the authors of [30] propose and analyze a dynamic clustering approach based on instantaneous CSI, where they also consider the allocation of the computation resources of the BBUPs as an optimization parameter. In the present work, we consider the uplink of an M-CRAN with multiple-antenna mobile users and multiple-antenna BSs. We assume N ≫ 1 BSs and K ≤ N BBUPs with limited processing capacity and limited fronthaul capacity.
The main interest of this paper is to understand the highest achievable per-user DoF and sum-rate under limited fronthaul and BBUP processing capacities for a given BBUP-BS ratio K/N. We propose two coding schemes, in each of which some mobile users are deactivated to decompose the network into isolated parallelogram and hexagonal clusters, respectively. For both clustering types, the minimum number of mobile users/sectors is determined by the BBUP-BS ratio due to the one-to-one association between BBUPs and clusters. Each BBUP collects quantized versions of the received signals of the associated cluster through fronthaul links and decodes them jointly. The considered decoding scheme is thus reminiscent of clustered decoding as performed in [10,31]. The contributions of this paper are:
• We propose a specific non-dynamic way of silencing mobile users in parallelogram clustering. One could attempt to silence entire cells; instead, we find an efficient way of dividing the network into non-interfering parallelogram clusters by silencing mobile users mostly in single sectors of the considered cells;
• We propose achievability schemes for both parallelogram and hexagonal clusterings and derive lower bounds on the per-user DoF for both schemes as functions of the fronthaul and BBUP processing capacities and the BBUP-BS ratio;
• We prove that the performance of parallelogram clustering cannot be worse than that of hexagonal clustering for small and moderate fronthaul capacities;
• We show by simulations that, for high fronthaul capacities, the coding scheme proposed for hexagonal clustering can perform better than parallelogram clustering if the processing capacity is large enough for the given BBUP-BS ratio.
The upper bound is obtained through a cut-set argument. In several cases, the upper and lower bounds match. For small and moderate fronthaul capacities, the achievability gap is given as a function of the fronthaul capacity and the BBUP-BS ratio, and it is shown that it decreases with the inverse of the BBUP-BS ratio irrespective of the BBUP processing capacity. In the finite SNR case, we compare the proposed coding schemes with the following schemes:
• Naive versions of both schemes, where all mobile users in certain cells are deactivated;
• Interfering versions of both schemes, where the network is decomposed into non-overlapping but interfering clusters;
• An opportunistic scheme, where each message is decoded based on the received signals of the three neighboring sectors that have the strongest channel gains.
The finite SNR analysis shows that, in the strong interference regime, the proposed schemes outperform all other schemes over almost the entire SNR range under all scenarios except two: the 3-sector decoding scheme in the low SNR range with scarce BBUP capacities, and the non-interfering schemes in the moderate SNR range with high BBUP capacities. An interesting outcome of the finite SNR analysis is that the interfering clustering schemes perform either close to or better than the proposed schemes in the finite SNR range under both the weak and strong interference regimes; therefore, the interfering clusterings can be employed at finite SNR values with minor performance losses, since they may be more convenient for practical systems. Organization The rest of the paper is organized as follows: This section ends with some remarks on notation. The following Section 2 describes the problem definition. Section 3 presents the main results of the paper.
In Sections 4 and 5, we present the coding schemes for the parallelogram and hexagonal clusterings, respectively. Section 6 presents the achievability results for the naive schemes, and Section 7 presents simulation results for the per-user DoF. In Section 8, we present the results of the finite SNR analysis. We conclude the paper with Section 9; some technical proofs are presented in the appendices. Notation We denote the set of all integers by Z, the set of positive integers by Z+, and the set of real numbers by R. For other sets, we use calligraphic letters, for example, X. We represent random variables by uppercase letters, for example, X, and their realizations by lowercase letters, for example, x. We use boldface notation for vectors, that is, uppercase boldface letters such as X for random vectors and lowercase boldface letters such as x for deterministic vectors. Matrices are depicted with a sans serif font, for example, H. We also write X^(n) for the tuple of random vectors (X_1, . . . , X_n). Network Model Consider the uplink communication in a cellular network consisting of N ≫ 1 hexagonal cells, as depicted in Figure 1. Each cell contains a base station (BS) equipped with 3M directional receive antennas and is divided into three sectors, where each sector is covered by M receive antennas. The use of directional antennas, whose side-lobe radiation patterns are negligible, implies that communications in the three sectors of a cell do not interfere with each other. It is assumed that different mobile users in the same sector perform orthogonal multiple access, as is typical for current 4G networks [32]; thus, the model is restricted to a single mobile user per sector. For simplicity and symmetry, each mobile user is assumed to be equipped with M transmit antennas. It is assumed that the signal from a mobile user attenuates rapidly enough that it cannot cause interference at the sector receive antennas (Rx) of non-adjacent sectors. These assumptions lead to the interference graph in Figure 1, where each small circle depicts a mobile user and Rx pair, and solid black lines between any two circles represent symmetric interference between mobile users and Rxs of adjacent sectors. Let N = {1, . . . , N} be the index set of all cells and associated BSs in the network, and let T = {1, . . . , 3N} be the index set of all sectors and their corresponding users and Rxs. Then, the observed signal at Rx u ∈ T is given by the discrete-time input-output relation
Y_{u,n} = Σ_{v ∈ T_u} H_{u,v} x_{v,n} + z_{u,n},
where
• n denotes the channel use;
• T_u denotes the index set of mobile users whose transmitted signals are observed by Rx u (including mobile user u);
• x_{v,n} denotes the M-dimensional time-n signal sent by mobile user v;
• z_{u,n} denotes the M-dimensional i.i.d. standard Gaussian noise vector corrupting the time-n signal at Rx u; it is independent of all other noise vectors;
• and H_{u,v} denotes an M-by-M dimensional random matrix, with entries drawn independently according to a standard Gaussian distribution, that models the channel from mobile user v to Rx u.
The channel matrices are randomly drawn but assumed to be constant over the n channel uses employed for the transmission of a message. In other words, the block length of a transmission is assumed to be shorter than the coherence time of the channel. Realizations of the channel matrices are assumed to be known by the corresponding BSs, but not by the mobile users.
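To make the notation concrete, here is a toy numerical instantiation of the input-output relation above for a single Rx; the adjacency set T_u is a made-up example rather than the hexagonal geometry of Figure 1.

import numpy as np

rng = np.random.default_rng(0)
M, P = 2, 10.0
T_u = [0, 1, 2]     # users heard by Rx u (a made-up adjacency set)

x = {v: np.sqrt(P / M) * rng.standard_normal(M) for v in T_u}  # transmit signals with power ~ P
H = {v: rng.standard_normal((M, M)) for v in T_u}              # i.i.d. N(0,1) channel matrices
z = rng.standard_normal(M)                                     # unit-variance noise at Rx u

y_u = sum(H[v] @ x[v] for v in T_u) + z                        # Y_u = sum_v H_{u,v} x_v + z_u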
Uplink Communication Model with M-CRAN Architecture Consider the network model defined in Section 2.1. Assume that the mobile user in sector u ∈ T wishes to send its message W_u, which is selected at random from the set {1, . . . , 2^{nR_u}}, to the BS in which its sector is located. To this end, mobile user u encodes its message with an encoding function whose output X^(n)_u = (X_{u,1}, . . . , X_{u,n}), with X_{u,n'} ∈ R^M a column vector for n' = 1, . . . , n, satisfies the power constraint (1/n) Σ_{n'=1}^{n} ‖X_{u,n'}‖² ≤ P with probability 1. We assume that the decoding of the receive signals during the uplink communication is performed by the K ≤ N BBUPs, and that any BS j ∈ N can access any BBUP k ∈ {1, . . . , K} through a one-hop fronthaul link, which is modeled as noise-free but capacity-limited. Definition 1 (Observation Function). Let U_k be the index set of BSs communicating with BBUP k. Each BS j ∈ U_k sends an observation function φ_{j,k} of its receive signals, with u_{j,1}, u_{j,2}, and u_{j,3} denoting the three sectors of BS j. To account for the capacity limits of the fronthaul links, we require that the rate of each observation function not exceed C_F = μ_F · (1/2) log(1 + P), where μ_F is the fronthaul capacity prelog, a positive constant. Let D_k be the index set of sectors whose messages are to be decoded at BBUP k. After receiving the observation functions, for each u ∈ D_k, BBUP k applies a deterministic and invertible function g_{k,u} on the relevant observation functions to decode the message W_u. Decoding is successful if Ŵ_u = W_u for all u ∈ T. Increasing the computational power of a processor increases its complexity. Hence, to take the computational limitation into consideration, we impose a complexity constraint on the BBUPs in terms of bit-processing capacity per channel use: we assume that BBUP k can implement the decoding process if and only if the sum rate of all observation functions sent to BBUP k does not exceed C_BBU = μ_BBU · (1/2) log(1 + P), where μ_BBU is the processing capacity prelog, a positive constant. Capacity and Degrees of Freedom A rate-tuple {R_u}_{u∈T} is said to be achievable if, for every ε > 0 and sufficiently large n, there exist encoding, observation, and decoding functions satisfying (3), (6), and (9) such that the probability of a decoding error is smaller than ε. The capacity region C(P, μ_F, μ_BBU, K) is the closure of all achievable rate-tuples {R_u}_{u∈T}, and the maximum sum-rate is defined as the supremum of the sum of the rates over all achievable rate-tuples {R_u}_{u∈T} ∈ C(P, μ_F, μ_BBU, K). Definition 2 (Per-User DoF). For any BBUP-BS ratio r ∈ (0, 1], fronthaul capacity prelog μ_F > 0, and processing capacity prelog μ_BBU > 0, the per-user DoF is given by the high-SNR limit of the maximum sum-rate normalized by the number of users 3N and by (1/2) log(1 + P). Here, note that the allowed interval of r guarantees that the system model restriction K ≤ N is satisfied. In the following, we use the abbreviation DoF to designate the per-user DoF. Main Results We derive two lower bounds and an upper bound on the DoF. As we will show, they match in some cases. The first and second lower bounds are achieved by the schemes described in Sections 4 and 5, respectively. Both schemes are based on deactivating a set of mobile users. In the first scheme, the mobile users are deactivated so that the remaining active users form parallelogram-like clusters; in the second, the remaining active users form hexagon-like clusters. We name these two DoF lower bounds the parallelogram bound and the hexagon bound, respectively. Theorem 1 (Lower Bound).
For any μ_BBU > 0, μ_F > 0, and 0 < r ≤ 1, the achievable DoF is lower bounded by the larger of the parallelogram bound and the hexagon bound, where the parallelogram bound is maximized over all positive integers t_1, t_2 satisfying t_1 t_2 ≥ 1/r, and the hexagon bound is maximized over all positive integers t satisfying t ≥ 1/(3r). Proof. The proof is given in Sections 4 and 5. Remark 1. For μ_BBU > 0 and 0 < r ≤ 1, the hexagon bound cannot exceed the parallelogram bound as long as μ_F ≤ 2M. Proof. The proof is given in Appendix A. Theorem 2 (Cut-Set Bound). For any μ_BBU > 0, μ_F > 0, and 0 < r ≤ 1, the achievable DoF is upper bounded by a cut-set bound. Proof. The proof is given in Appendix B. Corollary 1 (Optimality in some special cases). Proof. The proofs are given in Appendix C. Proof. The proof is given in Appendix D. Uplink Scheme with Parallelogram Clustering In the proposed uplink scheme, we deactivate a subset of mobile users so as to partition the network into non-interfering clusters of active users. These clusters have parallelogram shapes and are parametrized by a positive integer pair (t_1, t_2). Construction of Parallelogram Clusters For a given (t_1, t_2) pair, we define a regular parallelogram grid such that the length of the sides of a parallelogram in the diagonal direction (−30 degrees to the horizontal axis) is t_1 cell-hops, and the length of the sides in the vertical direction is t_2 cell-hops. Then, we fit this parallelogram grid onto the network in such a way that the intersections of the grid coincide with the BSs, which are assumed to be at the centers of the cells. Subsequently, we deactivate all mobile users coinciding with the sides of the grid. This process divides the network into parallelogram-like non-interfering clusters of active users and their sectors, which we refer to as p-clusters. In Figure 2, we present an example of parallelogram clustering for (t_1, t_2) = (2, 2), where the users coinciding with the green lines are deactivated. Throughout this section, we refer to active users simply as users. The users of a p-cluster are located in BSs with three users, BSs with two users, and a single BS with one user, and the number of users n_p in a p-cluster follows from these counts. Let K = {1, . . . , K_p}, with K_p ≤ K, be the index set of p-clusters. We associate each p-cluster with a single BBUP and denote the associated BBUP by the same index k ∈ K as the p-cluster. Let I_k be the index set of BSs whose users are elements of the kth p-cluster. Each BS j ∈ I_k sends an observation function to the kth BBUP, that is, U_k = I_k. To be able to determine a BBUP-BS ratio, we need to partition all BSs equally among the BBUPs. Note that any BS j ∈ N with one user or three users is an element of a single index set I_k, k ∈ K, and any BS j ∈ N with two users is an element of two different index sets, I_k and I_{k_1}, k, k_1 ∈ K. Therefore, of the BSs in I_k, we associate all of those with one user or three users, and half of those with two users, with BBUP k. This leads to the BBUP-BS ratio r_p. We can choose any (t_1, t_2) ∈ Z+ pair to construct p-clusters that satisfies r_p ≤ r, i.e., t_1 t_2 ≥ 1/r. Coding Scheme Each mobile user u encodes its message W_u, which is uniformly distributed over the set W_u = {1, . . . , 2^{nR_u}}, with a multi-antenna Gaussian codebook of power P. Since the Rxs of silenced users observe only interference, each BS j generates its observation function for the (active) Rxs through independent quantization codebooks. To generate the quantization codebooks, each BS j applies a point-to-point Gaussian vector quantizer to the receive signal of each Rx so that the noise-level quantization rates imposed in the following are satisfied. Let J_k denote the sector index set of p-cluster k.
We choose D_k = J_k, where D_k is the index set of sectors whose messages are to be decoded at BBUP k. Each BS j ∈ I_k with three users transmits a message consisting of the three quantization messages of its Rxs to BBUP k, and each BS j ∈ I_k with two users transmits the quantization message of Rx u to BBUP k only if u ∈ J_k. A BS j ∈ I_k with a single user transmits the only quantization message of its cell to BBUP k. Depending on the prelogs μ_BBU and μ_F, there are three different quantization rates: all BSs with three users quantize each receive signal at the rate R_q1 = μ_q1 · (1/2) log(1 + P), all BSs with two users quantize each receive signal at the rate R_q2 = μ_q2 · (1/2) log(1 + P), and all BSs with one active user quantize their receive signals at the rate R_q3 = μ_q3 · (1/2) log(1 + P). After receiving the quantization messages, each BBUP k reconstructs all observations with a quantization noise term, i.e., {Ŷ^(n)_u}_{u∈D_k}. The input-output relationship experienced by each BBUP k is a multi-user MIMO-MAC channel ([33], Chapter 9, and [34]), where the effective noise is the sum of the channel and quantization noises. Since the channel matrix from the mobile users of D_k to the Rxs of D_k is known by BBUP k and is square and full rank with probability 1, each BBUP k can perform joint decoding with vanishingly small average error probability, which achieves the same DoFs as if each user message were decoded in a point-to-point communication. That is, the prelogs μ_q1, μ_q2, and μ_q3 are achieved for the respective mobile users. To find the DoF in the asymptotic case (the limit N → ∞ is only needed to eliminate edge effects), we need to partition the deactivated users of the network equally among the p-clusters. Note that the deactivated users around a p-cluster are located on the green lines of its four sides, and each side is on the border of two p-clusters. Therefore, when half of the deactivated users around a p-cluster, i.e., (t_1 + t_2) users, are associated with the p-cluster itself, the equal partition of the deactivated users is achieved. The DoF of the scheme is then obtained as the ratio in (26), where the numerator is the sum-DoF of a p-cluster and the denominator is the total number of active and deactivated users associated with a p-cluster. In the following, we give a policy for choosing the quantization rates for any (t_1, t_2) satisfying (25). The DoF of an M × M MIMO system with independently fading channels, which is our case, is M, as given in [35]: the quantization rate (M/2) log(1 + P) is enough to describe the message set W_u of any user u asymptotically. Thus, here, we are not restricted by the processing capacity prelog μ_BBU; the only restricting factor is the fronthaul capacity prelog μ_F. The main policy is to distribute the transmission resources among the (active) users of any given BS unless the per-sector transmission capacity exceeds the rate providing the maximum DoF M, i.e., (M/2) log(1 + P).
To this end, we determine the quantization rates according to μ_F (a compact sketch of this allocation is given at the end of this section):
• If μ_F ≤ M, the transmission resource of a fronthaul link is allocated equally among the Rxs of a BS, and the corresponding achievable DoF follows.
• If M ≤ μ_F ≤ 2M, the transmission resource of a fronthaul link is allocated equally among the Rxs of a BS with two or three users; however, any BS with one user quantizes its receive signal at the maximum rate, since each fronthaul link has enough capacity to support that communication rate.
• If 2M ≤ μ_F ≤ 3M, the transmission resource of a fronthaul link is allocated equally among the Rxs of a BS with three users; however, any BS with one or two users quantizes its receive signals at the maximum rate for each Rx, since each fronthaul link has enough capacity to support that communication rate (M ≤ μ_F/2).
• If 3M ≤ μ_F, all BSs quantize their receive signals at the maximum rate at each sector (M ≤ μ_F/3).
Under this last condition, the achievable sum-DoF of a p-cluster, which is given in the numerator of (26), can be restricted by the processing capacity prelog μ_BBU. If μ_BBU is not smaller than the achievable sum-DoF of a p-cluster for the given interval of μ_F, the process implemented in Section 4.2.1 is applied, and hence the DoF expressions are given as in (27), (28), (29), and (30), respectively. However, if the processing capacity prelog μ_BBU is smaller than the sum-DoF for the given μ_F, we distribute the processing resource of a BBUP equally among the sectors of a cluster, and the quantization rate at each sector is chosen so that the processing constraint is met with equality. To provide fairness among the achievable DoFs of the users, instances of the proposed scheme are time-shared so that each mobile user takes all relative positions in a p-cluster, which requires a number of different instances.
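The per-sector allocation rule in the four regimes above can be summarized compactly: the fronthaul prelog of a BS is split equally among its active Rxs, and each per-sector share is capped at M (the maximum useful DoF of an M × M MIMO link). The helper below reproduces all four bullets under this reading; it is our compact paraphrase rather than the paper's exact expressions.

def per_sector_quantization_prelog(mu_F, n_active, M):
    """n_active: number of active Rxs/users at this BS (1, 2, or 3)."""
    return min(mu_F / n_active, M)   # equal split, capped at the max useful prelog M

# Examples: mu_F = 2*M with 3 active users -> 2M/3 per sector;
#           mu_F = 2*M with 2 active users -> M (the cap binds, as in the
#           "maximum rate" bullets above).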
Uplink Scheme with Hexagon Clustering As in the last section, we deactivate a subset of mobile users so as to partition the network into non-interfering clusters of active users and their sectors. The shape of the clusters is hexagonal, and the size of the hexagons is set by a positive integer t. Construction of Hexagon Clusters For a given design parameter t, we choose some BSs as center BSs to construct a regular grid of equilateral triangles in which neighboring center BSs are 2t cell-hops apart. Therefore, the maximum distance to the closest center BS is t cell-hops, and we call the BSs whose distance to the closest center BS is ℓ cell-hops layer-ℓ BSs, for ℓ = 1, . . . , t. We designate all BSs located t cell-hops above and below any center BS as corner and null BSs, respectively. Then, we create solid green lines between any closest null and corner BSs (t cell-hops apart from each other), which creates hexagonal grids across the entire network. Subsequently, we deactivate the mobile users coinciding with the solid green lines. This process divides the network into hexagon-like non-interfering clusters, which we call h-clusters (hexagonal clustering was first presented in [36]). Figure 3 shows an example of the partition for t = 3. Later on, we refer to active users simply as users. The number of users in an h-cluster, n_h, follows from the per-layer counts; summing the quantization-message counts given below, n_h = (9t² − 9t + 6) + (6t − 6) = 9t² − 3t. Let K = {1, . . . , K_h}, with K_h ≤ K, be the index set of h-clusters. We associate each h-cluster with a single BBUP and denote the associated BBUP by the same index k ∈ K as the h-cluster. Let I_k be the index set of BSs whose users are elements of the kth h-cluster. Each BS j ∈ I_k sends an observation function to the kth BBUP, that is, U_k = I_k. To determine the BBUP-BS ratio r_h, we need to partition all BSs equally among the BBUPs. Note that each layer-t BS, except those in the corners, belongs to two different index sets, I_k and I_{k_1}, k, k_1 ∈ K. Each corner BS is an element of three different index sets, I_k, I_{k_1}, and I_{k_2}, k, k_1, k_2 ∈ K. In addition, note that each null BS around an h-cluster k is on the border of three different h-clusters. To this end, of the BSs in I_k and the null BSs around h-cluster k, we assign to BBUP k all layer-ℓ BSs, ℓ = 1, . . . , t − 1, including the center BS, half of the layer-t BSs except the corner and null BSs, and one third of the corner and null BSs, which leads to the BBUP-BS ratio r_h. Since we are given the ratio r, we can choose any t ∈ Z+ such that r_h ≤ r, i.e., t ≥ 1/(3r). Coding Scheme Each mobile user u encodes its message W_u, which is uniformly distributed over the set W_u = {1, . . . , 2^{nR_u}}, with a multi-antenna Gaussian codebook of power P. As in Section 4, after observation at the sector antennas, each BS j generates an observation function for the (active) Rxs through independent quantization codebooks. To generate the quantization codebooks, each BS j applies a point-to-point Gaussian vector quantizer to the receive signal of each Rx such that the following noise-level quantization rate constraints are met. Let J_k denote the sector index set of h-cluster k. We choose D_k = J_k. Each BS j ∈ I_k of layer ℓ, ℓ = 1, . . . , t − 1, transmits a message consisting of the three independent quantization messages of its Rxs to BBUP k, and each BS j ∈ I_k of layer t transmits the quantization message of sector u to BBUP k only if u ∈ J_k. Depending on the prelogs μ_BBU and μ_F, there are two different quantization rates: each BS with three users quantizes each receive signal at the rate R_q1 = μ_q1 · (1/2) log(1 + P), and each BS with two users quantizes each receive signal at the rate R_q2 = μ_q2 · (1/2) log(1 + P). That is, in h-cluster k, the receive signals of all layer-ℓ BSs, ℓ = 1, . . . , t − 1, and the receive signals of every corner BS are quantized at rate R_q1 (9t² − 9t + 6 signals), and the receive signals of the layer-t BSs other than the corner BSs are quantized at R_q2 (6t − 6 signals). After obtaining the quantization messages, BBUP k reconstructs all {Ŷ^(n)_u}_{u∈D_k} with quantization error. The input-output relationship experienced at BBUP k is a multi-user Gaussian MIMO-MAC. Each BBUP k then performs joint decoding with vanishingly small probability of error, since the channel matrix from the users of D_k to the Rxs of D_k is known by BBUP k and is square and full rank with probability 1. This leads to achieving the DoFs μ_q1 and μ_q2 for the respective mobile users. To find the DoF in the asymptotic case, i.e., N → ∞, we need to partition the deactivated users of the network equally among the h-clusters. The number of deactivated users around h-cluster k is 6t. Since each deactivated user is on the border of two h-clusters, we associate half of them, i.e., 3t, with the users of h-cluster k, which gives the DoF expression in (36). In the following, we give the policy for choosing the quantization rates. Case 1: μ_BBU ≥ n_h M As in Section 4.2.1, μ_F is the only limiting factor, since the quantization rate M · (1/2) log(1 + P) is enough to describe the message W_u of any user u in the asymptotic case.
The policy is again to distribute the transmission resources equally among the (active) Rxs of any given BS. To this end, we choose the quantization rates according to μ_F:
• If μ_F ≤ 2M, the transmission resource of a fronthaul link is allocated equally among the Rxs, and the corresponding achievable DoF follows.
• If 2M ≤ μ_F ≤ 3M, the transmission resource of a fronthaul link is allocated equally among the Rxs of a BS with three users; however, any BS with two users quantizes its receive signals at the maximum rate at each Rx, since each fronthaul link has enough capacity to support that communication rate (M ≤ μ_F/2).
• If μ_F ≥ 3M, all BSs quantize their receive signals at the maximum quantization rate (M ≤ μ_F/3).
Case 2: μ_BBU ≤ n_h M Under this condition, depending on μ_F, the achievable sum-DoF of an h-cluster can be restricted by the processing capacity prelog μ_BBU. The achievable sum-DoF is given in the numerator of (36). Therefore, if the processing capacity prelog μ_BBU is not smaller than the achievable sum-DoF of an h-cluster for the given interval of μ_F, the process implemented in Section 5.2.1 is applied, and hence the DoF expressions are given as in (38), (40), and (42), respectively. However, if the processing capacity prelog μ_BBU is smaller than the sum-DoF for the given μ_F, we distribute the processing resource of a BBUP equally among the sectors of a cluster, and the quantization rate at each sector is chosen so that the processing constraint is met with equality. To provide fairness among the achievable DoFs of the users, instances of the proposed scheme are time-shared so that each mobile user takes all relative positions in an h-cluster, which requires a number of different instances. DoF without Sectorization In the two proposed achievability schemes ("p-clustering" and "h-clustering"), we considered three non-interfering sectors in each cell. Now, if we consider cells without sectors, we can naively adapt our clustering by deactivating all users in the border cells of the clusters. That is, for p-clustering, this requires the deactivation of all users in the cells with one or two active mobile users, and, for h-clustering, it requires the deactivation of all users in the corner cells and in the cells with two active users. This means that, for both schemes, the network consists only of cells with three active users and cells with no active users. This again partitions the network into non-interfering p-clusters and h-clusters without changing r_p or r_h for any given (t_1, t_2) pair or t, respectively. By following a procedure similar to that introduced in Sections 4 and 5, one can easily state the following result by simply distributing the available transmission resources equally among the three Rxs of a given BS as long as the BBUP capacity is sufficient, or otherwise distributing the BBUP processing resources equally among the Rxs of a p-cluster/h-cluster. This leads to the following lemma: Lemma 1 (DoF for naive scheme). For any μ_BBU > 0, μ_F > 0, and 0 < r ≤ 1, the achievable DoF in a multi-cloud based non-sectored cellular network is given by a parallelogram bound maximized over all positive integers t_1, t_2 satisfying t_1 t_2 ≥ 1/r and a hexagon bound maximized over all positive integers t satisfying t ≥ 1/(3r). Notice that the same cut-set bound, Theorem 2, applies to the naive schemes, since the observation functions of Definition 1 are defined not on a sector basis but on a BS basis.
Numerical Results and Discussion
In this section, we present simulation results to evaluate the proposed coding schemes for p-clustering and h-clustering. In Figure 4a, we investigate the effect of the clustering size on the achievable DoF for several fronthaul capacities µ_F = [3, 7, 11] and µ_BBU = 428. We define the size of a p-cluster as the inverse of r_p, i.e., 1/r_p = t_1 t_2, and also denote it by the side-length pair (t_1, t_2). We define the size of an h-cluster as the inverse of r_h, i.e., 1/r_h = 3t², and also denote it by the parameter t. We observe that, for p-clustering, when the fronthaul capacity is small, i.e., µ_F ≤ M, the clustering size has no effect on the DoF, since µ_F becomes the bottleneck. In general, for both p-clustering and h-clustering, the clustering size giving the highest DoF decreases with µ_F. The figure verifies Remark 1, since p-clustering outperforms h-clustering for all r_p = r_h when µ_F = [3, 7]. It is also interesting to note that, for p-clustering, the achievable DoF is not monotonically increasing (decreasing) before (after) reaching its maximum for µ_F = 11 (i.e., 2M < µ_F ≤ 3M), since not only the clustering size but also the side lengths of the p-cluster matter for exploiting interference. For any r_p, choosing the (t_1, t_2) pair with the minimum sum t_1 + t_2 gives the maximum DoF, since it provides a higher joint processing gain for a p-cluster of the given size 1/r_p; that is, the closer t_1 and t_2 are to each other, the more mutual information the clusters have. Therefore, larger p-cluster sizes may not result in a higher DoF, owing to this side-length effect. For µ_F ≤ 2M, however, the side lengths of a p-cluster have no effect on the achievable DoF for a given cluster size.

Figure 4b shows the effect of the clustering size on the DoF for various values of µ_BBU = [100, 300, 500] and µ_F = 12. For each µ_BBU, the achievable DoF increases with the cluster size until µ_BBU becomes the bottleneck, i.e., until µ_BBU becomes active in the achievability expression. Accordingly, the results clearly indicate that more processing power makes larger cluster sizes possible and hence a larger DoF.

In Figure 5, we plot the achievable DoF and the cut-set bound versus µ_F for M = 4, r = 0.025 and µ_BBU = 428, which corresponds to the case where the BBUP processing capacity equals the processing capacity required when each receive signal in a p-cluster of size (t_1, t_2) = (5, 8) is quantized at the maximum quantization rate R_q = (M/2) log(1+P). From the figure, we can deduce that the upper bound can almost be reached for µ_F ≤ 2M, which means that 2M/3 DoF is almost achievable at µ_F = 2M, provided the processing capacity is high enough. Figure 5 also depicts the operating points of the clustering sizes. For µ_F ≤ 8, equivalently µ_F ≤ 2M, any p-clustering with 1/r_p = 40 gives the highest achievable DoF for the given system parameters. However, for µ_F > 8, there are several different operating points. For example, for 8 < µ_F ≤ 9.4, the h-clustering of size t = 4 is the optimal clustering size, which means that, for µ_F > 2M, dividing the network into h-clusters provides a higher joint processing gain than p-clustering for the same r_h = r_p if the BBUP processing capacity suffices. For the remaining range, the optimal cluster size 1/r_p decreases with µ_F, because the given BBUP capacity is not enough to handle the quantized data for larger cluster sizes.
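The side-length observation above can be made concrete with a few lines of search code. A minimal sketch (our own helper name, not from the paper) that, for a given cluster size 1/r_p, enumerates the factor pairs and picks the most balanced one:

```python
def best_side_lengths(inv_r_p: int):
    """Among factorizations t1*t2 = 1/r_p, pick the pair with minimum
    t1 + t2 (the most 'square' parallelogram), which the discussion
    above identifies as DoF-maximizing for 2M < mu_F <= 3M."""
    pairs = [(t1, inv_r_p // t1) for t1 in range(1, inv_r_p + 1)
             if inv_r_p % t1 == 0]
    return min(pairs, key=lambda p: p[0] + p[1])

print(best_side_lengths(40))  # -> (5, 8), the pair used in Figure 5
```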
At the operating point µ_F = 12, which allows the maximum quantization rate for each receive signal, the p-clustering of size (t_1, t_2) = (5, 8) achieves the cut-set bound. This shows that the proposed scheme utilizes the system resources optimally at this operating point, and almost 9M/10 DoF is achievable. We also plot the lower bound on the DoF achieved by the naive scheme versus µ_F for the same parameters. We can clearly see that the performance of the proposed schemes is considerably better than that of the naive schemes, owing to the sectorization gain brought by nulling intra-cell interference.

In Figure 6, we plot the achievable DoF and the cut-set bound as a function of the processing capacity prelog µ_BBU, for r = 0.025 and µ_F = 12, which means that the fronthaul capacity has no restrictive effect on the achievable DoF. The operating points of the clustering sizes with respect to µ_BBU are also presented. The plot clearly indicates that the cut-set bound is achieved up to µ_BBU = 428, i.e., the processing resources are used efficiently even up to the point of achieving 9M/10 DoF. Over the rest of the µ_BBU range, the optimal clustering sizes (1/r_p or 1/r_h) increase with µ_BBU, and for most of µ_BBU > 428, h-clustering provides the highest DoF. This indicates the advantage of employing h-clustering when the processing capacity is high enough. For some range of µ_BBU, both the h-clustering of size t = 4 and the p-clustering of size (t_1, t_2) = (8, 8) provide the highest DoF, which shows that h-clustering with a smaller clustering size provides a higher joint processing gain than p-clustering with a larger clustering size due to the clustering geometry. The figure also depicts the lower bound achieved by the naive approach versus µ_BBU for the same parameters, and the gain of sectorization is clearly visible at higher values of the processing capacity.

Finite SNR Analysis
In this section, we compare the finite-SNR performance of the proposed schemes with that of several other schemes, introduced below. For the finite-SNR case, the quantization rates for both proposed clusterings are chosen as stated in Sections 4.2 and 5.2, but the conditions specific to the high-SNR regime are not applied, i.e., the prelog of a quantization rate is not reduced to the number of antennas M. Each BBUP then implements joint decoding for the users of the associated cluster after reconstructing all sector receive signals of the cluster. For simplicity, we present the comparisons for M = 1 throughout the section. To evaluate the performance of the proposed schemes at finite SNR values, we compare them, in addition to the naive schemes, with three different schemes:

• Scheme 1 is a variation of the proposed p-clustering scheme. In p-clustering, each p-cluster is surrounded by deactivated users located on the sides of a (t_1, t_2)-hop parallelogram, where the sides carry t_1 and t_2 deactivated users, respectively. For each p-cluster, we associate all deactivated users on the lower and right sides of the (t_1, t_2)-hop parallelogram with the p-cluster under consideration. Subsequently, we reactivate all deactivated users and allow each BBUP to collect the quantization messages of the reactivated user sectors associated with its own p-cluster. This process partitions the network into non-overlapping but interfering parallelogram-like clusters, which we call I_p-clusters in the following; see Figure 7 for an example with (t_1, t_2) = (4, 3). Note that I_p-clustering requires the same BBUP-BS ratio r_p as the p-clustering case.
With the reactivation of all deactivated mobile users, there are 3t_1 t_2 active users in each I_p-cluster and every cell contains three active users. Therefore, each BS partitions its fronthaul transmission resources equally among its Rxs if the BBUP processing resources are enough to implement joint decoding; otherwise, the processing resources are distributed evenly among all Rxs of the I_p-cluster, i.e., the quantization rate is chosen accordingly, over all positive integer (t_1, t_2) pairs satisfying t_1 t_2 ≥ 1/r. To estimate the user messages, each BBUP implements joint decoding by treating out-of-cluster interference as noise.

• Scheme 2 is a variation of the proposed h-clustering scheme. In h-clustering, there are 6t deactivated users around a cluster of size t. For a specific h-cluster, we associate the deactivated users on the borders of three adjacent h-clusters, e.g., east, southeast and southwest, with the h-cluster under consideration. We then replicate this process for each h-cluster with the same relative directions of the adjacent h-clusters. Subsequently, we reactivate all deactivated users and allow each BBUP k to collect the quantized received signals of the sectors of the reactivated users associated with its own h-cluster. This process partitions the network into interfering but non-overlapping clusters, which we call I_h-clusters in the following; see Figure 8 for t = 2. Note that I_h-clustering requires the same BBUP-BS ratio as the h-clustering case. With the reactivation of the deactivated users, there are 9t² active users in each I_h-cluster. Therefore, by applying arguments similar to those above, the quantization rate for I_h-clustering is chosen over the positive integers t satisfying t ≥ √(1/(3r)). To estimate the user messages, each BBUP implements joint decoding by treating out-of-cluster interference as noise.

• Scheme 3 is a variation of practical opportunistic schemes. The decoding depends on the realization of the channel coefficients. With the help of the neighbors of the considered BS, the corresponding BBUP identifies, for each user in the corresponding cell, the three adjacent sectors that give the best joint decoding performance for the corresponding message. To make a fair comparison between the proposed schemes and this 3-sector decoding scheme, we impose the same fronthaul rate constraint on the 3-sector decoding scheme as in the non-interfering clustering scheme (note that there are no silenced users in the 3-sector decoding case), assuming that all processing resources are used; the quantization rates are chosen accordingly. The BBUP then collects the quantization messages and decodes the corresponding message based on them.

In our numerical comparison, we average the rate over 5000 independent realizations of the channel matrices, where for each realization all channel gains are drawn independently of each other according to a Gaussian distribution, by which we aim to model the random location of a mobile user. The direct channel gains of intra-sector links are drawn with variance 1 and the cross channel gains of inter-sector links are drawn with variance α² < 1, where α is the channel attenuation coefficient, since a mobile user in an adjacent sector cannot be closer to a sector receiver than the user in the considered sector. Figure 9 compares the performance of the proposed schemes with the naive schemes, the I_p-clustering and I_h-clustering schemes, and the 3-sector decoding scheme versus SNR for r = 1/10.
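The channel model used for these comparisons is easy to reproduce. The sketch below, a simplified stand-in that ignores the exact sector adjacency geometry, draws one realization with unit-variance direct gains and variance-α² cross gains (M = 1) and averages a user-supplied sum-rate function over independent draws; `rate_fn` is a placeholder for whichever scheme's rate computation is being evaluated:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_channel(n_sectors: int, alpha: float) -> np.ndarray:
    """One realization of the sector-to-sector channel matrix (M = 1):
    direct intra-sector gains have variance 1, cross inter-sector gains
    variance alpha**2 < 1, all drawn as independent Gaussians."""
    H = alpha * rng.standard_normal((n_sectors, n_sectors))
    H[np.diag_indices(n_sectors)] = rng.standard_normal(n_sectors)
    return H

def average_sum_rate(rate_fn, n_sectors, alpha, n_draws=5000):
    """Average a scheme's sum-rate over independent channel draws,
    mirroring the 5000-realization averaging described above."""
    return np.mean([rate_fn(draw_channel(n_sectors, alpha))
                    for _ in range(n_draws)])
```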
The simulations are performed for different cluster sizes such that t_1 t_2 = 10, 11 and 12 and t = 2. However, in the subfigures of Figure 9, we present only the configurations showing relatively better performance than the others, to keep the presentation clear. As seen from all subfigures of Figure 9, the proposed schemes provide higher sum-rates than the naive schemes over the whole SNR range and under all scenarios, e.g., the strong interference regime at low BBUP processing capacity as in Figure 9b, or the low interference regime at high BBUP capacity as in Figure 9e. By comparing the subfigures of Figure 9 for a given α, we conclude that the proposed schemes become more efficient as the processing capacity of the BBUPs, and hence the allowed quantization rate, increases. In addition, employing the smallest possible cluster for a given r is more advantageous for small processing capacities. For example, for α = 0.9, while the p-clustering scheme with (t_1, t_2) = (5, 2) performs better than the other proposed schemes of larger cluster sizes over the whole SNR range for µ_BBU = 30, it outperforms the h-clustering with t = 2 only at low SNR values for µ_BBU = 60, and for µ_BBU = 120 it does not outperform the h-clustering with t = 2 or the larger p-clusterings at any SNR value. By comparing the subfigures of Figure 9 for a given µ_BBU, we observe that, for each µ_BBU value, the SNR range in which the 3-sector decoding scheme is superior or close to the proposed schemes shrinks as the channel attenuation coefficient increases. In addition, for µ_BBU = 60 and 120, the SNR range in which the h-clustering with t = 2 outperforms the I_p-clustering with (t_1, t_2) = (5, 2) and/or the I_h-clustering with t = 2 grows with the channel attenuation coefficient. We infer that the idea of isolated clustering is more advantageous in the strong interference regime. Another general conclusion we can draw from the simulation results in Figure 9 is that, if the processing capacity, and hence the quantization rate, is high enough, decomposing the network into hexagonal-type clusters achieves higher rates than parallelogram-type clusters, especially in the moderate and high SNR range, even if r_h = r_p. This is due to the geometric structure of hexagonal-type clustering, which includes more users for both the h-cluster/I_h-cluster and fewer interferers for the I_h-cluster in comparison with parallelogram clusters of the same r_h = r_p. An interesting conclusion from the finite-SNR analysis is that the interfering clusterings perform close to the proposed schemes in the finite SNR range; therefore, the interfering clusterings can also be employed at finite SNR values, which may be more convenient for practical systems.

Conclusions
In this paper, we analyzed the uplink per-user DoF of M-CRAN-based sectored cellular networks. The main contributions are the following: the paper proposes efficient ways of decomposing the network into non-interfering clusters for M-CRAN scenarios, and it characterizes the per-user DoF as a function of the fronthaul and processing capacity prelogs and the BBUP-BS ratio. The lower bound is obtained through two coding schemes based on decomposing the network into non-interfering parallelogram and hexagonal clusters, respectively. In both schemes, the BSs apply point-to-point quantization to the receive signals and send the quantization messages to the associated BBUPs over the fronthaul links for joint decoding.
Simulation results show that, for small and moderate fronthaul capacities, the gap between the lower bound and the cut-set bound decreases with the inverse of the BBUP-BS ratio. Therefore, the cut-set bound is almost achieved even for small cluster sizes in this range of fronthaul capacities. For higher fronthaul capacity prelogs, the achievability gap is not always tight but decreases with the processing capacity prelog. The finite-SNR analysis shows that the proposed schemes outperform the naive schemes over the whole SNR range and under all scenarios; they outperform the interfering clusterings over the whole SNR range in the strong interference regime when the BBUP processing capacity is scarce or moderate, and the 3-sector decoding scheme over the whole SNR range in the strong interference regime when the BBUP processing capacity is moderate or high. In the other scenarios for the interfering clustering and 3-sector decoding cases, the proposed schemes always achieve higher sum-rates except at low SNR values. In general, the results provide valuable insight into appropriate ways of clustering mobile users/sectors, emphasizing the isolation of clusters, particularly when inter-cell interference is highly detrimental.

From (15), the achievable DoF of the hexagon scheme can be written as in (A2). Proof. The first part of the proposition, i.e., (A1), is straightforward. For (A2), note that, for a given µ_BBU > 0, there is a unique t* ∈ Z+ that satisfies (A3), and the maximization term in (A2) comes from the fact that, to implement the hexagonal scheme, we choose the design parameter t that gives the maximum DoF among the minimums found over t satisfying t ≥ √(1/(3r)). From (15), we infer that the term µ_F(3t² − 1)/(9t²) is active in the minimization for t* ≥ t ≥ √(1/(3r)), since µ_BBU ≥ µ_F(3t² − 1), and that the term µ_BBU/(9t²) is active in the lower bound for t > t*, since µ_BBU ≤ µ_F(3(t + 1)² − 1). Therefore, the achievable DoF of the hexagon scheme is given by the second term of (A2) if µ_BBU ≥ µ_F(3t*² − 1). We now check all possible intervals of µ_BBU with respect to (A1) and (A2). This completes the proof of Remark 1.

Appendix B. Proof of Theorem 2
For the sake of simplicity, define Y_BBU,k as the received signal of BBUP k. We obtain the first two terms of the upper bound by choosing the cut set S = {all base stations, j = 1, …, N}, S^c = {all BBUPs, k = 1, …, K}. In that case, for any fixed BBUP-to-BS association in any given network, the total rate of all users is upper bounded accordingly, where the second inequality comes from applying (6) and (9) to the received signals of the BBUPs, which gives the first two terms by Definition 2. The third term comes from the fact that, by [35], the DoF of an M × M MIMO system is upper bounded by M.

For M ≤ µ_F ≤ 2M, the term with µ_BBU is active in both the upper and lower bounds if µ_BBU · r ≤ µ_F and µ_BBU ≤ M + µ_F(t_1 t_2 − 1), respectively, where the matching requires t_1 t_2 = 1/r. For 2M ≤ µ_F ≤ 3M, the term with µ_BBU is active in the upper bound and the parallelogram lower bound if µ_BBU · r ≤ µ_F and µ_BBU ≤ M(2t_1 + 2t_2 − 3) + µ_F(t_1 t_2 − t_1 − t_2 − 1), respectively, where the matching requires t_1 t_2 = 1/r. If 1/r ∈ Z+, there is at least one (t_1, t_2) pair that results in t_1 t_2 = 1/r; otherwise, the term with µ_BBU cannot be active in the lower bound, which imposes choosing the pair (t_1, t_2) that minimizes t_1 + t_2. The term with µ_BBU is active in the upper bound and the hexagon lower bound if µ_BBU · r ≤ µ_F and µ_BBU ≤ M(6t …), respectively, where the matching requires t = √(1/(3r)).
For 3M ≤ µ_F, the matching cases can be found by applying procedures similar to those of the 2M ≤ µ_F ≤ 3M case.

Appendix D. Proof of Theorem 3
Due to Remark 1, we carry out the achievability gap analysis only for parallelogram clustering, and only for one of the cases, the one leading to the maximum achievability gap; for the other cases, a similar procedure can be applied.
• If µ_F ≤ M, the maximum gap occurs when µ_F/3 and µ_BBU/(3t_1 t_2) are active in the upper and lower bounds, respectively. Note that this assumption imposes µ_F · r ≤ µ_BBU ≤ µ_F · t_1 · t_2, where (a) is due to max µ_BBU = µ_F · (1/r) and (b) is due to min t_1 · t_2 = 1/r by (25).
Radial and translational motions of a gas bubble in a Gaussian standing wave field

Highlights
• Coupled dynamic equations derived for a bubble in Gaussian standing waves.
• The transverse radiation force reverses sign with the variation of the driving frequency.
• Axial and transverse motions weaken with the widening of the wave front.

Introduction
A gas bubble subject to an ultrasonic wave field exhibits both radial oscillation and translational motion [1]. Unlike the radial oscillation, which is driven by the temporal variation of the wave field, the translational motion is generated by the spatial gradient of the ambient acoustic pressure [2]. Note that the radial oscillation changes the bubble's volume and thus affects the translational motion; these two types of motion are therefore not independent but coupled with each other. The movement of gas bubbles is a fundamental question in acoustic cavitation, which is widely utilized in fields such as chemical engineering [3], hydrology [4] and biomedical ultrasound [5,6]. A precise prediction of the bubble dynamics is a prerequisite for understanding the physical mechanisms of wave propagation in bubbly media as well as the acoustic cavitation effect.

There have been many valuable theoretical and experimental results concerning the radial oscillations of a single bubble excited by acoustic fields. The well-known Rayleigh-Plesset equation and Keller-Miksis equation provide good descriptions of the radial motion of a spherical bubble in incompressible and compressible fluids, respectively [7,8]. The averaged force acting on a rigid sphere in an ideal fluid, known as the acoustic radiation force, was initially studied by King [9]. Later, Yosioka and Kawasima [10] extended the study to a compressible sphere, showing that the radiation force is strengthened by the particle's compressibility. They also found that bubbles can exhibit more involved behaviors, including erratic dancing motions and zigzag trajectories. A comprehensive study of the radiation force exerted on pulsating bubbles in a stationary field, normally called the Bjerknes force, was presented by Crum [2], who used the pressure gradient of the wave field to derive the expression of the Bjerknes force [11]. It was Bleich [12] who took the lead in studying the translational motion of gas bubbles; in his work, the host fluid was assumed to be inviscid and all nonlinear effects of the bubble dynamics were neglected for simplicity. Watanabe and Kukita [1] first solved the equations of radial and translational motions simultaneously; they predicted irregular translational motions of a gas bubble and provided a classified discussion of its dynamic behaviors depending on the bubble size. Later, a modified theory taking the liquid compressibility into account was derived by Doinikov [13] using the Lagrangian formalism, which shows that any bubble can oscillate irregularly once driven by a sufficiently high pressure, regardless of its initial size. The same author also studied the nonlinear coupling between the volume pulsation, translational motion and shape modes of an oscillating bubble, in which all shape modes are considered without any limitations imposed on the natural frequencies [14,15]. Sadeghy and Shamekhi [16] first evaluated the effects of the fluid's elasticity on the bubble dynamics. More recently, further improvements have been made to the model describing the motions of ultrasound-driven gas bubbles, both theoretically and
experimentally. Cui et al. [17] investigated the effect of ethanol on the radial and translational motions of a levitated cavitation bubble. Melnikov [18] demonstrated that stochastic pulsations of the bubble radically change the form of its dynamic equations. Sugita et al. [19] experimentally extended the classical Bjerknes theory for a single bubble to an oscillating bubble cluster in a stationary acoustic field. Ma and Chen [20] numerically studied the dynamic response of a translating bubble in a strong acoustic field as well as its influence on the cavitation effect. A trajectory observation performed by Jiao et al. [21] shows that the history force exhibits different behaviors at low and high pressures. Zhang et al. [22] derived the dynamic equation of a gas bubble in a micro-cavity. The accuracies of the time-resolved and time-averaged methods were compared by Klapcsik and Hegedus [23], who found that the former is preferable for transient waves. Wang et al. [24] studied the transition mechanisms of the translational motion of bubbles caused by harmonic resonance, subharmonic resonance and chaos.

A comprehensive review of the current references indicates that the ambient acoustic field in most studies is limited to travelling or standing plane waves, in which case only the translational motion parallel to the wave vector needs to be considered. However, the acoustic waves radiated by transducers in practical applications are often beams with concentrated energy [25]. A gas bubble submerged in such an acoustic field can experience a non-zero radiation force in both the axial and transverse directions. As a typical example of beams with a finite width, Gaussian standing waves have drawn wide attention in particle manipulation applications. In view of this, this work is directed towards the analysis of the dynamic responses of a gas bubble in a Gaussian standing wave field. A coupled system of equations for the radial and translational motions is derived, followed by a numerical study of the characteristics of its dynamic behaviors. The present work can also be taken as an extension of the existing references to non-plane wave fields.

Theoretical model
Consider a single spherical gas bubble with initial radius R_0 surrounded by a viscous liquid medium. The mass density, compressional wave speed, viscosity and surface tension coefficient of the host liquid are denoted by ρ, c, μ and σ, respectively. A monochromatic Gaussian standing wave of angular frequency ω is incident upon the gas bubble; it can be regarded as the superposition of two oppositely propagating Gaussian progressive waves. As a measure of the focusing capability, the beam waist of the Gaussian wave is W_0. The gas bubble undergoes radial oscillation and translational motion in response to the external pressure field. It is assumed that the gas bubble always maintains a spherical shape without deformation. A Cartesian coordinate system (x, y, z) originates from the beam center, with the z axis coinciding with the beam axis. At any instant of time, the gas bubble has radius R and its center has coordinates (x, y, z). Fig. 1 shows the schematic diagram of the gas bubble.
For simplicity, we assume that the Gaussian standing wave satisfies the weakly focused approximation kW_0 ≫ 1. In that event, the acoustic pressure field can be simplified as in [26], where t is time, A is the pressure amplitude along the beam axis and k = ω/c is the wave number in the liquid. As shown in Eq. (1), the phase front of propagation of the Gaussian standing wave is approximately equal to that of a plane standing wave, and the origin coincides with a pressure node. It is presumed that the wavelength of the Gaussian standing wave is much greater than the bubble radius, so that the ambient pressure is not affected by the presence of the gas bubble. Another presumption is that the gas bubble is located in the plane y = 0 initially. Hereafter, only the translational motion in the x and z directions needs to be analyzed, owing to the circumferential symmetry of the acoustic field, and the acoustic pressure field can be further simplified.

Dynamic equations for radial oscillation
Based on the theoretical model given by Doinikov [13], the dynamic equation governing the radial oscillation of the gas bubble can be expressed as Eq. (3), where the overdot denotes the time derivative and the detailed expression of the pressure p_sc involves the hydrostatic pressure P_0, the saturated vapor pressure P_v and the polytropic exponent γ of the gas. Note that the left-hand side of Eq. (3) is simply the classical Rayleigh-Plesset equation describing the radial oscillation without translational motion, while the right-hand side reflects the effect of the translational motion on the radial oscillation. Using the Lagrangian formalism, Doinikov [13] first proved that this feedback term is of great importance, especially for high-intensity pressure fields. Since the gas bubble moves in both the axial and transverse directions, this term contains the x and z components of the translational velocity. It is also worth mentioning that the Rayleigh-Plesset equation only holds when the velocity of the radial oscillation is low compared with the sound speed. For large forcing amplitudes, the left-hand side of Eq. (3) must be replaced by the Keller-Miksis equation [8], with the right-hand side left untouched.

Dynamic equations for translational motion
For the translational motion of the gas bubble, a force analysis is required in both the axial and transverse directions. The forces acting on the gas bubble in the z direction include the gravitational force, the acoustic radiation force induced by the pressure gradient, namely the primary Bjerknes force F_pr,z, the buoyant force F_bu and the viscous drag force F_vis,z. Besides, considering that the density of the gas is much lower than that of the liquid, the virtual mass effect must be taken into account, which can be represented by the virtual mass force F_vir,z. A detailed analysis of each force is given as follows.

The gravitational force and the buoyant force of the gas bubble are expressed, respectively, as … and F_bu = ρVg (7), where m_b is the mass of the gas inside the bubble, g is the local gravitational acceleration and V = 4πR³/3 is the volume of the gas bubble. The primary Bjerknes force is given by the pressure-gradient expression, and the viscous drag force is defined as in [27], where v_ex,z is the z component of the particle velocity generated by the acoustic pressure field, which is calculated from Eq.
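Since the explicit expression of Eqs. (1)-(2) is not reproduced above, the following sketch encodes one assumed form consistent with the stated properties (a plane-standing-wave phase front, the origin at a pressure node, and a Gaussian transverse envelope of waist W_0); the exact envelope used in [26] may differ:

```python
import numpy as np

def gaussian_standing_wave(x, y, z, t, A, W0, k, omega):
    """Weakly focused Gaussian standing wave (kW0 >> 1).

    Assumed explicit form of the elided Eq. (1)/(2): sin(kz) puts a
    pressure node at the origin, and exp(-(x^2+y^2)/W0^2) is the
    transverse Gaussian envelope; the time dependence is cos(omega*t).
    """
    envelope = np.exp(-(x**2 + y**2) / W0**2)
    return A * envelope * np.sin(k * z) * np.cos(omega * t)
```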
(2). The virtual mass force is then introduced. Based on Newton's second law, the equation of motion in the z direction can be written as Eq. (12). Considering that the mass of the bubble is negligible compared with the virtual mass, the inertia and gravity terms can be omitted, further simplifying Eq. (12) to Eq. (13). The translational motion in the x direction is driven by the primary Bjerknes force F_pr,x, the viscous drag force F_vis,x and the virtual mass force F_vir,x. In analogy to the force analysis above, their expressions are easily obtained, where v_ex,x is the x component of the particle velocity generated by the acoustic pressure field, which can also be calculated from Eq. (2). Hence, the equation of motion in the x direction can be expressed from Newton's second law as Eq. (18). Likewise, omission of the inertia and gravity terms yields the simplified form Eq. (19). Eqs. (13) and (19) govern the translational motion of the gas bubble in the z and x directions, respectively. Inspection of Eqs. (4), (13) and (19), or Eqs. (5), (13) and (19), indicates that the radial and translational motions are coupled through the volume of the gas bubble.

Results and discussion
Numerical computations are performed based on the theoretical analysis above to investigate the dynamic behaviors of a single gas bubble in a Gaussian standing wave field. The ordinary differential equations given by Eqs. (4), (13) and (19), or Eqs. (5), (13) and (19), are solved with the fourth-order Runge-Kutta method. For the radial oscillation of the gas bubble, the time step of the numerical solution is set to 1/1000 of the acoustic period to avoid missing details of the radius variation. When studying the translational motion of the gas bubble, however, we increase the time step to the acoustic period, as thousands of acoustic cycles are to be investigated. The values of the liquid density, wave speed, surface tension coefficient, hydrostatic pressure, liquid viscosity and specific heat ratio are set to ρ = 1000 kg/m³, c = 1500 m/s, σ = 0.072 N/m, P_0 = 1.013 × 10⁵ Pa, μ = 1.0 × 10⁻³ Pa·s and γ = 1.4, respectively, so as to simulate an air bubble immersed in water at atmospheric conditions.

To verify the validity of the present theory, the radial response of a gas bubble is first studied in a plane standing wave field by setting the beam waist to infinity, with driving frequency f = 20 kHz, pressure amplitude A = 1.32 × 10⁵ Pa and initial bubble radius R_0 = 8 μm. At t = 0, the bubble is located at a distance of λ/50 from the pressure antinode without any initial velocity. Fig. 2 displays the instantaneous bubble radius versus time within the first acoustic cycle. As shown in the figure, the variation of the bubble radius can generally be divided into three stages: expansion, contraction and oscillation. The simulated plot agrees well with the result given in Ref. [20], except for some minor differences due to the neglect of the saturated vapor pressure in that work.

In all of the following computations, the gas bubble has an initial radius of R_0 = 10 μm and no initial velocity in either the radial or translational directions. The resonance frequency of the gas bubble can be obtained from linear oscillation theory [28].

Computational results of radial oscillation
The radial oscillation of the gas bubble is first examined in this section. Fig.
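As a concrete illustration of the numerical procedure, the sketch below integrates the classical Rayleigh-Plesset equation (the left-hand side of Eq. (3) only; the translational feedback term and the Keller-Miksis correction are omitted) with a fixed-step fourth-order Runge-Kutta scheme at 1000 steps per period, using the liquid parameters listed above. The vapor pressure value P_v ≈ 2.33 kPa (water at 20 °C) and the cosine antinode forcing are our assumptions; with them, the standard linear resonance formula reproduces the 342 kHz resonance frequency used below for R_0 = 10 μm:

```python
import numpy as np

# Water at atmospheric conditions, as stated in the text.
rho, c = 1000.0, 1500.0           # kg/m^3, m/s
sigma, mu = 0.072, 1.0e-3         # N/m, Pa*s
P0, Pv, gamma = 1.013e5, 2.33e3, 1.4   # Pv is an assumed value (20 C)
R0 = 10e-6                        # m

# Linear resonance frequency with surface tension; ~342 kHz for R0 = 10 um.
f_res = (1.0 / (2*np.pi*R0)) * np.sqrt(
    (3*gamma*(P0 - Pv + 2*sigma/R0) - 2*sigma/R0) / rho)

def rp_rhs(t, y, A, omega):
    """Classical Rayleigh-Plesset right-hand side (no translation term)."""
    R, Rdot = y
    p_gas = (P0 - Pv + 2*sigma/R0) * (R0/R)**(3*gamma)
    p_drive = A * np.cos(omega * t)        # assumed antinode forcing
    Rddot = ((p_gas + Pv - P0 - p_drive
              - 2*sigma/R - 4*mu*Rdot/R) / rho - 1.5*Rdot**2) / R
    return np.array([Rdot, Rddot])

def rk4(f, y0, t0, t1, n, *args):
    """Fixed-step fourth-order Runge-Kutta (n = 1000 steps per cycle)."""
    h, y, t = (t1 - t0)/n, np.array(y0, float), t0
    out = [y.copy()]
    for _ in range(n):
        k1 = f(t, y, *args);          k2 = f(t + h/2, y + h/2*k1, *args)
        k3 = f(t + h/2, y + h/2*k2, *args); k4 = f(t + h, y + h*k3, *args)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
        out.append(y.copy())
    return np.array(out)

f = 0.8 * f_res                       # the frequency of Fig. 3(b)
radii = rk4(rp_rhs, [R0, 0.0], 0.0, 30/f, 30*1000, 0.5e5, 2*np.pi*f)
```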
3 shows the instantaneous bubble radius versus time during the first thirty acoustic cycles. The bubble radius R on the vertical axis is normalized by the initial radius R_0, and the time t on the horizontal axis is normalized by the acoustic period T. Initially, the gas bubble is at z_0 = λ/4 and x_0 = 0. This setting corresponds to the first pressure antinode along the beam axis, where the acoustic pressure reaches its highest value and the axial and transverse radiation forces both vanish due to symmetry. While the buoyant force is nonzero in the z direction, the computation period is so short that the translational displacement of the bubble can be neglected. The beam waist of the Gaussian standing wave is set to W_0 = 3λ, and the driving frequencies in panels (a), (b), (c), (d) and (e) are f = 137 kHz, 274 kHz, 342 kHz, 411 kHz and 547 kHz, equal to f = 0.4f_res, 0.8f_res, f_res, 1.2f_res and 1.6f_res, respectively. For each driving frequency, the numerical computations are performed at A = 0.2 bar, 0.5 bar and 0.8 bar. After a few periods of transient response, the gas bubble oscillates around its equilibrium radius with an amplitude decaying due to viscosity. However, since the expansion and contraction processes are not exactly symmetric with respect to the equilibrium radius, the nonlinearity of the bubble oscillation is also clearly exhibited in the simulated results. As the pressure amplitude rises, the amplitude of the radial oscillation in the steady-state response also increases significantly. Generally, the oscillation amplitude grows as the driving frequency approaches the resonance frequency. However, the maximum amplitude occurs at f = 0.8f_res (Fig. 3(b)) rather than at f = f_res (Fig. 3(c)); this can also be attributed to the nonlinearity of the radial dynamic equations. Since the gas bubble is assumed to be located on the beam axis without initial velocity, we can safely conclude that the radial oscillation follows the same rules as in a plane standing wave field. As shown in Fig. 4, the radial velocity of the gas bubble at A = 0.8 bar around the resonance frequency is no more than 150 m/s, which is much smaller than the sound speed of the surrounding liquid. Hence, it is reasonable to use Eq. (3) to describe the radial oscillation of the gas bubble.

Unlike the case of plane wave incidence, the dynamic behavior of a gas bubble varies with its transverse position. The normalized bubble radius versus normalized time for the gas bubble is displayed in Fig. 5 within the first five acoustic cycles. The beam waist and the driving frequencies of the Gaussian standing wave in panels (a), (b), (c), (d) and (e) remain unchanged, while the pressure amplitude is fixed at A = 0.5 bar throughout the computation. With the initial z coordinate still satisfying z_0 = λ/4, the initial x coordinate is set to x_0 = λ, 2λ and 3λ for each panel. The radial oscillation generally exhibits the same trends as in Fig. 3. The oscillation amplitude, however, is lowered owing to the reduced acoustic pressure in the off-axis configuration. A brief quantitative estimate of the attenuation of the oscillation amplitude can also be given: for instance, the pressure amplitude at x_0 = 3λ is equal to e⁻¹ of its maximum value on the beam axis according to Eq. (2), and therefore the amplitudes of the forced radial vibration also decrease to approximately e⁻¹ of their counterparts in Fig. 3.
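The e⁻¹ attenuation quoted above follows directly from the Gaussian envelope (assuming the exp(−x²/W_0²) form sketched after the theoretical model):

```python
import math

W0 = 3.0                      # beam waist in units of the wavelength
for x0 in (1.0, 2.0, 3.0):    # off-axis distances used in Fig. 5
    print(x0, math.exp(-(x0 / W0) ** 2))
# x0 = 3*lambda with W0 = 3*lambda gives exp(-1) ~ 0.368,
# i.e., the e^-1 amplitude reduction quoted above.
```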
Computational results of translational motion
In this section, the translational motion of the gas bubble is investigated based on the dynamic equations above. Considering that the translational velocity is much lower than the radial one, the system of equations must be solved over a much longer period so as to give a more comprehensive picture of the bubble dynamics. Fig. 6 presents the translational motion of the gas bubble subject to a Gaussian standing wave field with W_0 = 3λ. The bubble is positioned at z_0 = λ/4 and x_0 = 0 without any initial velocity at t = 0; therefore, its trajectory is confined to the z axis due to symmetry. The pressure amplitude is fixed at A = 0.5 bar. The driving frequency in panel (a) is set to f = 68.4 kHz, 137 kHz and 205 kHz, corresponding to f = 0.2f_res, 0.4f_res and 0.6f_res, respectively, which are much lower than the resonance frequency. The driving frequency in panel (b) is set to f = 274 kHz, 342 kHz and 411 kHz, corresponding to f = 0.8f_res, f_res and 1.2f_res, respectively, which are near the resonance frequency. The driving frequency in panel (c) is set to f = 479 kHz, 547 kHz and 616 kHz, corresponding to f = 1.4f_res, 1.6f_res and 1.8f_res, respectively, which are much higher than the resonance frequency. According to the theory in former references [1,13], when the driving frequency is lower than the resonance frequency, bubbles move to pressure antinodes and gather there. However, the buoyant force is included in our discussion and drives the gas bubble to ascend in the +z direction, as shown in Fig. 6(a). Once it leaves the pressure antinode, the bubble is slowed down by the primary Bjerknes force and the viscous drag force, and finally it reaches a new equilibrium position above the pressure antinode. A Gaussian standing wave with a higher driving frequency induces a stronger primary Bjerknes force, and thus the translational motion is stopped much earlier. It is also noted that the displacement of the gas bubble is no more than 1/1000 of the wavelength, which is completely negligible for naked-eye observation. As the driving frequency grows to 0.8f_res in Fig. 6(b), however, the gas bubble is no longer trapped close to the pressure antinode by the primary Bjerknes force. Instead, it moves towards the next pressure node at z = 0.5λ and then executes dramatic oscillation between z = 0.25λ and z = 0.5λ with a period of around 5000 T. Former studies have shown that this erratic translational motion can be ascribed to the reversal of the primary Bjerknes force due to the change of the phase shift between the gas bubble and the incident pressure [13,29]. Unfortunately, numerical problems encountered during the calculation only allow the bubble path to be accurately followed until about 8000 acoustic cycles; therefore, our calculation at f = 0.8f_res is terminated much earlier to avoid divergence. For f = f_res or f = 1.2f_res in Fig. 6(b), the gas bubble is directed towards the pressure node and then settles there without any oscillation, in good agreement with Ref. [13]. Compared to the case of f = f_res, the weaker primary Bjerknes force delays the arrival time in the case of f = 1.2f_res. With the driving frequency further increasing to f = 1.4f_res, 1.6f_res and 1.8f_res in Fig.
6(c), the pressure node at z = 0.5λ remains the stable equilibrium position for the gas bubble, and the arrival time still correlates positively with the driving frequency.

With the beam waist, pressure amplitude and driving frequencies remaining unchanged, Fig. 7 shows the translational motion of the gas bubble initially at z_0 = 0 and x_0 = 0, corresponding to the first pressure node of the Gaussian standing wave. As can be expected from the analysis above, the gas bubble in Fig. 7(a) is directed towards the pressure antinode, and a pressure field with a higher driving frequency generates a larger translational velocity and an earlier arrival time. Driven by the buoyant force, the gas bubble excited above the resonance frequency in Fig. 7(b) and (c) moves in the +z direction and finally reaches equilibrium very close to the pressure node, similar to the simulated plots in Fig. 6(a). When the driving frequency amounts to 0.8f_res in Fig. 7(b), the gas bubble should have ascended to the pressure antinode; nevertheless, the numerical calculation is terminated much earlier in this case to avoid divergence.

Fig. 8 extends the discussion above to the off-axis configuration, where the translational motion in both the x and z directions must be taken into account. The initial z and x coordinates are set to z_0 = λ/4 and x_0 = λ, 2λ and 3λ, respectively. With the other physical parameters remaining the same as in Fig. 7, the cases of f = 137 kHz, 274 kHz, 342 kHz, 411 kHz and 547 kHz are investigated in panels (a), (b), (c), (d) and (e), corresponding to f = 0.4f_res, 0.8f_res, f_res, 1.2f_res and 1.6f_res, respectively. At f = 0.4f_res (Fig. 8(a)), the gas bubble is pulled towards the beam axis by the transverse primary Bjerknes force within 10⁴ acoustic cycles. Meanwhile, a tiny displacement in the axial direction is also observed; the bubble finally reaches equilibrium at the same position irrespective of its initial off-axis distance. In Fig. 8(c), (d) and (e), when the driving frequency is not less than the resonance frequency, the transverse primary Bjerknes force reverses its sign, pointing outwards from the beam axis. The combination of the radiation force and the viscous drag force results in a decelerating motion in the +x direction. As for the z component, the bubble undergoes nearly uniform slow linear motion, indicating that equilibrium is reached in the axial direction. It is also interesting to find that, for bubbles driven above the resonance frequency, the translational motion is not significantly affected by the initial off-axis distance; in this case, the buoyant force and the viscous drag force dominate the bubble dynamics. When the driving frequency is equal to 0.8f_res (Fig. 8(b)), the problem of divergence occurs only at x_0 = 3λ, forcing us to terminate the calculation ahead of time. Whatever the initial off-axis distance, the gas bubble quickly moves to x_0 = 2.25λ and exhibits complicated oscillation about it, resulting from the nonlinearity of the dynamic equations.

Similarly, we switch to the study of the bubble dynamics when it is located at z_0 = 0 initially, corresponding to the pressure node. As shown in Fig.
9(a), the gas bubble driven much below the resonance frequency makes for the first pressure antinode under the primary Bjerknes force. It takes the bubble a longer time to reach equilibrium for larger x_0 values, owing to the attenuation of the acoustic power. As the bubble approaches z = 0.25λ, it also moves towards the beam axis, driven by the transverse primary Bjerknes force, with the transverse velocity changing synchronously with the axial one. In Fig. 9(c), (d) and (e), when the bubble is driven above the resonance frequency, the axial primary Bjerknes force almost vanishes, so only a small perturbation of its z coordinate occurs due to the presence of the buoyant force and the viscous drag force. Note that the transverse radiation force again reverses its direction, driving the bubble outwards from the beam axis at a low speed. The time scale at f = 0.8f_res in Fig. 9(b), though much shortened due to the divergence of the calculation, is enough for us to observe the entire process of the bubble returning to the beam axis.

Effects of the beam waist on radial and translational motions
The effects of the beam waist are studied in Figs. 10 and 11. The radial response of the gas bubble is investigated in Fig. 10 for the cases W_0 = 2λ, 4λ and 6λ, with the initial coordinates of the bubble set to z_0 = λ/4, x_0 = 0 and the pressure amplitude fixed at A = 0.5 bar. Apparently, the growth of W_0 does not influence the general trend of the simulated plots but leads to an increase in the oscillation amplitude. This is not surprising, as the widening of the Gaussian standing wave corresponds to a stronger acoustic field around the off-axis sphere, and thus the radial response is intensified. Fig. 11 shows the translational motion at W_0 = 2λ, 4λ and 6λ with the bubble initially at the same position and the pressure amplitude also fixed at A = 0.5 bar. For f = 0.4f_res (Fig. 11(a)), the widening of the incident wave only affects the transverse motion of the gas bubble. As W_0 increases, a longer time is required for the bubble to be attracted towards the beam axis, owing to the reduction of the pressure gradient. For the same reason, when f = f_res, 1.2f_res and 1.6f_res (Fig. 11(c), (d) and (e)), the bubble possesses a smaller transverse velocity for a larger beam waist, whereas it is directed in the +x direction in this case. At f = 0.8f_res (Fig. 11(b)), one can observe the irregular oscillation around certain positions, though the calculation is again terminated due to divergence. Moreover, the axial motion is hardly affected by changing W_0, irrespective of the driving frequency, because the buoyant force and the viscous force are the dominant factors for the bubble.

Conclusions
This study presents a comprehensive formalism for the radial and translational responses of a gas bubble in a Gaussian standing wave field. A coupled system of equations for the radial and translational motions is derived. Compared with former studies of bubble dynamics in plane waves, the translational motion in both the axial and transverse directions must be taken into consideration, as the pressure gradient is nonzero in the transverse plane. Based on the theoretical results, numerical solutions are obtained for the radial and translational dynamic behaviors of a gas bubble. The influences of varying the initial position of the bubble and the beam waist of the Gaussian standing wave are also investigated.
Nonlinear radial oscillation is observed in the simulated plots for the gas bubble, and it can be intensified by a higher pressure amplitude and a smaller off-axis distance. A gas bubble driven much below the resonance frequency reaches equilibrium close to the pressure antinode under the action of the buoyant force, the viscous drag force and the primary Bjerknes force. However, when the driving frequency is not less than the resonance frequency, the bubble moves towards the pressure node under the primary Bjerknes force. Irregular translational oscillation occurs at f = 0.8f_res, in which case the numerical solution must be terminated ahead of time due to the divergence of the calculation. In the off-axis configuration, the bubble is pulled towards the beam axis when driven much below the resonance frequency and repelled away from the beam axis when driven above the resonance frequency. As the beam waist grows, the Gaussian standing wave widens, which weakens the radial oscillation of the gas bubble. Besides, a wider wave field induces a smaller pressure gradient and costs the bubble a longer time to reach equilibrium, whatever the driving frequency is.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1. Schematic configuration of a spherical gas bubble in a Gaussian standing wave field.
INTERNATIONAL ECONOMICS AND INTERNATIONAL RELATIONS

In this article, the integration processes between Azerbaijan, Ukraine and Georgia are considered through indicators of Azerbaijan's GDP and the trade turnover of this country with the other two. All of the considered time series are non-stationary, so there are problems of correct modeling of the corresponding time series, whose components lead to a deviation from stationarity. The publication uses an econometric cointegration methodology for modeling the relationship between the non-stationary time series. A dynamic model of the long-term equilibrium is built, allowing a qualitative forecast of the state of foreign trade integration of the three countries under consideration and an analysis of the openness of the Azerbaijani economy in the regional aspect.

The research (Kalyuzhna, 2020) carried out a retrospective analysis of Ukraine's foreign trade in goods amid deepening interstate economic contradictions. An econometric analysis of the mutual influence of the GDP of Azerbaijan and the trade turnover of this country with Ukraine was carried out in (Orudzhev & Alizade, 2020). Estimates of the parameters of the econometric model were found, after which an error-correction model with good qualitative indicators was constructed. From the calculations for lags 1-3 of that work, it can be seen that α(-2.147153; -1.755) is a correction vector; at such coordinates, the equilibrium is off the scale. However, in the case of lag (2 2), there are estimates of the cointegrating vector β(1; 13.7559; -0.148975) and the correction vector α(-0.017035; -0.146521), where the t-values (21.7), (-1.03), (-1.94) and (-20.2) are indicated in brackets under the coordinates.

The research (Kovtun & Matviienko, 2019) considers current trends in international trade in goods and features of Ukraine's foreign trade. The reorientation of the geographical scope of exports away from the CIS countries was analyzed, while the foreign trade relations of Ukraine with Azerbaijan and Georgia were not considered. The main environmental factors characteristic of the development of export-import activities of domestic industrial enterprises in modern conditions are summarized in (Baliuk, 2021; Baliuk, 2022), where priority economic strategies for the development of export-import industrial activity in Ukraine are also substantiated. The article (Dyogtev et al., 2016) analyzes the economic connections of Georgia with Russia, Turkey, Iran and Kazakhstan using indicators of trade turnover, foreign direct investment, the cross-border movement of finance, tourism and transport development; the interconnection of Georgia's trade and economic relations with Azerbaijan and Ukraine was not touched upon. The current trends in the world economy, the EU economy and the post-Soviet economies, mainly the economy of Georgia, are considered, and the main aspects of the socio-economic development of Georgia are identified, in (Silagadze, 2020).
We also note that the regional organization GUAM (Georgia, Ukraine, Azerbaijan and Moldova), an organization for democracy and economic development established in 1997, is still practically inactive in the implementation of a free trade zone between the countries and of sectoral cooperation in the fuel and energy sector and in the field of cross-border transportation, logistics and communication, which could give an incentive to free economic and trade interaction between the GUAM member states in the relevant areas.

The main results of the research. In this study, refining (Orudzhev & Alizade, 2020; Orudzhev & Alizade, 2021) in more detail, a new specification of models for the relationship between Azerbaijan's GDP and its trade turnover with Ukraine, supplemented with Georgia, is defined. Forecasting of the qualitative indicators of these countries in terms of export-import operations was carried out by the method of logarithmic approximation of actual data with reproduction by extrapolation. This study is an addition to the article (Alizade, 2022). Descriptive statistics on the logarithms of the given indicators from (www.stat.gov.az; www.geostat.ge/ka; www.ukrstat.gov.ua) are shown in Table 1.

Table 1 shows that the elements of the 2nd, 3rd and 4th columns have a slight left-sided asymmetry of the empirical curves relative to the theoretical one, while for the element of the 5th column the vertex is significantly shifted to the left. The kurtosis values for the elements of columns 2, 3 and 4 show that there is a slight peak of the empirical curve, and for the element of the 5th column this value increases approximately fourfold, which leads to a significant deviation of the empirical distribution of the residuals from the normal one.

Based on a comparative analysis with the results of (Orudzhev & Alizade, 2020; Orudzhev & Alizade, 2021) and Table 1, it can be assumed that the dependence of the logarithm of Azerbaijan's GDP on the logarithms of Azerbaijan's foreign trade turnover with Ukraine and Georgia is described by a linear regression model. As can be seen from the results in Table 2, the general formal model is the most accurate: the coefficient of determination has a rather high value of 94%. The coefficient of LN_UKR_T means that each percent of growth in the foreign trade turnover between Azerbaijan and Ukraine is followed by an increase in Azerbaijan's GDP of 0.23% per year.

The stability analysis of the model parameters relies on the cumulative sum of squares of the residuals, and the graphical description of the CUSUM test demonstrates that all parameters are dynamically stable, since the curves lie within the critical limits of the 5% region.

Let us pay attention to the correlation coefficients between the factors, presented in the correlation matrix. The correlation coefficients in Table 3 are close to 1, demonstrating an almost complete positive correlation. In other words, Azerbaijan's GDP is strongly linked to the growth of trade with its regional strategic partners Georgia and Ukraine.

Here, the correlation captures the proximity of short-term relationships between the variables and does not take into account the stationarity or non-stationarity of these indicators. Therefore, building a model based on correlation-regression analysis gives biased estimates of the model coefficients. Based on this, it is necessary to consider models based on cointegration analysis, which make it possible to analyze series with non-stationary components, both in the short-term and in the long-term periods. The construction scheme is described below.
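For readers who want to reproduce Tables 1 and 3, the descriptive statistics and the correlation matrix amount to a few pandas calls. The sketch below uses placeholder random-walk series in place of the real annual data from the cited statistical offices; only the column naming follows the text:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
years = pd.RangeIndex(1995, 2020)
# Placeholder positive series standing in for the real annual data from
# stat.gov.az, geostat.ge and ukrstat.gov.ua.
df = pd.DataFrame({
    "GDP_AZ": np.exp(np.cumsum(rng.normal(0.1, 0.05, len(years)))),
    "UKR_T":  np.exp(np.cumsum(rng.normal(0.1, 0.08, len(years)))),
    "GEO_T":  np.exp(np.cumsum(rng.normal(0.1, 0.08, len(years)))),
}, index=years)

logs = np.log(df).add_prefix("LN_")
print(logs.skew())   # column asymmetries, cf. Table 1
print(logs.kurt())   # excess kurtosis, cf. Table 1
print(logs.corr())   # near-unit pairwise correlations, cf. Table 3
```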
To test the significance of the constructed model (2), the observed and critical values of the Fisher criterion were calculated. These values are respectively equal to 117.2768 and 3.44 at a significance level of 5% with degrees of freedom ν1 = 2 and ν2 = 22. Since 117.2768 > 3.44, the model is considered significant. The significance of the regression coefficients is also confirmed using t-statistics. Autocorrelation was checked using the Durbin-Watson d-statistic. According to the table of critical values of the d-statistic for 25 observations, 2 explanatory variables and a significance level of 0.05, the values d_L = 1.21 and d_U = 1.55 divide the segment [0, 4] into five regions; the observed value is d = 0.75. Since d = 0.75 < d_L, there is autocorrelation of the residuals.

Now consider the problem of heteroscedasticity. Heteroscedasticity means that the estimates of the regression coefficients are not efficient and the variances of the distributions of the coefficient estimates increase. Here, the heteroscedasticity of the residuals is checked by the White test with the statistic nR², where n = 25 and R² is the coefficient of determination of the auxiliary regression of the squared residuals on all regressors, their squares, pairwise products and a constant. The statistic equals 5.230084, which is less than the value χ²_{0.16}(5) = 5.303272885. The corresponding p-value is greater than 0.05, so the null hypothesis that the random term is homoscedastic is not rejected.

The results of the augmented Dickey-Fuller test showed that the original series and their first differences are not stationary, while the second-order differences are stationary. The test results are shown in Table 4. The results of the above tests show that the estimates of the regression coefficients are poor; the reason for this is the non-stationarity of the studied series. One approach to a correct mathematical description of such series is the cointegration approach of Engle-Granger and Johansen. This approach can be applied to build an error correction model if the time series are integrated of the same order; in our case, all series are integrated of order 2.

For the completeness of the research, in addition to the Granger causality test, it is necessary to analyze the impulse response functions. These functions represent the median estimate, with a 90% confidence interval, of the endogenous variable for a positive shock of one standard deviation of the exogenous variable and indicate the time to return to the equilibrium trajectory. Confidence intervals were obtained by bootstrapping with 100 replications, as described in (Hall, 1992). Each variable is shocked by one unit of its standard deviation for the entire period, and the responses of the variables to these shocks in the periods t = 1, 2, …, 10 are estimated. The values of the variables in these time periods represent the corresponding impulse response functions.

Reactions of impulse response functions
It is clear from Figure 1 that the response of the variables to a deviation from the general stochastic trend is not the same. In the case of a response to shocks, the endogenous variable covers its part of the way to equilibrium.
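The diagnostic battery described above (ADF-based integration order, Durbin-Watson, White) is available in statsmodels. A minimal sketch with our own helper names; pass the log series and regressors from the earlier sketch:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_white

def integration_order(series, max_d=3, alpha=0.05):
    """Difference until the ADF test rejects a unit root; the paper
    finds order 2 for all three log series."""
    x = np.asarray(series, float)
    for d in range(max_d + 1):
        if adfuller(x, autolag="AIC")[1] < alpha:
            return d
        x = np.diff(x)
    return None

def ols_diagnostics(y, X):
    """OLS fit plus the Durbin-Watson and White statistics discussed
    above (cf. d = 0.75 and nR^2 = 5.230084 in the text)."""
    model = sm.OLS(y, sm.add_constant(X)).fit()
    dw = durbin_watson(model.resid)
    white_stat, white_p, _, _ = het_white(model.resid, model.model.exog)
    return model, dw, white_stat, white_p

# Usage: model, dw, w, wp = ols_diagnostics(logs["LN_GDP_AZ"],
#                                           logs[["LN_UKR_T", "LN_GEO_T"]])
```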
To study the influence of the exogenous variables on the endogenous variable over the next 10 years, an econometric forecast was constructed. Table 6.1 shows that, in the annual forecast of ΔLN_GDP_AZ, the largest errors fall on the shocks of ΔLN_GDP_AZ, ΔLN_UKR_T and ΔLN_GEO_T, respectively, in the amounts of 88.4% on a two-year horizon, 90% on a ten-year horizon and 9.3% on a ten-year horizon; for ΔLN_UKR_T these values are 38.87% on the two-year horizon, 75.7% on the one-year horizon and 10.56% on the six-year horizon, and for ΔLN_GEO_T they are 32.14% on the ten-year horizon, 6.31% on the four-year horizon and 90.46% on the one-year horizon. These figures indicate that the greatest uncertainty in the forecasts of ΔLN_GDP_AZ, ΔLN_UKR_T and ΔLN_GEO_T during the first five years comes from their own changes.

Based on the Engle-Granger and Johansen tests, to obtain the specification of the vector error correction model (VECM) at a significance level of 1%, all 5 options were analyzed: no intercept; the data have no deterministic trend and the cointegration relation contains an intercept and no trend; the data contain a deterministic trend and the cointegration equation contains an intercept and no trend; the data have a deterministic linear trend and the cointegration relation has both a trend and an intercept; the data contain a deterministic quadratic trend and the cointegration equation contains a trend and an intercept. The Akaike and Schwarz information criteria took low values of 1.899115 and 3.297802, respectively. All variables are cointegrated, which certifies their long-term relationship and the authenticity of the correlation. Taking into account the Akaike and Schwarz information criteria, the lag equal to 2 turned out to be the best. One cointegration relation with the degree of integration 2 and a cointegration rank equal to 1 was obtained.

In Tables 7.1 and 7.2, to determine the number of cointegration vectors in the time series, we first tested the null hypothesis that there are no cointegration vectors against the alternative hypothesis that there is one such vector. We rejected the null hypothesis, since the calculated values were greater than the critical values, from which we concluded that there is at least one cointegration vector. We then tested the hypothesis that there is one vector against the alternative hypothesis that there are two cointegration vectors. Here, the calculated criteria are less than the critical values, and we accept the null hypothesis. Thus, we concluded that there is one cointegration vector. According to (Verbeek, 2012), the system of cointegrated series integrated of order 2 can be represented in the form of a vector error correction model (VECM) with a lag equal to 2 and rank 1, which expresses a long-term equilibrium relationship of the variables and the authenticity of their correlations. Using the procedures of the Eviews 8 program, the following error correction equation was found for the second-order differences of the logarithmic values of Azerbaijan's GDP, where the standard errors of the estimates are given in parentheses under the coefficients and Δ(−i) denotes the i-th lag, i = 1, 3, of the second difference of the corresponding variable.

Above, when implementing the Granger causality test, we showed that there are feedbacks between the variables. Following similar procedures in Eviews 8, it is easy to obtain error correction models for the remaining variables. … the importance of mutual trade in goods.
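The Johansen test and the VECM specification (lag 2, cointegration rank 1, intercept in the cointegration relation) can be reproduced with statsmodels. Note that statsmodels' VECM targets an I(1) system, while the levels here are I(2), so the sketch below works with first differences; the data are placeholders:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(1)
logs = pd.DataFrame(  # placeholder I(2)-like log series, cf. earlier sketch
    np.cumsum(np.cumsum(rng.normal(0.02, 0.05, (25, 3)), axis=0), axis=0),
    columns=["LN_GDP_AZ", "LN_UKR_T", "LN_GEO_T"])

d_logs = logs.diff().dropna()   # VECM assumes I(1); the levels are I(2)

joh = coint_johansen(d_logs, det_order=0, k_ar_diff=2)
print(joh.lr1, joh.cvt)         # trace statistics vs critical values

vecm = VECM(d_logs, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(vecm.alpha, vecm.beta)    # adjustment and cointegrating vectors
print(vecm.predict(steps=10))   # 10-year-ahead forecast, cf. Table 6.1
```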
6. The proposed assessments can be used to determine the significant factors of interaction in the dynamics of trilateral trade relations and to assess the growth trend in the openness of the economies of these countries in the regional context. The analysis performed allows us to identify a number of stable and unstable features that indicate the possibility of modeling reasonable predictive scenarios for further mutually beneficial trade and economic ties within the membership of the GUAM project under the conditions of the Ukrainian-Russian and Georgian-Russian military-political and economic crises.

Table 2. Estimated multiple regression model with logarithms of variables (2).

According to model (2), a 1% growth in foreign trade turnover between Azerbaijan and Ukraine is followed by an increase in Azerbaijan's GDP of 0.23% per year, and a 1% increase in Azerbaijan's trade with Georgia leads to an increase in Azerbaijan's GDP of 1.04%. At the same time, the indicator of foreign trade turnover does not reflect multidirectional shifts in exports and imports. It should be noted that this model does not take into account changes in other important indicators that directly affect the size of Azerbaijan's GDP. Nevertheless, the results achieved during the construction of the model can be correlated, for example, with the GDP forecast according to the IMF (http://www.imf.org/external/Publs/ft/weo/2017/weodata/weoselagr.aspx).

Table 6.3. Decomposition of the variance of forecast errors ΔLN_GEO_T by shocks.
3,172.4
0001-01-01T00:00:00.000
[ "Economics" ]
Design and prototype of an augmented reality display with per-pixel mutual occlusion capability : State-of-the-art optical see-through head-mounted displays for augmented reality (AR) applications lack mutual occlusion capability, which refers to the ability to render correct light-blocking relationships when merging digital and physical objects, such that the virtual views appear ghost-like and lack a realistic appearance. In this paper, using off-the-shelf optical components, we present the design and prototype of an AR display which is capable of rendering per-pixel mutual occlusion. Our prototype utilizes a miniature organic light emitting display coupled with a liquid-crystal-on-silicon type spatial light modulator to achieve an occlusion-capable AR display offering a 30° diagonal field of view and an angular resolution of 1.24 arcminutes, with an optical performance of > 0.4 contrast over the full field at the Nyquist frequency of 24.2 cycles/degree. We experimentally demonstrate a monocular prototype achieving >100:1 dynamic range in well-lighted environments.

Introduction

Augmented Reality (AR) is viewed as a transformative technology in the digital age, enabling new ways of accessing and perceiving digital information essential to our daily life. It is widely expected that AR technology, combined with mobile computing, will become as pervasive as smartphones in all walks of life. A see-through head-mounted display (HMD) is one of the key enabling technologies for merging digital information with a physical scene in an AR system [1]. While both video see-through and optical see-through displays have their unique advantages, optical see-through HMDs (OST-HMD) tend to be preferred when it comes to real-scene resolution, viewpoint disparity, FOV and image latency [1].
Developing OST-HMDs, however, presents many technical challenges [2,3], one of which is correctly rendering mutual occlusion relationships between digital and physical objects in space. Mutual occlusion is the light-blocking behavior when intermixing virtual and real objects: an opaque virtual object should appear fully opaque and occlude a real object located behind it, and a real object should naturally occlude the view of a virtual object located behind the real one. There are two types of occlusion: that of real-scene objects occluding virtual ones, and that of virtual objects occluding the real scene. While the occlusion of a virtual object by a real object can be achieved straightforwardly, by simply not rendering the virtual object where the occluding real object sits when the location of the real object relative to the virtual scene is known, the occlusion of a real object by a virtual one presents a much more complicated problem because it requires the blocking of light in the real scene. State-of-the-art OST-HMDs typically rely upon a beam splitter (BS) to uniformly blend the light from the real scene with the virtual objects, and lack the ability to selectively block the light of the real world from reaching the eye. As a result, the digitally rendered virtual objects viewed through OST-HMDs typically appear "ghost-like," always floating "in front of" the real world. Figure 1 shows an unedited AR view captured by a camera through a typical OST-HMD lacking occlusion capability, where the virtual airplane appears not only washed out and non-opaque but also low-contrast. Creating a mutual occlusion-capable optical see-through HMD (OCOST-HMD) poses a complex challenge. In the last decade, few OCOST-HMD concepts have been proposed, with even fewer designs being prototyped [4][5][6][7][8]. The existing methods for implementing OCOST-HMDs fall into two types: direct ray blocking and per-pixel modulation. The direct ray blocking method selectively blocks the rays from the see-through scene without focusing them. It can be implemented by selectively modifying the reflective properties of physical objects or by passing the light from the real scene through a single or multiple layers of spatial light modulators (SLM) placed directly near the eye. For instance, Hua et al.
investigated the idea of creating natural occlusion of virtual objects by physical ones via a head-mounted projection display (HMPD) device, which involved the use of retroreflective screens on physical objects and thus can only be used in limited setups [4]. Tatham demonstrated the occlusion function through a transmissive SLM directly placed near the eye with no imaging optics [5]. The direct ray blocking method via an SLM would be a straightforward and adequate solution if the eye were a pinhole aperture allowing a single ray from each real-world point to reach the retina. Instead, the eye has an area aperture, which makes it practically impossible to block all the rays seen by the eye from an object without blocking the rays from other surrounding objects using a single-layer SLM. Recently, Maimone and Fuchs proposed a lensless computational multi-layer OST-HMD design which consists of a pair of stacked transmissive SLMs, a thin and transparent backlight, and a high-speed optical shutter [6]. Multiple occlusion patterns can be generated using a multi-layer computational light field method [7] so that the occlusion light field of the see-through view can be rendered properly. Although the multi-layer light field rendering method can in theory overcome some of the limitations of a single-layer ray blocking method, it is subject to several major limitations such as a significantly degraded see-through view, limited accuracy of the occlusion mask, and low light efficiency. The unfavorable results can be attributed to the lack of imaging optics, the low light efficiency of the SLMs, and most importantly the severe diffraction artifacts caused by the fine pixels of the SLMs located at a close distance to the eye pupil.

The per-pixel occlusion method, as illustrated in Fig. 2, is to form a focused image of the see-through view at a modulation plane where an SLM is inserted and renders occlusion masks to selectively block the real-world scene point by point. Based on this principle, the ELMO series of prototypes designed by Kiyokawa et al. in the early 2000s are perhaps still the most complete demonstration of OST-HMDs with occlusion capabilities [8,9], all of which were implemented using conventional lenses, prisms and mirrors. The ELMO-4 prototype contains 4 lenses, 2 prisms and 3 optical mirrors arranged in a ring structure that presents a very bulky package blocking most of the user's face. Limited by the microdisplay and SLM technologies at that time, the ELMO prototypes have fairly low resolutions for both the see-through and virtual display paths, both of which used a 1.5-inch QVGA (320x240) transmissive LCD module [8,9]. Using a transmissive LCD as an SLM becomes problematic because, when coupled with a polarizing beamsplitter (PBS), it allows minimal light (<20%) from the real scene to pass through to the user, causing the device to become ineffective in dim environments. Cakmakci et al. attempted to improve the compactness of the overall system by utilizing polarization-based optics and a reflective SLM [10]. They used a reflective liquid crystal on silicon (LCoS) in conjunction with an organic light emitting device (OLED) display to give an extended contrast ratio of 1:200. An x-cube prism was proposed for the coupling of the two optical paths to achieve a more compact form factor. However, the design failed to erect the see-through view correctly [10]. Recently, Gao et al.
proposed to use freeform optics, a two-layer folded optical architecture, along with a reflective SLM to create a compact, high-resolution, low-distortion OCOST-HMD [11,12]. With the utilization of a reflective LCoS device as the SLM, the system allowed a high luminance throughput and high optical resolution for both the virtual and see-through paths. The optical design and preliminary experiments demonstrated great potential for a very compelling form factor and high optical performance, but the design depended on the use of expensive freeform lenses and, regrettably, was not prototyped. Although freeform lenses can make it possible to create the compact, wide field-of-view (FOV) eyepiece designs needed for occlusion capability, these lenses are often expensive and challenging to design and fabricate [13][14][15][16][17].

In this paper, based on the two-layer folding optics architecture by Gao et al. [11,12], we present the design and prototype of a high-resolution, affordable OCOST-HMD system using off-the-shelf optical components. Our prototype, capable of rendering per-pixel mutual occlusion, utilizes an OLED microdisplay for the virtual display path coupled with a reflective LCoS as the SLM for the see-through path to achieve an occlusion-capable OST-HMD offering a 30-degree diagonal FOV and 1920x1080 pixel resolution, with an optical performance of greater than 20% modulation contrast over the full FOV. We experimentally demonstrate a monocular prototype achieving >100:1 dynamic range in well-lighted environments. We further experimentally compare the optical performance of an OST-HMD with and without occlusion capability.

System optical design

Figure 2 illustrates a schematic diagram of our proposed OCOST-HMD optical architecture. The design uses two folding mirrors, a roof prism and a PBS to fold the optical paths into a two-layer design, where the occlusion and the virtual display modules share the same eyepiece, giving a compact form factor and enabling per-pixel occlusion capability. The light path for the virtual display is highlighted with blue arrows, while the light path for the real-world view is shown with red arrows. An objective lens collects the light from the physical environment and forms an intermediate image at its focal plane, where an amplitude-based SLM is placed to render an occlusion mask for controlling the opaqueness of the real view. The modulated light is then folded by a PBS toward an eyepiece for viewing. The PBS acts as a combiner to merge the light paths of the modulated real view and the virtual view together so that the same eyepiece module is shared for viewing the virtual display and the modulated real-world view. The focal planes of the eyepiece and objective are optically conjugate with each other, which makes it possible to individually control the opaqueness of each pixel of the virtual and real scenes for pixel-by-pixel occlusion manipulation. A right-angle roof prism is utilized not only to fold the optical path of the real view for compactness but also to ensure an erected see-through view, which is another critical requirement for an OCOST-HMD system. The system may further integrate a depth sensor that obtains the depth map of a real-world scene in order to generate a scene-dependent occlusion mask in real time. After comparing several candidate microdisplay technologies, we chose a 0.7" Sony color OLED microdisplay for the virtual display path. The Sony OLED, having an effective area of 15.5mm by 8.72mm and a pixel size of 12μm, offers a native
resolution of 1280x720 pixels and an aspect ratio of 16:9. Ideally we would need an SLM of the same dimension, aspect ratio and pixel resolution to achieve pixel-by-pixel occlusion capability within the entire FOV of the virtual display. Limited by the availability of an SLM of the same specifications, we selected a 0.7" LCoS as the SLM for the see-through path. The LCoS, recycled from a Canon projector, offers a native resolution of 1400x1050 pixels, a pixel pitch of 10.7μm, and an aspect ratio of 4:3. A reflective SLM provides a substantial advantage in light efficiency and contrast over a light-transmitting SLM. Typically, the light efficiency of the see-through path can be as high as 45% with a reflective LCoS but about 10% or less with a transmissive SLM, while the blocking efficiency is about 0.009% for a reflective SLM and 0.02% for a transmissive SLM [11]. Consequently, by using a reflective-type SLM, twice the blocking efficiency can be achieved. In addition, diffraction artifacts resulting from the propagation of light through an aperture are negligible for an SLM with a high fill factor, while they are substantially noticeable for a transmissive LCD, which typically has a low fill factor.

Based on the choices of microdisplay and SLM, we aimed to achieve an OCOST-HMD prototype with a diagonal FOV of 30°, or 26.5° horizontally and 15° vertically, and an angular resolution of 1.24 arcmins per pixel, corresponding to a Nyquist frequency of 24.2 cycles/degree in the visual space. We also set the goal of achieving an exit pupil diameter (EPD) of 9-12mm, allowing eye rotation of about ±25° within the eye socket without causing vignetting of the optical system, and an eye clearance distance of at least 18mm. In order to develop a high-performance prototype at substantially less cost than that of the freeform optics in [11,12], we chose to carry out the entire optical design using available stock lenses, which makes the task substantially more challenging due to the very limited choices of lens shapes and glass types. These constraints need to be carefully considered during the optimization process when creating lens forms for the eyepiece and objective designs. Furthermore, an optimized design obtained via optical design software needs to be carefully matched and replaced by catalog lenses, which typically is subject to an iterative process of optimization and replacement. The design was further complicated by the choice of a reflective SLM, which requires image-space telecentricity for both the eyepiece and objective designs to achieve high contrast, light efficiency and image uniformity. The final challenge of the design is the requirement for a large back focal distance (BFD) to make enough space for combining the two optical paths via a PBS.

Figure 3 shows the lens layout of the final OCOST-HMD design. The light path for the virtual display (eyepiece) is denoted by the blue rays, while the light path for the see-through view is shown in red rays. It should be noted that the red rays for the see-through view overlap with the blue rays of the eyepiece after the PBS, and thus only the blue rays are traced to the eye pupil in Fig. 3.
The final design consists of 11 glass lenses (2 flint and 9 crown glass), 2 folding mirrors, 1 PBS, and 1 roof prism, all of which are stock components except for the meniscus, which is made of flint glass with an aperture diameter greater than 40mm. Chromatic aberrations were optimized for 465, 550, and 615nm with weights of 1, 2, and 1, respectively, according to the dominant wavelengths of the microdisplay. The objective was optimized to have the chief ray deviate less than ±0.5° from a perfect telecentric system, while a ±1° deviation was allowed for the eyepiece. After properly cropping the eyepiece lenses, we were able to achieve an eye clearance of 18mm and a 10mm EPD.

The optical performance of the virtual display and see-through paths was assessed over the full field of view in the visual space, where the spatial frequencies are characterized by the angular size in terms of cycles per degree. Figure 4 shows the polychromatic modulation transfer function (MTF) curves, evaluated with a 3-mm eye pupil, for several weighted fields of both the virtual display and the see-through paths. The virtual display path preserves roughly 40% modulation at the designed Nyquist frequency of 24.2 cycles/degree, corresponding to the 12μm pixel size of the OLED display. It can even maintain about 20% modulation at the frequency of 36 cycles/degree, allowing a potential upgrade to an OLED with an 8μm pixel size and 1920x1080 pixels. The performance of the see-through path drops slightly to an average modulation of 35% at the frequency of 25 cycles/degree and maintains about 30% modulation at the frequency of 30 cycles/degree for >90% of the entire see-through field, except that the MTF of the very far edge field drops to about 15%. Such optical performance is comparable to or even better than many custom HMD optics of similar resolution. Along with the MTF, the wavefront error plot and spot diagram for the see-through and virtual display paths were used to characterize the performance of the optical design. For the virtual display path, the dominating aberrations are coma and lateral chromatic aberration. While lateral chromatic aberration can be digitally corrected, much like distortion correction, by pre-warping the image for the red and blue color channels individually based on their lateral displacements from the reference green color channel, coma is exceptionally hard to correct. This is due to the non-pupil-forming, telecentric design of the eyepiece and the inability to move the stop position to balance off-axis aberrations. Overall, the wavefront aberration in the eyepiece is sufficiently low, being under 1 wave. The average root mean square (RMS) spot diameter across the field is 15μm. Although this appears larger than the 12μm pixel size, the difference is largely due to lateral chromatic aberration, which, as stated earlier, can be corrected. The dominating aberration in the objective lens design is axial chromatic aberration, which is typically corrected by using different glass types to balance the optical dispersion. Unfortunately, due to the limited flint glass selection of off-the-shelf lenses, this aberration is unavoidable. Nevertheless, the maximum wavefront aberration in the real image is still below 2 waves at the far field, and the average RMS spot diameter across the field is about 19μm. Compared to the 10.7μm pixel pitch of the LCoS used in the system, a 19μm RMS spot diameter in the objective design indicates that the actual occlusion mask resolution is limited by the objective lens
resolution and is lower than the pixel resolution of the SLM.

System prototype and experimental demonstration

Figure 5(a) shows the sectional view of the mechanical housing with the light path of the real scene superimposed. For the mechanical design, lens cell stacks were used and inserted into a larger housing, where they were held by set screws, allowing compensation adjustments so that the assembled system could meet the maximum MTF. Figure 5(b) shows the monocular prototype of the OCOST-HMD system built upon the optical design in Fig. 3. The prototyped system measures 82mm in height, 70mm in width, and 50mm in depth. The vertical and horizontal FOV was determined for both the virtual and real paths by viewing a ruler through the optical system. It was determined that the see-through FOV was 27.69° horizontally and 18.64° vertically, with an occlusion-capable see-through FOV of 22.62° horizontally and 17.04° vertically, while the virtual display had an FOV of 26.75° horizontally and 15.19° vertically, giving a measured full diagonal FOV of 30.58°. Due to the slightly mismatched aspect ratio between the OLED and LCoS, we anticipated that the LCoS would not be able to occlude the real scene over the same FOV as the virtual display in the horizontal direction.

For the purpose of qualitatively demonstrating the occlusion capability of the OCOST-HMD prototype, we created a real-world scene composed of a mixture of laboratory objects against a well-illuminated white background wall (~300-500 cd/m²), while the virtual 3D scene was a simple image of a teapot. Figures 6(a) through 6(f) show a set of images captured with a digital camera placed at the exit pupil of the eyepiece. The camera lens has a focal length of 16mm with its aperture set at about 3mm to match the F/# setting equivalent to that of human eyes under typical lighting conditions. Figure 6(a) is the view of the natural background scene only, captured through the occlusion module when the SLM is turned on for light pass-through without a modulation mask applied and with the OLED microdisplay turned off. Several different spatial frequencies and object depths were portrayed in the background scene to display image quality and depth cues. Figure 6(b) is the view of the virtual scene captured through the eyepiece module when the real-world view was completely blocked by the SLM.

Fig. 6. Experimental demonstration of mutual occlusion capability in our OCOST-HMD prototype with photographs captured with a digital camera placed at the exit pupil of the system: (a) view of a natural background scene through the occlusion module for light pass-through with the SLM turned on; (b) view of the virtual scene through the eyepiece with the see-through path being blocked by the SLM; (c) augmented view of the natural and virtual scenes without occlusion capability enabled; (d) view of the natural scene with an occlusion mask rendered on the SLM; (e) augmented view with occlusion capability enabled where the virtual teapot is inserted in front of the background scene; (f) augmented view with occlusion capability enabled where the virtual teapot is inserted between two real objects for mutual occlusion demonstration.
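The paper does not publish its mask-generation code; purely as an illustration of the depth-based logic behind Figs. 6(d)-6(f), the sketch below (all names, shapes and values hypothetical) marks an SLM pixel opaque wherever the virtual object is nearer than the registered real-scene depth.

```python
# Illustrative only: per-pixel occlusion mask from hypothetical depth maps.
import numpy as np

def occlusion_mask(virtual_alpha, virtual_depth, real_depth):
    # 0 = SLM blocks real-scene light, 1 = SLM passes it through
    occluded = (virtual_alpha > 0) & (virtual_depth < real_depth)
    return np.where(occluded, 0.0, 1.0)

alpha = np.zeros((720, 1280)); alpha[300:420, 500:700] = 1.0   # teapot footprint
vdepth = np.full((720, 1280), 1.5)                             # virtual depth, metres
rdepth = np.full((720, 1280), 3.0); rdepth[:, 640:] = 1.0      # a can closer than 1.5 m
mask = occlusion_mask(alpha, vdepth, rdepth)  # can's region keeps passing real light
```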
Figure 6(c) shows the augmented view of the real-world and virtual scenes without the occlusion capability enabled (i.e., no modulation mask was applied to the SLM), obtained by simply turning on the OLED microdisplay. Due to the bright environment, the teapot looks washed out without a mask occluding the see-through path. Not only does the teapot appear unrealistic and ghost-like, but it is also spatially unclear where the teapot sits in the image. Clearly, the virtual and real objects are mixed at very low contrast, which is the expected effect obtained through a typical OST-HMD without occlusion capability. Figure 6(d) shows the view of the real-world scene when the occlusion mask was displayed on the SLM and no virtual content was shown on the OLED display. Apparently, the mask could effectively block the corresponding portion of the see-through view. Figure 6(e) is a view captured with the mask on the SLM and the virtual scene displayed on the OLED display. The result clearly demonstrates improved contrast and quality for the virtual view. We can observe that a realistic virtual image with obvious depth cues is now present. When virtual objects occlude the real scene, viewers can seamlessly transfer from AR to VR environments. To demonstrate the full capability and correct depth perception the occlusion display can render, Fig. 6(f) shows the view captured with the see-through path, where the virtual teapot is inserted between two real objects, demonstrating the mutual occlusion capability of the system. In this case, knowing the relative location of the can which is meant to occlude part of the teapot, we removed from the teapot rendering the pixels that correspond to the projection of the occluding can on the virtual display. The significance of the result is that correct occlusion relationships can be created and used to give an unparalleled sense of depth to a virtual image in an OST-HMD. By preserving the dynamic range of the virtual scene in bright environments, our OCOST-HMD system using stock lenses achieved an optical performance significantly higher than that of non-occlusion-capable HMD designs.

Optical performance test

To further quantify the optical performance of the prototype system, we started by characterizing the MTF performance of the virtual and real light paths through the prototype. A high-performance camera, consisting of a nearly diffraction-limited 16mm camera lens by Edmund Optics and a 1/3" Point Grey image sensor with a 3.75μm pixel pitch, was placed at the exit pupil of the system. It offers an angular resolution of about 0.8 arcminutes per pixel, significantly higher than the anticipated performance of the prototype. Therefore, it is assumed that no loss of MTF performance was caused by the camera. The camera then captures images of a slanted-edge target, which is either displayed by the microdisplay or a printed target placed in the see-through view. To provide a separable quantification of the performance of the virtual and see-through paths, the virtual image of a slanted edge was taken while the see-through scene was completely blocked by the SLM. Similarly, the see-through image of the target was taken with the microdisplay turned off. The captured slanted-edge images were analyzed using Imatest software to obtain the MTF of the corresponding light paths.

Fig. 7. Measured MTF performance of the OCOST-HMD prototype for the on-axis field of the virtual display and see-through view, as well as the camera used for measurement.
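As a quick arithmetic cross-check, hedged and using only the numbers quoted in the text, the FOV and Nyquist figures follow directly from the 1.24 arcmin per-pixel resolution and the 1280x720 OLED format:

```python
# Small-angle arithmetic behind the quoted specifications.
import math

arcmin_per_pixel = 1.24
h_pixels, v_pixels = 1280, 720

h_fov = h_pixels * arcmin_per_pixel / 60.0     # ~26.5 degrees horizontal
v_fov = v_pixels * arcmin_per_pixel / 60.0     # ~14.9 degrees vertical
diag_fov = math.hypot(h_fov, v_fov)            # ~30 degrees diagonal (small-angle approx.)

nyquist_cpd = 60.0 / (2.0 * arcmin_per_pixel)  # one cycle spans two pixels: 24.2 cyc/deg
print(h_fov, v_fov, diag_fov, nyquist_cpd)
```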
Figure 7 shows the measured on-axis MTF performance of both the virtual and real paths, along with the MTF of the camera itself without the system for comparison; the measurements match closely with the nominal performance shown in Fig. 4. Due to the magnification difference between the pixel pitch of the camera sensor and those of the microdisplay and SLM, the horizontal axis of the MTF measurement by Imatest was scaled by the pixel magnification difference between the camera and display and then converted to spatial frequency in the visual space in terms of cycles/degree by computing the angular size of a spatial feature, making it directly comparable with the plots in Fig. 4. The prototyped design was able to achieve a contrast greater than 40% at the virtual display's Nyquist frequency of 24.2 cycles/degree, with similar performance for the see-through path. We then directly measured the spatial and angular resolutions of the see-through path using a printed USAF 1951 resolution target. The target was set 60cm away from the exit pupil and the same camera was used to capture a see-through image of the target to directly determine the smallest resolvable group. A contrast ratio above 0.1 was determined to be resolvable. The resolvable spatial frequency was determined to be at Group 2, Element 5 for both horizontal and vertical lines, corresponding to 6.35 cycles/mm. At a distance of 60cm, this element gives an angular resolution of 66.49 cycles/degree, indicating that the resolvability of the see-through path through the occlusion module is nearly intact to a human viewer.

We further measured the image contrast between the virtual display and the real-world scene as a function of the real-world scene brightness for different spatial frequencies. A grayscale solid image, ranging from black to white in 10 linear steps, was displayed on an LCD monitor to create a controlled background scene with varying luminance from 0 to 350 cd/m². The monitor was placed roughly 10cm in front of the OCOST-HMD system to simulate a range of real-scene brightnesses. A sinusoidal grating pattern with a spatial frequency ranging from 0.7 to 24.2 cycles/degree was displayed on the OLED microdisplay (virtual path) to evaluate the effect of scene brightness on the image contrast of the virtual scene at different spatial frequencies. The fall-off in contrast of the virtual scene was then plotted and compared with occlusion enabled (SLM blocking see-through light) and without occlusion (SLM passing see-through light). Figures 8(a) and 8(b) show the captured images of a 12 cycles/degree spatially varying virtual image superimposed on a background image of full brightness with and without occlusion, respectively. Without occlusion, the virtual target was nearly washed out completely with a background as bright as 350 cd/m². Figures 9(a) and 9(b) plot the contrast of the virtual object with the see-through path un-occluded and occluded, respectively. We can observe that the contrast of the virtual object without occlusion quickly deteriorates to zero for well-lit environment luminances above 200 cd/m², while the contrast of the virtual target with occlusion of the real scene is nearly constant over increasing brightness. We further measured that the obtainable contrast ratio of the occlusion system is greater than 100:1. The contrast ratio of the occlusion-capable display was obtained by measuring a collimated depolarized light source through the system with full occlusion enabled and disabled.
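A minimal sketch, with hypothetical luminance values, of the two quantities just reported: the Michelson contrast of a displayed grating and the occlusion contrast ratio taken from the fully transmitting and fully blocking SLM states.

```python
import numpy as np

def michelson_contrast(profile):
    # Contrast of a sinusoidal grating from its peak/trough luminance
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

profile = 100 + 40 * np.sin(np.linspace(0, 8 * np.pi, 512))  # synthetic grating
print(michelson_contrast(profile))                           # = 0.4

# Occlusion contrast ratio: luminance with the SLM fully passing divided
# by luminance with full occlusion (>100:1 was measured for the prototype)
l_pass, l_block = 350.0, 3.0    # hypothetical cd/m^2 readings
print(l_pass / l_block)
```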
Conclusion

This paper presents a novel design and implementation of an occlusion-capable optical see-through head-mounted display system using off-the-shelf optical components. A comprehensive description of the design and the monocular prototype was included, and the performance of the prototype was analyzed and evaluated. The system offers a 30° diagonal FOV and an angular resolution of 1.24 arcmins, with an optical performance of >0.4 contrast over the full FOV at the Nyquist frequency of the display. By using the combination of a reflective-type SLM and an OLED display, we demonstrated a contrast ratio greater than 100:1 for the occlusion module. We also demonstrated that our prototype can be used in bright environments without loss of contrast in the virtual image. This study demonstrates that an OCOST-HMD system can achieve high optical performance and a compact form factor for bright environments while using off-the-shelf components.

Disclaimer

Dr. Hong Hua has a disclosed financial interest in Magic Leap Inc. The terms of this arrangement have been properly disclosed to The University of Arizona and reviewed by the Institutional Review Committee in accordance with its conflict of interest policies.

Fig. 1. Superimposing a virtual airplane in a well-lit real-world environment: AR view captured through a typical OST-HMD without occlusion capability.
Fig. 2. Schematic diagram of the proposed OCOST-HMD design based on the two-layer folded architecture.
Fig. 8. Sample images of a grating target of 12 cycles/degree displayed by the virtual display superimposed onto a bright background of 350 cd/m² (a) with occlusion enabled to block the see-through light and (b) without occlusion.
Fig. 9. Image contrast degradation of the virtual target at different spatial frequencies as a function of background scene brightness for (a) occlusion-disabled and (b) occlusion-enabled displays.
6,519.8
2017-11-27T00:00:00.000
[ "Computer Science", "Engineering" ]
Artificial Intelligence Techniques for Bankruptcy Prediction of Tunisian Companies: An Application of Machine Learning and Deep Learning-Based Models : The present paper aims to compare the predictive performance of five models, namely Linear Discriminant Analysis (LDA), Logistic Regression (LR), Decision Trees (DT), Support Vector Machine (SVM) and Random Forest (RF), to forecast the bankruptcy of Tunisian companies. A Deep Neural Network (DNN) model is also applied to conduct a prediction performance comparison with the other statistical and machine learning algorithms. The data used for this empirical investigation cover 25 financial ratios for a large sample of 732 Tunisian companies from 2011–2017. To interpret the prediction results, three performance measures have been employed: the accuracy percentage, the F1 score, and the Area Under Curve (AUC). In conclusion, DNN shows higher accuracy in predicting bankruptcy compared to the other conventional models, whereas the random forest performs better than the other machine learning and statistical methods.

Introduction

Predicting bankruptcy has always been of great importance and a huge challenge for banks and lending institutions. Therefore, financial analysts and credit experts look for the best techniques that can help them in decision making. For a long time, traditional approaches were widely used for bankruptcy prediction. These techniques are based on financial ratio analysis, statistical models, and expert judgment. However, these models have limitations in predicting bankruptcy accurately (Hamdi 2012; Altman et al. 1994; Hamdi and Mestiri 2014).

Over recent years, several research studies have focused on bankruptcy forecasting using artificial intelligence and machine learning models. The research paper of Ravi Kumar and Ravi (2007) summarizes existing research on bankruptcy prediction using statistical and intelligent techniques during 1968-2005. With the same objective, Gergely (2015) also presented a rich bibliographic review, summarizing the short evolution of bankruptcy prediction, presenting the main critiques made of the modeling process for bankruptcy prediction, and outlining avenues of future research recommended in these studies. More recently, a systematic literature review of bankruptcy prediction was presented by Clement (2020), conducted on papers published between 2016 and 2020. In the same context, Kuizinienė et al. (2022) present another systematic review covering 232 research studies spanning from 2017 to February 2022 that use artificial intelligence techniques to identify financial distress.

A more advanced model is applied in this study, specifically the concept of deep learning. For more details about deep learning approaches, refer to the studies of Deng and Yu (2014) and LeCun et al. (2015). Deep learning approaches have been extensively employed in the fields of computer vision (Kamruzzaman and Alruwaili 2022), speech recognition (Roy et al. 2021), natural language processing (Xie et al. 2018), and medical image analysis (Suganyadevi et al. 2022). However, few studies have focused on the use of deep learning in finance (Qu et al. 2019).

This study is organized as follows: Section 2 provides a pertinent literature review related to bankruptcy prediction. Section 3 presents the different statistical and artificial intelligence techniques applied in this work. The data used are identified in Section 4.
Section 5 is devoted to the empirical investigation to predict the bankruptcy of Tunisian companies. Finally, the conclusion of this research study is presented in Section 6.

Related Literature

In past decades, the discriminant approach (Beaver 1966; Altman 1968; Deakin 1972) and the logistic regression method (Ohlson 1980; Pang 2006) were the two well-known and most popular statistical methods for predicting corporate bankruptcy. More recently, Mestiri and Hamdi (2013) used logistic regression with random effects to predict the credit risk of Tunisian banks. For bankruptcy prediction, several more developed methods have been employed. Some authors apply the decision trees method (Aoki and Hosonuma 2004; Zibanezhad et al. 2011; Begović and Bonić 2020), while others utilize various machine learning techniques such as genetic algorithms (Shin and Lee 2002; Kim and Han 2003; Davalos et al. 2014), support vector machines (Shin et al. 2005; Härdle et al. 2005; Dellepiane et al. 2015) and random forests (Joshi et al. 2018; Ptak-Chmielewska and Matuszyk 2020; Gurnani et al. 2021). Recently, several comparative analyses of machine learning models have been carried out to predict bankruptcy (Narvekar and Guha 2021; Park et al. 2021; Bragoli et al. 2022; Máté et al. 2023; Martono and Ohwada 2023).

As a matter of fact, with the spread of artificial intelligence modeling algorithms into diverse domains since the 1990s, artificial neural networks were the most famous and widely used machine learning tool to predict financial distress (Odom and Sharda 1990; Atiya 2001; Anandarajan et al. 2004; Hamdi 2012; Aydin et al. 2022). However, despite the good forecasting results observed by applying this tool, deep learning models are the most applied today. This comes down to the ability of the deep learning approach to train neural networks with a significant number of hidden layers while overcoming limitations such as the vanishing gradient, the overfitting problem and the computational load (Kim 2017).

Until now, few works have focused on applying deep learning models to predict bankruptcy. Addo et al. (2018) used seven methods (LR, RF, a boosting approach and 4 deep learning models) to predict loan default probability. Based on the AUC and RMSE performance criteria, they concluded that the gradient boosting model outperforms the other models in solving the binary classification problem. In another study, Hosaka (2019) proposed a convolutional neural network to forecast the bankruptcy of Japanese firms. This model is specifically effective for image recognition, so the author converted the financial ratios into images in order to train and test the network. The prediction performance results showed higher performance with the use of the deep neural network compared to the other employed tools.

For the same purpose, Noviantoro and Huang (2021) used machine learning as well as deep learning approaches to predict the bankruptcy of Taiwanese companies between 1999 and 2009. They compared the prediction performance of decision tree, random forest, the k-nearest neighbour algorithm, support vector machine, artificial neural network, Naïve Bayes, logistic regression, rule induction and deep neural network. To evaluate the classification performance of these models, they computed the accuracy rate, F score and AUC of each technique. They found that random forest demonstrated the highest accuracy and AUC, as well as the highest F score, followed by the deep learning approach. Very recently, Shetty et al.
(2022) utilized a deep neural network, an extreme gradient boosted tree and a support vector machine in order to predict the bankruptcy of 3728 Belgian firms for the period from 2002 to 2012. The authors concluded that the use of these different techniques yields roughly the same bankruptcy prediction accuracy rate of approximately 82-83%. Elhoseny et al. (2022) applied an adaptive whale optimization algorithm combined with deep learning (AWOA-DL) to predict bankruptcy. They evaluated the ability of the proposed new approach to predict the failure of any company compared to logistic regression, the RBF network, teaching-learning-based optimization-DL (TLBO-DL) and the deep neural network. The empirical results show that the new deep learning-based approach (AWOA-DL) allows better predictions. More recently, Ben Jabeur and Serret (2023) proposed Fuzzy Convolutional Neural Networks (FCNN) to predict corporate financial distress. They used eight evaluation measures in order to compare the performance of the newly adopted method to other traditional and machine learning techniques. They found that the combined new approach outperforms traditional methods. In another study, Noh (2023) tested the accuracy performance of Long Short-Term Memory (LSTM), Logistic Regression (LR), K-Nearest Neighbour (k-NN), Decision Tree (DT), and Random Forest (RF) models for corporate bankruptcy prediction. On the basis of five performance measures, the author concluded that the proposed technique can enhance prediction accuracy using a small sample of an unbalanced financial dataset. Table 1 provides a literature review summary of the main research studies that apply deep learning to predict bankruptcy.

Statistical, Machine Learning and Deep Learning Techniques

3.1. Linear Discriminant Analysis (LDA)

Ronald Fisher (1933) pioneered work on discriminant analysis. In his work, he developed a statistical technique for default prediction by developing a linear combination of quantitative predictor variables. The output of LDA is a score that classifies data observations between the good and bad classes:

Z = a1 X1 + a2 X2 + … + ap Xp,

where the ai are the weights associated with the quantitative input variables Xi. The study of Altman (1968) is considered the reference work that uses LDA to classify default and healthy companies based on five financial ratios.

Logistic Regression (LR)

LR is a statistical method used for binary classification tasks (e.g., 0 or 1, bad or good, healthy or default, etc.). Following Ohlson (1980), the outcome of the LR model can be written as:

P(y = 1|X) = 1 / (1 + e^(−z)),

where P(y = 1|X) is the probability of y being 1, given the input variables X, and z is a linear combination of the inputs:

z = a0 + a1 X1 + a2 X2 + … + ap Xp,

where a0 is the intercept term, a1, a2, …, ap are the weights, and X1, X2, …, Xp are the inputs.
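A hedged illustration of these two baselines using scikit-learn; `X` and `y` below are synthetic stand-ins for the financial ratios and the bankruptcy label, not the authors' data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # five toy ratios
y = (X[:, 0] - X[:, 1] + rng.normal(size=200) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)     # Fisher's linear score Z
lr = LogisticRegression().fit(X, y)              # P(y=1|X) = 1/(1+exp(-z))
print(lda.coef_)                                 # the weights a_i of the score
print(lr.intercept_, lr.coef_)                   # a_0 and a_1..a_p
```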
Decision Trees (DT)

DTs proceed by recursively partitioning the data into subsets based on the values of the input variables, with each partition represented by a branch in the tree (Quinlan 1986). DTs aim at training a sequence of binary decisions that can be utilized to forecast the value of the output for a new observation. In the tree, each decision node corresponds to a test on the value of one of the input variables, and the branches correspond to the possible outcomes of the test. The leaves of the tree denote the predicted values of the output variable for each combination of input values. At each step, the algorithm identifies the input variable that provides the best split of the data into two subsets which are as homogeneous as possible with respect to the output variable. The quality of a split is typically measured using information gain or Gini impurity, which quantifies the reduction in uncertainty about the output variable achieved by the split.

Decision trees are typically not formulated in terms of mathematical equations, but rather as a sequence of logical rules that describe how the input variables are used to predict the output variable. However, the splitting criterion used to select the best split at each decision node can be expressed mathematically. Suppose we have a dataset with n observations and m input variables, denoted by X1, X2, …, Xp, and a binary output variable y that takes values in {0, 1}. Let S be a subset of the data at a particular decision node, and let pi be the proportion of observations in S that belong to class i. The Gini impurity of S is calculated as:

G(S) = 1 − Σi pi²,

which measures the probability of misclassifying an observation in S if it is randomly assigned to a class according to the class proportions of S (Gelfand et al. 1991). A small value of G(S) indicates that the observations in S are well separated by the input variables.

To split the data at a decision node, all possible splits of each input variable into two subsets are considered, and the split that minimizes the weighted sum of the Gini impurities of the resulting subsets is chosen. The weighted sum is given by:

∆G = (|S1|/|S|) G(S1) + (|S2|/|S|) G(S2),

where S1 and S2 are the subsets of S resulting from the split, and |S1| and |S2| are their respective sizes. The split with the smallest value of ∆G is chosen as the best split. The decision tree algorithm proceeds recursively, splitting the data at each decision node based on the best split, until a stopping criterion is met, such as reaching a maximum depth or a minimum number of observations at a leaf node (a short numerical sketch of this criterion is given after the SVM overview below).

Support Vector Machine (SVM)

SVM is a supervised learning model used for classification, regression, and outlier detection, developed by Vapnik and Vapnik (1998). The basic idea of this technique is to determine the best separating hyperplane between two classes in a given dataset. The mathematical formulation of SVM is divided into two parts: the optimization problem and the decision function (Hearst et al. 1998).
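As flagged at the end of the decision-tree subsection, a direct numerical rendering of the Gini formulas above; `labels` stands for the class labels of the observations reaching a node.

```python
import numpy as np

def gini(labels):
    # G(S) = 1 - sum_i p_i^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def split_impurity(left, right):
    # Weighted impurity of a candidate split: the quantity minimized at each node
    n = len(left) + len(right)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

print(gini([0, 0, 1, 1]))               # 0.5, a maximally mixed node
print(split_impurity([0, 0], [1, 1]))   # 0.0, a perfect split
```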
Given a training set (xi, yi), where xi is the i-th input vector and yi ∈ {−1, +1} is the corresponding output, SVM seeks the best separating hyperplane, defined by:

w · x + b = 0,

where w is the weight vector, b is the bias term, and x is the input vector. The SVM algorithm aims to determine the optimal w and b that maximize the margin between the two classes. The margin is the distance between the hyperplane and the nearest data point from either class. The SVM optimization problem can then be formulated as:

min (1/2)||w||² + C Σi ξi, subject to yi(w · xi + b) ≥ 1 − ξi and ξi ≥ 0 for all i,

where ||w||² is the squared L2-norm of the weight vector, C is a hyperparameter that controls the trade-off between maximizing the margin and minimizing the classification error, the ξi are slack variables that allow for some misclassifications, and the constraints enforce that all data points lie on the correct side of the hyperplane with a margin of at least 1 − ξi.

The optimization problem can be solved using convex optimization methods, for example quadratic programming. Once the optimization problem is solved, the decision function can be defined as:

f(x) = sign(w · x + b),

where sign is the sign function that returns +1 or −1 depending on the sign of the argument. The decision function takes an input vector x and returns its predicted class label based on whether the output of the hyperplane is positive or negative. For more details about the optimization process, refer to (Chang and Lin 2011; Cristianini and Shawe-Taylor 2000; Gunn 1998). In summary, SVM finds the best separating hyperplane by solving an optimization problem that maximizes the margin between the two classes, subject to constraints that ensure all data points are correctly classified with a margin of at least 1 − ξi. The decision function then predicts the class label of new data points based on the output of the hyperplane.

Random Forests (RF)

RF is an ensemble learning algorithm, developed by Breiman (2001), which combines multiple decision trees to make predictions. The algorithm is called "random" because it uses random subsets of the features and random samples of the data to build the individual decision trees. The data is split into training and testing sets; the training set is used to build the model, and the testing set is used to evaluate its performance. At each node of a decision tree, the algorithm selects a random subset of the features to consider when making a split. This helps to reduce overfitting and increases the diversity of the individual decision trees. A decision tree is built using the selected features and a subset of the training data. The tree is grown until it reaches a pre-defined depth or until all the data in a node belong to the same class. Suppose we have a dataset with n observations and p features, and let X be the matrix of predictor variables and Y be the vector of target variables.
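Before completing the random-forest construction, a minimal scikit-learn sketch of the soft-margin SVM just formulated; the toy data, linear kernel and C value are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)         # labels in {-1, +1}

clf = SVC(kernel="linear", C=1.0).fit(X, y)        # solves min 1/2||w||^2 + C*sum(xi)
w, b = clf.coef_[0], clf.intercept_[0]             # hyperplane w.x + b = 0
print(np.sign(X[:5] @ w + b), clf.predict(X[:5]))  # sign decision function agrees
```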
To build an RF model, we start by creating multiple decision trees using bootstrap samples of the real data. This means that we randomly sample n observations from the dataset with replacement to create a new dataset, and this process is repeated k times to create k bootstrap samples. For each bootstrap sample, we then create a decision tree using random subsets of the p features. At each node of the tree, we select the optimal feature and threshold value to divide the data based on a criterion such as information gain or Gini impurity. We repeat the mentioned steps k times to create k decision trees. To make a prediction for a new observation, we pass it through each of the k decision trees and thereby obtain k predictions (a compact code sketch of this procedure follows the confusion matrix below). For more details about the technical analysis of random forests, see Biau (2012).

Deep Neural Network (DNN)

DNN is an enhanced version of the conventional artificial neural network with at least two hidden layers (Schmidhuber 2015). Figure 1 illustrates the standard architecture of a deep neural network. To fully understand how a DNN works, a thorough knowledge of the basics of artificial neural networks is necessary; for more information, readers can look at the studies of Walczak and Cerpa (2003) and Zou et al. (2008). According to Addo et al. (2018), the DNN output is computed by applying, layer by layer,

X(k+1) = f(Wk Xk), k = 1, …, L,

where Wk is the weight matrix of layer k, Xk (k = 1, …, L) is the sequence of real values (called events) propagated during an epoch, and f is the activation function.

Data

A series of financial ratios was calculated using the balance sheets and income statements of 732 firms from different sectors of activity for the period 2011-2017. A total of 4925 credit files, provided by a private Tunisian bank, constitute the database used in this empirical study. Table 2 presents the input ratios. In our research study, the same financial ratios considered in previous works (Hamdi 2012; Mestiri and Hamdi 2013; Hamdi and Mestiri 2014) are used, which demonstrated high accuracy in predicting the bankruptcy of Tunisian firms. We excluded only one non-significant ratio (Raw stock/Total assets) from our empirical investigation. The estimated output (Y) takes binary values, distinguishing healthy from bankrupt companies. Following this classification criterion, the out-of-sample test is composed of 488 healthy companies and 244 bankrupt companies.

Predictive Performance Measures

Several criteria can be utilized to compare and evaluate the predictive ability of the employed techniques, including the accuracy rate, the F1 score and the AUC.

Accuracy Rate

The accuracy rate is the most famous performance metric, deduced from the confusion matrix (see Table 3) and calculated as:

Accuracy = (TP + TN) / (TP + TN + FP + FN).

Table 3. Confusion matrix.
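Returning to the random-forest procedure described before the performance measures, a compact scikit-learn rendering of its bootstrap-and-random-subspace steps; every hyperparameter here is illustrative, and the 25 features only mirror the number of ratios used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=25, random_state=0)
rf = RandomForestClassifier(
    n_estimators=100,      # k bootstrapped trees
    max_features="sqrt",   # random feature subset considered at each split
    criterion="gini",      # the impurity measure defined in the DT subsection
    random_state=0,
).fit(X, y)
print(rf.predict_proba(X[:3]))  # averaged votes of the k trees
```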
F1 Score

The F1 score is also computed from the confusion matrix, as F1 = 2TP / (2TP + FP + FN), the harmonic mean of precision and recall. The value of the F1 score varies between 0 and 1, with 1 being the best possible score. A model with a high F1 score can correctly identify positive and negative cases, meaning that the model has both high precision and high recall.

AUC

The Area Under Curve (AUC) is a synthetic indicator derived from the ROC curve. This curve is a graphical indicator utilized to assess the model's forecasting accuracy (Pepe 2000; Vuk and Curk 2006). Specificity and sensitivity are the two relevant indicators on which the ROC curve is based (see Zweig and Campbell 1993 and Mestiri and Hamdi 2013 for further details), with sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). The curve plots the 1 − specificity rate on the x axis and sensitivity on the y axis. Moreover, the AUC measure reflects the quality of the model's classification between healthy and defaulting firms. In the ideal case, AUC is equal to 1, i.e., the model makes it possible to completely separate all the positives from the negatives, without false positives or false negatives.

Results & Discussion

According to Table 4, the deep neural network significantly outperforms the other techniques. DNN shows the highest accuracy rate with 93.6%, versus 88.2% for RF and 85.8% for LR. The lowest prediction accuracy was obtained with DT (74.3%). With the same objective of assessing the predictive ability of the proposed algorithms, an F1 score equal to 0.964 proves DNN's ability to identify healthy and bankrupt companies with great precision. Since 1 is the best possible F1 score, DNN reaches the highest score, while the F1 scores were 0.933, 0.922, 0.910, 0.890 and 0.838 for RF, LR, SVM, LDA and DT, respectively.

Another graphical indicator, the ROC curve, was also used to evaluate the classification quality of the models under study (see Figure 2). The AUC measure is deduced from this curve. A model with an AUC value near unity shows a high quality of classification between healthy and defaulting firms. Based on Table 4, the AUC of DNN reaches 0.888. In second place, RF was found with an AUC equal to 0.815. The LR and LDA models present the worst classification results, with AUC values of 0.633 and 0.574, respectively, in the testing sample. Similar conclusions were provided by Hosaka (2019), whose findings indicate that the convolutional neural network has better prediction performance than statistical and conventional machine learning methods. Furthermore, the work of Efron (1975) proved the robustness of the LR model compared to the LDA. Barboza et al.
(2017) obtained similar results in predicting the bankruptcy of North American firms. Their empirical findings indicate that RF is the most accurate prediction model compared to LR and LDA: RF reaches 87% accuracy, whereas LR reaches 69% and LDA 50%.

As a final conclusion, DNN outperforms the traditional statistical models and the conventional machine learning techniques in forecasting bankruptcy. In second place, RF has a significantly higher prediction accuracy than the other employed techniques. Based on our empirical investigation, the DNN can be considered the best technique to detect a company's financial distress and can therefore help in making managerial decisions.

In our empirical study, we used 20% of the sample (985 firms) as a test data set in order to check the prediction accuracy and classifier quality of the models. The type of deep neural network used in our study is a recurrent neural network with three hidden layers. The nodes per layer are 200, 100, 40 and 1 (output layer). The activation function is ReLU and the loss function is binary cross-entropy. The output unit is a sigmoid. The backpropagation training algorithm was used and a stopping criterion equal to 10⁻³ was set (a hedged code sketch of this setup is given below).

Conclusions

There are considerable consequences of a company's financial default for several financial and economic actors such as investors, creditors, managers, shareholders, financial analysts, auditors, employees and the government. Bankruptcy prediction has therefore become of great importance and concern. By developing accurate bankruptcy prediction techniques, many advantages and benefits can be achieved, such as cost reduction and rapidity in recovery and credit file analysis, gaining time and better reimbursement monitoring of loan files. Machine learning models are widely used and applied in the bankruptcy prediction literature. These models demonstrate strong prediction accuracy, which explains our choice to adopt them and compare them with the deep learning approach. The main contribution of the present work is to identify the appropriate model able to predict financial distress with high precision in the Tunisian context.
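A hedged sketch of the stated training setup: the text describes the network as recurrent, but the 200-100-40-1 layer sizes with ReLU units, a sigmoid output and binary cross-entropy read naturally as a dense feed-forward stack, which is what is sketched here; the data are synthetic placeholders, not the bank's credit files, and the evaluation reuses the paper's three measures.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4925, 25)).astype("float32")   # 25 financial ratios
y = (X[:, 0] + X[:, 1] + rng.normal(size=4925) > 0).astype("float32")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(25,)),
    tf.keras.layers.Dense(200, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(40, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_tr, y_tr, epochs=10, batch_size=64, verbose=0)

p = model.predict(X_te, verbose=0).ravel()          # the paper's three measures:
print(accuracy_score(y_te, p > 0.5))                # (TP+TN)/(TP+TN+FP+FN)
print(f1_score(y_te, p > 0.5))                      # 2TP/(2TP+FP+FN)
print(roc_auc_score(y_te, p))                       # area under the ROC curve
```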
Statistical, machine learning and deep learning models, namely LDA, LR, DT, SVM, RF and DNN, were applied to predict the financial distress of 732 Tunisian companies from different activity sectors. The empirical findings showed that DNN is a highly suitable tool for studying financial distress in Tunisian credit institutions. Compared to past work, this study is distinguished from other references in predicting bankruptcy by employing a large number of input features (25 ratios) as well as a large sample of firms in the training phase (3940 ≈ 80% of the total sample). Wilson and Sharda (1994) used only five ratios (the same input ratios employed by Altman 1968) to predict the bankruptcy of 169 firms; the machine learning models applied in their work were a shallow neural network and multi-discriminant analysis. In a related study, Chen (2011) utilized a set of eight selected features as inputs to machine learning models, and an evolutionary computation approach was used for predicting the business failure of 200 Taiwanese companies. To forecast the bankruptcy of Korean construction companies, Heo and Yang (2014) used a total of 2762 samples and 12 ratios to train several models such as adaptive boosting with DT, SVM, DT and ANN. For future research, hybrid learning techniques could be applied by combining the DNN with another machine learning model, which can provide higher performance than a single model; in this context, and for the same purpose of forecasting bankruptcy, Ben Jabeur and Serret (2023) utilized fuzzy convolutional neural networks. The present work, as well as previous research, supports the idea that artificial intelligence models perform better than traditional methods. However, it would be interesting for further research to diversify the data sources and not only use standard financial ratio data, by adding miscellaneous textual data (e.g., news, companies' public reports, notes and comments from experts, auditors' reports and managements' statements) that can enhance the forecasting accuracy of financial distress (Mai et al. 2019; Matin et al. 2019). Furthermore, it is of great interest to integrate sector diversification as an input variable to predict company default and to subsequently study the impact of changing industry on the accuracy of predictions. Another concern that should be studied in the future is the occurrence of several recent crises such as the COVID-19 crisis; it would be interesting to apply artificial intelligence models to investigate the crisis impact on the performance of financial distress prediction methods (Sabir et al. 2022).

Figure 1. The standard architecture of DNN.
Figure 2. ROC curve for the five machine learning models and DNN.
Table 1. A summary of the literature review on bankruptcy prediction using deep learning.
Table 2. The series of financial ratios.
Table 4. Prediction results and models' accuracy. Table 4 presents the empirical results of the accuracy rate, F1 score and AUC criteria used to judge the classifier performance of the applied methods.
Random User Activity with Mixed-Delay Traffic

This paper analyses the multiplexing gain (MG) achievable over a general interference network with random user activity and random arrival of mixed-delay traffic. The mixed-delay traffic is composed of delay-tolerant traffic and delay-sensitive traffic, where only the former can benefit from receiver cooperation since the latter is subject to stringent decoding delays. Two setups are considered. In the first setup, each active transmitter always has delay-tolerant data to send, and delay-sensitive data arrival is random. In the second setup, both delay-tolerant and delay-sensitive data arrivals are random, and only one of them is present at any given transmitter. The MG regions of both setups are completely characterized for Wyner's soft-handoff network. For Wyner's symmetric linear and hexagonal networks, inner bounds on the MG region are presented.

I. INTRODUCTION

This paper presents coding schemes for the transmission of heterogeneous traffic with delay-sensitive and delay-tolerant data over interference networks with random user activity and random data arrivals. Delay-sensitive data, called "fast" messages, are subject to stringent delay constraints, and their encoding and decoding processes cannot be delayed. Delay-tolerant data, called "slow" messages, are subject to softer delay constraints and can benefit from receiver cooperation. Such mixed-delay constraints have been studied in [1]-[6]. This paper further takes into account the random activity of the users: in each transmission block, only a subset of the users has a message to convey to its corresponding Rx. The impact of random user activity on cellular networks has been previously studied in [7]-[9]. In this work, we combine random user activity with such mixed-delay constraints. We specifically consider two setups. In both setups, receivers (Rxs) can cooperate to decode their desired "slow" messages but not to decode "fast" messages. Each transmitter (Tx) is active with probability ρ ∈ [0, 1], and the goal is to maximize the average expected "slow" rate of the network while the rate of each "fast" message is fixed to a target value. In the first setup, each active Tx transmits a "slow" message and, with probability $\rho_f \in [0, 1]$, also transmits an additional "fast" message. In the second setup, each active Tx sends either a "fast" message with probability $\rho_f$ or a "slow" message with probability $1 - \rho_f$. For both setups, we propose general coding schemes and characterize their achievable multiplexing gain (MG) regions for three networks: Wyner's soft-handoff network, Wyner's symmetric network and the hexagonal network. The achievable MG region is shown to be optimal for Wyner's soft-handoff network. In both setups, the obtained MG regions show that the average "slow" MG decreases (i) with increasing number of interfering links and (ii) with increasing activity parameter ρ. The obtained MG regions also show that, in the first setup, the maximum sum-MG is always attained at zero "fast" MG, and increasing the "fast" MG decreases the sum-MG by a penalty that, roughly speaking, grows with the number of interference links in the network and with the activity parameter ρ. In contrast, in the second setup, for certain parameters the maximum sum-MG is achieved at maximum "fast" MG, and thus increasing the "fast" MG provides a gain in sum-MG; we observe that this gain decreases with the number of interferers and the activity parameter ρ.
II. RANDOM "FAST" ARRIVALS ONLY

Consider a cellular network with K Tx-Rx pairs k = 1, ..., K. Each Tx $k \in \mathcal{K} \triangleq \{1, \ldots, K\}$ is active with probability ρ ∈ [0, 1], in which case it sends a so-called "slow" message $M_k$. Given that Tx k is active, with probability $\rho_f \in [0, 1]$ it also sends an additional "fast" message $M_k^{(F)}$ to Rx k. These "fast" messages are subject to stringent delay constraints, as we describe shortly, and are uniformly distributed over the set $\mathcal{M}^{(F)} \triangleq \{1, \ldots, 2^{nR^{(F)}}\}$; "fast" messages are thus all of the same size and the same rate $R^{(F)}$. We introduce the i.i.d. Bernoulli-ρ random variables $A_1, \ldots, A_K$ and the i.i.d. Bernoulli-$\rho_f$ random variables $B_1, \ldots, B_K$, and define the active Tx-set and the "fast" Tx-set as

$$\mathcal{T}_{\mathrm{active}} \triangleq \{k \in \mathcal{K} \colon A_k = 1\}, \qquad \mathcal{T}_{\mathrm{fast}} \triangleq \{k \in \mathcal{T}_{\mathrm{active}} \colon B_k = 1\}.$$

Then, for each $k \in \mathcal{K}$, Tx k computes its channel inputs $X_k^n \triangleq (X_{k,1}, \ldots, X_{k,n}) \in \mathbb{R}^n$ from its messages, for some encoding functions $f_k^{(n)}$ on appropriate domains.

To describe the interference network, let $X_k^n$ denote Tx k's input signal and $Y_k^n \triangleq (Y_{k,1}, \ldots, Y_{k,n})$ Rx k's output signal, and define the interference sets

$$\mathcal{I}_{\mathrm{Rx},k} \triangleq \{\tilde{k} \in \mathcal{K}\backslash\{k\} \colon X_k^n \text{ interferes with } Y_{\tilde{k}}^n\}, \qquad \mathcal{I}_{\mathrm{Tx},k} \triangleq \{\tilde{k} \in \mathcal{K}\backslash\{k\} \colon X_{\tilde{k}}^n \text{ interferes with } Y_k^n\}.$$

The input-output relation of the network is then described as

$$Y_{k,t} = X_{k,t} + \sum_{\tilde{k} \in \mathcal{I}_{\mathrm{Tx},k}} h_{\tilde{k},k}\, X_{\tilde{k},t} + Z_{k,t},$$

where $\{Z_{k,t}\}$ are independent and identically distributed (i.i.d.) standard Gaussians for all k and t, independent of all messages; $h_{\tilde{k},k} > 0$ is the channel coefficient between Tx $\tilde{k}$ and Rx k and is a fixed real number smaller than 1; and $X_{0,t} = 0$ for all t.

Each Rx $k \in \mathcal{T}_{\mathrm{fast}}$ decodes the "fast" message $M_k^{(F)}$ based only on its own channel outputs $Y_k^n$, producing $\hat{M}_k^{(F)} = g_k^{(n)}(Y_k^n)$ for some decoding function $g_k^{(n)}$ on appropriate domains. It is assumed that the receivers can fully cooperate on all received signals when decoding their "slow" messages, so $\hat{M}_k = c_k^{(n)}(Y_1^n, \ldots, Y_K^n)$, where $c_k^{(n)}$ is a decoding function on appropriate domains. A rate pair is achievable if there exist encoding, cooperation and decoding functions satisfying the power constraint (4) and such that the probability of error tends to 0 as n → ∞. An MG pair $(S^{(F)}, S^{(S)})$ is called achievable if, for all powers P > 0, there exist achievable average rates whose prelog factors attain this pair (cf. (10) and (11)). The closure of the set of all achievable MG pairs $(S^{(F)}, S^{(S)})$ is called the fundamental MG region and is denoted $\mathcal{S}^\star(\rho, \rho_f)$. The MG in (11) measures the average expected "slow" MG of the network. Since the "fast" rate is fixed to $R^{(F)}$ at all Txs in $\mathcal{T}_{\mathrm{fast}}$, we multiply the MG in (10) by $\rho\rho_f$ to obtain the average expected "fast" MG of the network.

A. Achievable MG Region and Coding Schemes

In this section, we propose two schemes, one with large "fast" MG and the other with zero "fast" MG.

1) Transmitting at large $S^{(F)}$: Since we wish to transmit at maximum "fast" MG, each "fast" transmission should not be interfered with (except by signals at noise level) by any other ("fast" or "slow") transmission. We therefore partition $\mathcal{K}$ into δ subsets $\mathcal{K}_1, \ldots, \mathcal{K}_\delta$, for some positive integer δ, in such a way that the signals sent by Txs in a given subset $\mathcal{K}_i$ do not interfere with each other, i.e., for each i ∈ {1, ..., δ}:

$$\mathcal{I}_{\mathrm{Rx},k} \cap \mathcal{K}_i = \emptyset, \qquad \forall\, k \in \mathcal{K}_i. \tag{12}$$

We divide the total transmission time into δ equally-sized phases. In the i-th phase, each Tx $k \in \mathcal{K}_i \cap \mathcal{T}_{\mathrm{fast}}$ sends its "fast" message, and any other active Tx sends its "slow" message only if this does not disturb a "fast" transmission of the phase (condition (13)); otherwise it does not send anything. Condition (13) ensures that transmissions of "fast" messages are not interfered with at all; by (12), the condition is in particular satisfied for all $k \in \mathcal{K}_i \cap (\mathcal{T}_{\mathrm{active}} \backslash \mathcal{T}_{\mathrm{fast}})$. The described scheme achieves a "fast" rate of $R^{(F)} = \frac{1}{\delta}\cdot\frac{1}{2}\log(1 + P)$ and thus, by (10), a "fast" MG of $S^{(F)} = \frac{\rho\rho_f}{\delta}$. It also achieves a corresponding expected "slow" MG, given in (15).

2) Transmitting at $S^{(F)} = 0$: Each Tx $k \in \mathcal{T}_{\mathrm{active}}$ sends only a "slow" message but no "fast" message.
Since perfect cooperation is assumed at the Rxs, each of the "slow" messages can be transmitted with MG 1. The average expected "slow" MG over the network is therefore $\bar{S}^{(S)} = \rho$, while $S^{(F)} = 0$. Time-sharing the two schemes establishes the following:

Proposition 1 (Achievable MG Region): The inner bound on $\mathcal{S}^\star(\rho, \rho_f)$ contains the region obtained by time-sharing the two operating points above.

In the following three sections, we specialize this proposition to different interference networks. As we will see, for our first network we can also prove a corresponding converse result, so that Proposition 1 exactly characterizes $\mathcal{S}^\star(\rho, \rho_f)$.

B. Wyner's soft-handoff network

Consider Wyner's soft-handoff network shown in Figure 1. Interference is short-range in the sense that the signal sent by Tx k is observed only by Rx k and by the neighbouring Rx k + 1; thus $\mathcal{I}_{\mathrm{Rx},k} = \{k + 1\}$. For this network, we can exactly characterize the fundamental MG region $\mathcal{S}^\star(\rho, \rho_f)$:

Theorem 1: The fundamental MG region $\mathcal{S}^\star(\rho, \rho_f)$ of Wyner's soft-handoff network is the set of all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying

$$S^{(F)} \leq \frac{\rho\rho_f}{2}, \qquad S^{(S)} + (1 + \rho)\, S^{(F)} \leq \rho.$$

Proof: The achievability part follows by specializing Proposition 1 to δ = 2 and to the boundary points of the region. The maximum-$S^{(S)}$ boundary, which characterizes the maximum achievable "slow" MG $S^{(S)}$ as a function of the "fast" MG $S^{(F)}$, is fully characterized by ρ: it is a line segment with slope −(1 + ρ). In general, this slope determines the penalty that the maximum "slow" MG $S^{(S)}$ incurs when one increases the "fast" MG. This penalty increases with increasing ρ because, with more active Txs in the network, the probability increases that a given active "fast" Tx is interfered by other active transmitters, which then have to be forced to send at "slow" MG 0 in order not to harm the achievable "fast" MG.

C. Wyner's symmetric network

Consider Wyner's symmetric network in Figure 2, where the signal sent by Tx k is observed by Rxs k and k + 1, and also by Rx k − 1; thus $\mathcal{I}_{\mathrm{Rx},k} = \{k − 1, k + 1\}$ for each $k \in \mathcal{K}$.

Proof of Corollary 1: Specialize Proposition 1 to δ = 2 and to the sets $\mathcal{K}_1$ and $\mathcal{K}_2$ in (22). For this choice, $|\mathcal{I}_{\mathrm{Rx},k} \cap \mathcal{K}_i| = 2$.

The region in the above corollary is again a quadrilateral, but the maximum-$S^{(S)}$ boundary is now determined by both parameters ρ and $\rho_f$, its slope being $−(1 + \rho(2 − \rho\rho_f))$. The dependency on $\rho_f$, however, vanishes as $\rho \cdot \rho_f \to 0$, in which case the slope approaches −(1 + 2ρ). Interestingly, this asymptotic slope shows a factor of 2 compared to the slope of the maximum-$S^{(S)}$ boundary in Wyner's soft-handoff network. The reason is that in Wyner's symmetric network $|\mathcal{I}_{\mathrm{Tx},k}| = 2$, whereas in Wyner's soft-handoff network $|\mathcal{I}_{\mathrm{Tx},k}| = 1$. In the next subsection, we will see that in the hexagonal network, where $|\mathcal{I}_{\mathrm{Tx},k}| = 6$, this asymptotic slope is −(1 + 6ρ).

D. Hexagonal network

Consider the hexagonal network in Figure 3, with K hexagonal cells, each cell including one Tx and one Rx. The signals of the Tx/Rx pair in a given cell interfere with the signals sent in the 6 adjacent cells; the interference pattern is depicted by the dashed black lines in Fig. 3.

Corollary 2: The multiplexing gain region $\mathcal{S}^\star(\rho, \rho_f)$ includes all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying the specialization of Proposition 1 to this network. Proof: Follows by specializing Proposition 1 to δ = 3 and to appropriate sets $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$ shown in Fig. 3.

Figure 4 evaluates the regions in Theorem 1 and in Corollaries 1 and 2 for ρ = 0.8 and $\rho_f$ either 0.3 or 0.6. We observe the quadrilateral shapes of all three regions.
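The randomized activity model of Section II is easy to simulate. The sketch below draws the Bernoulli activity variables, forms the active and "fast" Tx-sets, and uses the even/odd partition that realizes δ = 2 for the soft-handoff network, where Tx k interferes only at Rx k + 1. The parameter values and the even/odd choice are illustrative assumptions consistent with the text.

```python
# Sketch: sampling the random user-activity model and a delta=2 partition.
import numpy as np

rng = np.random.default_rng(0)
K, rho, rho_f = 20, 0.8, 0.3

A = rng.random(K) < rho    # A_k ~ Bernoulli(rho): Tx k is active
B = rng.random(K) < rho_f  # B_k ~ Bernoulli(rho_f): active Tx k also has a "fast" message

T_active = {k for k in range(K) if A[k]}
T_fast = {k for k in T_active if B[k]}

# In Wyner's soft-handoff network, Tx k interferes only at Rx k+1, so the
# even- and odd-indexed Txs form two mutually non-interfering subsets.
K1 = {k for k in range(K) if k % 2 == 0}
K2 = {k for k in range(K) if k % 2 == 1}

print("active:", sorted(T_active))
print("fast:  ", sorted(T_fast))
print("phase-1 'fast' senders:", sorted(T_fast & K1))
print("phase-2 'fast' senders:", sorted(T_fast & K2))
```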
III. RANDOM "FAST" AND "SLOW" ARRIVALS

The setup considered in this section differs from the previous setup only in that Txs in $\mathcal{T}_{\mathrm{fast}}$ send only a "fast" message and no "slow" message. Thus, defining $\mathcal{T}_{\mathrm{slow}} \triangleq \mathcal{T}_{\mathrm{active}} \backslash \mathcal{T}_{\mathrm{fast}}$, the channel inputs are produced by encoding functions on appropriate domains that satisfy the average block-power constraint (4). All other definitions are as in the previous Section II. We denote the fundamental MG region for this setup by $\mathcal{S}^\star_2(\rho, \rho_f)$.

A. Achievable MG Region and Coding Schemes

We again propose two schemes, one for large "fast" MG and the other for zero "fast" MG.

1) Transmitting at large $S^{(F)}$: Similarly to the scheme presented in Subsection II-A1, we partition $\mathcal{K}$ into sets $\mathcal{K}_1, \ldots, \mathcal{K}_\delta$ and divide the total transmission time into δ equally-sized phases. In the i-th phase, each Tx $k \in \mathcal{K}_i \cap \mathcal{T}_{\mathrm{fast}}$ sends its "fast" message, and each Tx $k \in \mathcal{T}_{\mathrm{slow}}$ sends its "slow" message if $\mathcal{I}_{\mathrm{Rx},k} \cap \mathcal{T}_{\mathrm{fast}} \cap \mathcal{K}_i = \emptyset$; otherwise it does not send any message. The described scheme achieves a "fast" MG of $S^{(F)}_{\max} = \frac{\rho\rho_f}{\delta}$, together with a corresponding expected "slow" MG.

2) Transmitting at $S^{(F)} = 0$: Each Tx $k \in \mathcal{T}_{\mathrm{slow}}$ sends a "slow" message with MG 1. The average expected "slow" MG over the network is therefore $\bar{S}^{(S)} = \rho(1 - \rho_f)$, while each Tx $k \in \mathcal{T}_{\mathrm{fast}}$ remains silent and thus $S^{(F)} = 0$.

Time-sharing the two schemes again yields an inner bound, stated as Proposition 2. We specialize this result to the interference networks introduced in Sections II-B, II-C and II-D, using the same choices for δ and the sets $\{\mathcal{K}_i\}_{i=1}^\delta$. For Wyner's soft-handoff network, this inner bound is again tight.

4) Wyner's Soft-Handoff Network:

Theorem 2: The fundamental MG region $\mathcal{S}^\star_2(\rho, \rho_f)$ is the set of all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying

$$S^{(F)} \leq \frac{\rho\rho_f}{2}, \qquad S^{(S)} + \rho(1 - \rho_f)\, S^{(F)} \leq \rho(1 - \rho_f).$$

Proof: Achievability follows by specializing Proposition 2 to δ = 2 and to the sets $\mathcal{K}_1$ and $\mathcal{K}_2$ in (22); for this choice, $|\mathcal{I}_{\mathrm{Rx},k} \cap \mathcal{K}_i| = 1$. The proof of the converse is omitted.

As in the previous setup, the fundamental MG region $\mathcal{S}^\star_2(\rho, \rho_f)$ is a quadrilateral. Interestingly, now all boundaries depend on both activity parameters ρ and $\rho_f$; in particular, the maximum "slow" MG equals $\rho(1 − \rho_f)$. Moreover, the maximum sum-MG is no longer achieved at this maximum "slow" MG. Formally, this holds because the slope of the maximum-$S^{(S)}$ boundary is $−\rho(1 − \rho_f)$ and thus larger than −1, so the maximum sum-MG point is obtained at maximum "fast" MG $S^{(F)} = \frac{\rho\rho_f}{2}$. The underlying intuition is that for $\rho(1 − \rho_f) < 1$ it may occur that a "fast" transmission can be accommodated without sacrificing "slow" MG, namely when the single interferer is not active anyway. Figure 5 illustrates $\mathcal{S}^\star_2(\rho, \rho_f)$ for Wyner's soft-handoff network, as well as the inner bounds we obtain for Wyner's symmetric network and the hexagonal network, under activity parameters ρ = 0.8 and $\rho_f$ either 0.3 or 0.6.

5) Wyner's Symmetric Network:

Corollary 3: The MG region $\mathcal{S}^\star_2(\rho, \rho_f)$ includes all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying the specialization of Proposition 2 to this network. Proof: Specialize Proposition 2 to δ = 2 and to $\mathcal{K}_1$ and $\mathcal{K}_2$ as in (22); for this choice, $|\mathcal{I}_{\mathrm{Rx},k} \cap \mathcal{K}_i| = 2$.

Here the slope of the maximum-$S^{(S)}$ boundary is $−\rho(1 − \rho_f)(2 − \rho\rho_f)$ and can be larger or smaller than −1 depending on the activity parameters. So, depending on these parameters, the maximum sum-MG is achieved either at zero "fast" MG or at maximum "fast" MG. Typically, for large values of $\rho_f$, i.e., when most of the active Txs send "fast" messages, the maximum sum-MG is achieved at maximum "fast" MG. When $\rho_f$ is small and ρ sufficiently large, most of the users are active and intend to send "slow" messages.
In this case, scheduling "fast" messages most likely comes at the expense of silencing active neighbours that wish to send "slow" messages. It is further interesting to notice that in the limiting regime $\rho\rho_f \to 0$, the slope of the maximum-$S^{(S)}$ boundary approaches $−2\rho(1 − \rho_f)$ and is thus twice the slope in Wyner's soft-handoff network. As we will see, the hexagonal model treated next shows a factor of 6. For all three networks, the asymptotic slope in the limit $\rho\rho_f \to 0$ is thus given by $−|\mathcal{I}_{\mathrm{Tx},k}|\,\rho(1 − \rho_f)$.

6) The Hexagonal Model:

Corollary 4: The MG region $\mathcal{S}^\star_2(\rho, \rho_f)$ includes all nonnegative pairs $(S^{(F)}, S^{(S)})$ satisfying the specialization of Proposition 2 to δ = 3.

IV. CONCLUSIONS

We considered two different setups for simultaneously transmitting delay-sensitive and delay-tolerant traffic over interference networks with randomly activated users. Under both setups, we characterized the multiplexing gain region of Wyner's soft-handoff network and derived an inner bound on the MG region of a general interference network. Our results show that in the first setup, where each active Tx always has "slow" (delay-tolerant) data to send, the sum-MG decreases with increasing "fast" (delay-sensitive) MG. The corresponding penalty depends mostly on the activity parameter ρ and the interference set size $|\mathcal{I}_{\mathrm{Tx},k}|$ of the network; it increases with both parameters, intuitively because more Txs have to stay silent when accommodating "fast" transmissions. In contrast, in the second setup, where each active Tx has either a "slow" or a "fast" message to send, the maximum sum-MG is achieved either at maximum "fast" MG or at zero "fast" MG, depending on the values of the activity parameters ρ and $\rho_f$. The former holds for small values of ρ, where only few Txs in the network are active and "fast" transmissions can thus often be accommodated without silencing active "slow" Txs. An interesting line of future work considers buffers to store not-yet-transmitted "slow" messages, similar to [11].

APPENDIX

Fix K and realizations of the sets $\mathcal{T}_{\mathrm{active}}$ and $\mathcal{T}_{\mathrm{fast}}$. Following the steps in [4, Section V], we prove that for each $k \in \mathcal{T}_{\mathrm{active}}$:

$$R_k + R_k^{(F)} + R_{k+1}^{(F)} \leq \frac{1}{2}\log\!\big(1 + (1 + |h_{k,k+1}|^2)P\big) + \frac{1}{2}\log\!\big(1 + |h_{k,k+1}|^2\big) + \max\{−\log |h_{k,k+1}|,\, 0\} + \epsilon_n, \tag{39}$$

where $R_{k+1}^{(F)}$ is the rate of the "fast" message at Rx k + 1, which is either 0 or equal to $R^{(F)}$. For simplicity, we abbreviate the right-hand side of (39) by Δ, and we sum this bound over all $k \in \mathcal{T}_{\mathrm{active}}$. Taking the expectation over (39) and dividing by K yields the averaged bound, because the expected number of indices $k \in \mathcal{T}_{\mathrm{active}}$ for which $R_k^{(F)} = R^{(F)}$ equals $\rho\rho_f\,K$ and the expected number of indices $k \in \mathcal{T}_{\mathrm{active}}$ for which $R_{k+1}^{(F)} = R^{(F)}$ equals $\rho^2\rho_f\,K$. Dividing by $\frac{1}{2}\log P$ and letting P → ∞ proves (21).
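The boundary slopes quoted for the second setup, $−\rho(1 − \rho_f)$ for the soft-handoff network and $−\rho(1 − \rho_f)(2 − \rho\rho_f)$ for the symmetric network, determine whether the maximum sum-MG sits at zero or at maximum "fast" MG. A quick numerical check using the figure parameters ρ = 0.8 and $\rho_f \in \{0.3, 0.6\}$ is sketched below; the formulas are taken directly from the text.

```python
# Check where the maximum sum-MG is attained in the second setup,
# using the maximum-S^(S) boundary slopes quoted in the text.
def slopes(rho, rho_f):
    return {
        "soft-handoff": -rho * (1 - rho_f),
        "symmetric":    -rho * (1 - rho_f) * (2 - rho * rho_f),
    }

for rho_f in (0.3, 0.6):
    for net, s in slopes(rho=0.8, rho_f=rho_f).items():
        where = "max 'fast' MG" if s > -1 else "zero 'fast' MG"
        print(f"rho=0.8 rho_f={rho_f} {net}: slope={s:.3f} -> sum-MG max at {where}")
```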
Single-ion addressing via trap potential modulation in global optical fields

To date, individual addressing of ion qubits has relied primarily on local Rabi or transition-frequency differences between ions, created via electromagnetic-field spatial gradients or via ion transport operations. Alternatively, it is possible to synthesize arbitrary local one-qubit gates by leveraging local phase differences in a global driving field. Here we report individual addressing of $^{40}$Ca$^+$ ions in a two-ion crystal using axial potential modulation in a global gate laser field. We characterize the resulting gate performance via one-qubit randomized benchmarking, applying different random sequences to each co-trapped ion. We identify the primary error sources and compare the results with single-ion experiments to better understand our experimental limitations. These experiments form a foundation for the universal control of two ions, confined in the same potential well, with a single gate laser beam.

I. INTRODUCTION

The ability to produce arbitrary one-qubit rotations on each individual qubit is a requirement for a universal quantum computer [1]. Single-qubit addressing is also crucial to the characterization of quantum processes via quantum process tomography (QPT), randomized benchmarking (RB), and related techniques. Regardless of the physical qubit implementation, single-qubit addressing requires either (1) a differential Rabi frequency, (2) a differential qubit frequency, or (3) a phase shift at each of the qubit sites. Ion-trap and neutral-atom optical-lattice systems have achieved differential Rabi frequencies using tightly focused laser beams whose waists are smaller than the typical interatom spacings. However, closely spaced atoms (separations are typically 1-4 µm in ion-trap systems) are required for the fastest and highest-fidelity entangling gates. Maintaining a high degree of optical isolation between such tightly spaced qubit locations is challenging. In practice, the atoms are often moved to larger separations prior to one-qubit gates [2,3], or a composite pulse sequence is used to compensate for the effect of finite beam waist on neighboring qubits [4,5]. The first option isolates neighboring qubits from neighboring laser beams, but the required ion transport operations can be costly both in time and in atom motional excitation [6]. The second option increases operation time, can increase error on the target qubit, and still requires tightly focused beams. An alternative addressing technique in trapped-ion systems relies on micromotion-dependent Rabi frequencies: an ion is displaced within the trap's radio-frequency electric-field gradient to alter the Rabi frequency of a micromotion sideband [7], and this sideband transition is driven with a global laser beam. However, experimental demonstrations report errors of ≈ 10⁻², the technique suffers from pronounced sensitivity to the local electrostatic environment (requiring frequent recalibration), and the requirement of displacing only one ion within a longer chain places undesired constraints on the trapping potentials [8,9]. Differential qubit-frequency shifts have also been achieved with an auxiliary field gradient that generates spatially dependent Zeeman or Stark shifts: individual addressing is accomplished by tuning the control field to the local qubit resonance, and related techniques have been demonstrated in neutral-atom [10], ion-trap [11]-[13], solid-state, and superconducting-qubit [14,15] experiments.
Accurate spectral resolution of individual qubits imposes fundamental limits on the minimum gate time with these schemes because of the finite Fourier frequency width of a gate pulse. Additionally, the number of qubits that can be individually addressed is limited by the frequency tuning range of the control field, and the auxiliary field adds complexity and requires a precisely controlled amplitude. Furthermore, these gradient techniques are incompatible with long-coherence, field-insensitive qubit transitions [16]. Tightly focused optical beams can also be used to impart a differential phase at individual qubit sites [17], but such an approach faces similar challenges to those using differential Rabi frequencies. Alternatively, differential phase shifts can be achieved in a global beam through changes in ion position. Basic demonstrations of this idea have been performed both with two widely spaced ions [18] and with ions confined within the same potential minimum [19]-[21]. Here we generalize these concepts to achieve arbitrary, individually addressed one-qubit rotations on a two-ion crystal, leveraging controlled changes in trapping confinement to vary the spacing between ions. This differential-phase technique does not require tightly focused beams, ion-chain split and merge operations, global inhomogeneous auxiliary fields, or sensitive ion micromotion sidebands, and it may be readily incorporated into other ion-trapping experiments using common laboratory equipment.

In this manuscript, we describe the theory of single-ion addressing via trap potential modulation in global optical fields (Sec. II), and we detail our experimental apparatus (Sec. III), including the GTRI-fabricated ion trap used to validate the theory. We present the results of individually addressed Ramsey experiments on a pair of ions in the same potential minimum (Sec. IV A), and we further characterize the performance of the technique using RB with different random gate sequences applied to each ion (Sec. IV B). As an additional diagnostic, we compare the results of these experiments to the results of experiments on a single ion (Sec. IV C).

II. THEORY OF OPERATION

Differential phase shifts can be generated by varying the positions of trapped-ion qubits between gate pulses. Laser-cooled ions naturally form ordered crystals, and in this crystalline phase, ions are trapped near equilibrium positions controlled by the trap geometry and operating parameters [22]. For the special case of two identical ions forming a linear crystal in a harmonic Paul trap, the equilibrium ion separation is

$$d_0 = \left[\frac{Z^2 e_0^2}{2\pi \epsilon_0 m \omega_0^2}\right]^{1/3},$$

where $e_0$ is the elementary charge, $\epsilon_0$ is the permittivity of free space, m and Z are the one-ion mass and charge, and $\omega_0/2\pi$ is the axial secular frequency of a single ion in the harmonic well. If the trap secular frequency is changed by a value Δω, the ion separation becomes $d = d_0(\Delta\omega/\omega_0 + 1)^{-2/3}$. Given that $k_z$ is the projection of the laser wavevector along the crystal axis, making such a secular frequency change between laser pulses leads to an effective differential phase shift of $\Delta\phi = k_z(d − d_0)$ between the two ions for a second pulse (with respect to the phases experienced in a pulse before the secular frequency change). As a concrete example, consider two $^{40}$Ca$^+$ ions confined in a 2 MHz harmonic potential with a 729 nm quadrupole-transition gate beam oriented at 45° to the axis.
A shift of $\Delta\omega/\omega_0 = 0.22$ here is sufficient to produce a differential π phase shift and is sufficiently small that the ion pair can remain stably trapped. Such a differential phase can be used to construct arbitrary one-qubit rotations (achieving different rotations on each ion in the pair) even with a global laser beam. A straightforward construction of this kind consists of an initial laser pulse, a controlled change in ion separation, and a final laser pulse, as follows. The first pulse achieves a rotation $R_k(\theta_1, \phi)$ on each qubit k, where $R_k(\theta, \phi) = \exp\!\left[-i\tfrac{\theta}{2}\left(X_k \cos\phi + Y_k \sin\phi\right)\right]$ is the Bloch-sphere rotation operator and $X_k$ and $Y_k$ are the Pauli operators for qubit k. The trap potential is then adjusted to produce a differential phase shift Δφ = π, so that the second laser pulse effects a modified rotation in which the second ion experiences a laser phase offset by π; the phase of this second pulse can be fixed relative to φ by changing the global laser phase. The net unitary is then the product of the two rotations. If the second laser pulse duration is chosen so that the two rotation angles match (including compensation for possible spatial gradients in the laser beam intensity), the resulting net gate rotation is of the form $R^{(1)} \otimes I^{(2)}$, where $I^{(2)}$ is the identity gate on the second ion. By choosing φ and scaling the rotation angles appropriately, an arbitrary rotation on the Bloch sphere can ideally be realized on the first ion without affecting the second ion; a different phase choice, offset by π, allows for a rotation on only the second ion.

In the absence of background electric fields, the center-of-mass (COM) motional mode is not coupled to an overall change in axial secular frequency (realized by scaling the DC trap potentials). However, abrupt changes in confinement do produce motional squeezing [23,24]. If this squeezing were sufficiently large, the resulting motional excitation would degrade future coherent laser operations. In practice, this is not a concern because ion-trap electrode filters (with cutoff frequencies lower than $\omega_0$) typically remove spectral content high enough in frequency to create significant squeezing. Furthermore, even if the potentials could be altered abruptly, the resulting excitation would be small: 0.02 phonons would result from a step change of $\Delta\omega/\omega_0 = 0.25$ on a ground-state-cooled mode. We conclude that individual ion addressing via confinement potential modulation can be performed in a duration commensurate with the duration of the individual laser pulses involved. These concepts can be extended to more than two ions co-trapped in a linear chain. In a three-ion chain, a π phase shift is realized between adjacent ions with an adjustment of $\Delta\omega/\omega_0 = 0.25$ in the axial potential, only slightly larger than the adjustment for two ions. However, even longer chains require larger confinement changes to produce useful displacements between nearest-neighbor ions [25], and arbitrary one-qubit rotations in longer multi-ion chains require correspondingly more complex sequences of primitive pulses.

III. EXPERIMENTAL APPARATUS

The ion-trapping apparatus previously used for the experiments described in Ref. [26] was modernized to perform this work. It incorporates a surface-electrode ion trap with DC electrode potentials controlled by National Instruments 16-bit PXI-6733 digital-to-analog converter (DAC) cards clocked at 100 kHz. The output of each DAC is filtered with a 530 kHz reactive low-pass filter. The qubit for all of the experiments presented here is the $|S_{1/2}, m_j = -1/2\rangle \leftrightarrow |D_{5/2}, m_j = -1/2\rangle$ transition in $^{40}$Ca$^+$. One-qubit rotations are achieved via optical pulses from an ultra-stable laser resonant with this transition at 729 nm.
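The separation and phase formulas of Sec. II are easy to evaluate numerically with the frequencies just quoted. The sketch below computes $d_0$ for two $^{40}$Ca$^+$ ions at a 2 MHz axial frequency and the differential phase produced by a fractional frequency shift, with the 729 nm wavevector projected at 45° onto the crystal axis. The printed numbers are illustrative; the exact shift needed for a π phase difference depends on geometry and frequency details, so this sketch does not claim to reproduce the paper's quoted $\Delta\omega/\omega_0 = 0.22$.

```python
# Evaluate the two-ion separation and differential-phase formulas from Sec. II.
import numpy as np
from scipy.constants import e, epsilon_0, atomic_mass, pi

m = 40 * atomic_mass     # 40Ca+ mass (kg)
Z = 1                    # charge state
omega0 = 2 * pi * 2.0e6  # axial secular frequency (rad/s), illustrative value

# Equilibrium separation d0 = [Z^2 e^2 / (2 pi eps0 m omega0^2)]^(1/3)
d0 = (Z**2 * e**2 / (2 * pi * epsilon_0 * m * omega0**2)) ** (1.0 / 3.0)

# Axial projection of the 729 nm wavevector for a beam at 45 degrees.
k_z = (2 * pi / 729e-9) * np.cos(np.deg2rad(45))

for frac in (0.22, 0.25):
    d = d0 * (1 + frac) ** (-2.0 / 3.0)  # separation after the frequency shift
    dphi = k_z * (d - d0)                # differential phase between the ions
    print(f"d0={d0*1e6:.2f} um, dw/w0={frac}: d={d*1e6:.2f} um, "
          f"dphi={dphi/np.pi:+.2f} pi")
```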
The ions are confined radially by a radio-frequency (RF) potential (peak magnitude ≈ 176 V) at 56.4 MHz applied to the RF electrodes. A single ion in this potential is confined with an axial secular frequency $\omega_z = 2\pi \times 2.05$ MHz and radial frequencies $\omega_{r_1 (r_2)} = 2\pi \times 7.89\,(5.88)$ MHz. A bias magnetic field of 11.37 G is provided by two rare-earth permanent magnets outside the vacuum chamber. The GTRI-fabricated "Gold Trap" used here is an improved iteration of the GTRI Microwave Trap described in Ref. [27]; it is a planar linear Paul trap with 42 segmented electrodes, two outer rail electrodes, and integrated microwave waveguides. Ions are confined nominally 58 µm above the trap surface. In contrast with earlier versions, the segmented DC electrodes are located between the microwave waveguides to allow for stronger harmonic confinement. Gold vias (lateral dimension 20 × 40 µm) connect the top gold electrode layer to an underlying 1.5-µm-thick aluminum fan-out layer. Each electrode benefits from 70 pF of capacitance to ground, an intentional side effect of stray capacitance in the fan-out layer, which reduces undesired RF pickup. The top layer of gold is designed with a moderately high aspect ratio to reduce variations in trapping potential caused by dielectric charging (e.g., from UV exposure): specifically, the gold layer is 10 µm thick and the nominal gap between electrodes is 6 µm. It is fabricated via an electroplating process with a photoresist electroplating mold.

Both linear ion transport and axial potential modulation are effected with an assortment of waveforms applied to the DC electrodes. These waveforms are calculated using the methods described in Ref. [28], with an additional quadrupole rotation designed to rotate the radial axis by ≈ 7.3° to aid Doppler cooling of the vertical radial motional mode. The potential modulation of $\Delta\omega/\omega_0 = 0.22$ required for a differential π phase shift between the ions in a pair is achieved by scaling the DC potentials up or down, and we interpolate between these initial and final potential configurations with a user-defined number of intermediate configurations (waveform points). In practice we find that, in addition to uniformly scaling the potential, we must also apply a carefully tuned compensation electric field (most importantly along the crystal axis) in order to reduce motional excitation.

A. Individually Addressed Ramsey Experiments

In order to demonstrate single-ion addressing via potential modulation (SIAPM), we first perform a pair of Ramsey experiments in which only one of two co-trapped ions is addressed. The ions are initially Doppler cooled via laser beams at 397 nm (red detuned from the $S_{1/2}-P_{1/2}$ transition) and 866 nm (blue detuned from the $D_{3/2}-P_{1/2}$ transition). The Ramsey sequences here consist of two composite π/2 gates separated by a delay. The composite π/2 gates are comprised of a π/4 optical pulse, a modulation in the trapping potential, and a second π/4 optical pulse, as described in Sec. II. After the second π/4 optical pulse, we return the axial potential to its initial value (although this is not strictly necessary).

For the randomized benchmarking experiments, the ions are Doppler cooled and initialized into a single $S_{1/2}$ Zeeman state. A random sequence of one-qubit operations (described below) is then applied, after which the ions are split into separate wells and measured. The RB sequences are structured as described in Ref. [29] and elaborated in Ref. [30].
These sequences are composed of a given number (the length) of steps, each of which consists in turn of a Pauli gate (a π rotation about the X axis, Y axis, or Z axis, or the identity) and a Clifford gate (comprised of between zero and three π/2 rotations about the X axis or Y axis, with 1.5 on average). The X-axis and Y-axis π (π/2) gates are implemented as follows (see Sec. II): a π/2 (π/4) optical pulse, a modulation in the trapping potential, a second π/2 (π/4) optical pulse, and a return to the initial trapping potential. The Z-axis π and identity gates are implemented simply by adjusting the phases of subsequent gates. For these experiments, we interleave two independent, random sequences of steps, each sequence targeting only one of the two ions.

The results of this experiment appear in Fig. 2. The observed decay in average fidelity is not purely exponential; it falls off more quickly at long sequence lengths than at short ones, a behavior that we attribute to unwanted motional excitation (heating). On average, performing one RB step on each of the two ions requires eight optical pulses and eight potential modulations. Although these potential modulations are calibrated to minimize ion heating, some undesired excitation is present; we explore the origins of this heating in greater detail in Sec. IV C. To account for this ion heating during longer RB sequences, we fit the data to an alternative function (derived in detail in Appendix A) to that used in Refs. [29,30]. A more rigorous statistical approach to fitting RB to alternative models such as this one is described in Ref. [31]. Assuming that the ion motional states are thermal, that the temperature grows linearly with RB sequence length, and that only the COM mode is excited, we are led to a modified fidelity fit function (Eqn. 1), where $\epsilon_{\mathrm{SPAM}}$ is the state preparation and measurement (SPAM) error, $\epsilon_{\mathrm{step}}$ is the error per sequence step in the absence of heating, and l is the sequence length; $\epsilon_{m,P}$ ($\epsilon_{m,C}$) is the error of the Pauli (Clifford) gate due to finite temperature at the m-th sequence step. We have fit the data in Fig. 2 to Eqn. 1, with the results listed in Table I; fit-coefficient errors represent statistical fit uncertainty.

C. Error Diagnostics

To better characterize the errors observed in the two-ion, individually addressed RB results, we perform RB experiments using only a single ion, both without and with the addition of a time delay after each optical gate for comparison. For both of these experiments, we use two π/2 pulses for the Pauli π rotations and construct the Clifford gates from (non-composite) π/2 pulses without any modulation in trap potential. Other experimental details are identical to those in the two-ion experiments. The red data ("Standard") in Fig. 3 give the results of the experiment with no additional time delay after each optical gate. In comparison with the two-ion results, the one-ion data agree far better with a simple exponential model. We nevertheless fit the data as before, with the results listed in Table II ("Standard" row). The per-step error of $6.7(2) \times 10^{-4}$ is below the two-ion per-step error by more than an order of magnitude and forms a good baseline for the earlier results: we should expect the one-ion rate to be a lower bound for the two-ion rate. We believe that residual laser noise and magnetic-field noise are the limiting factors in our one-ion fidelity. The observed SPAM error is consistent with spontaneous emission limits.
The fit motional excitation from background heating is $6.4(1.2) \times 10^{-3}$ quanta/step, which corresponds to an axial heating rate of 0.26(5) quanta/ms, roughly consistent with an independent measurement of the axial heating rate (0.16(5) quanta/ms) made via the more conventional technique of observing red and blue sideband ratios [32]. The two-ion, individually addressed RB sequences include various modulations in the trapping potential, which significantly increase the overall sequence durations beyond those of the one-ion case; these additional delays could be expected to contribute significant errors. For our second one-ion diagnostic experiment, we intentionally insert an additional 25 µs delay after each optical pulse. This delay is chosen to match the duration of the potential modulations in the two-ion experiment ("Wait" RB data in Figure 3; fit coefficients in Table II). We observe only a small increase in gate error, from $6.7(2) \times 10^{-4}$ to $7.4(3) \times 10^{-4}$, due to the additional delay. Again we can estimate an axial heating rate and obtain 0.29(2) quanta/ms, in agreement with the previous value and still roughly consistent with independent measures. This consistency strengthens our confidence in the modified randomized benchmarking fit function (Eq. 1) and indicates that these fits represent a previously unexplored method to quantify ion-trap heating rates. The higher heating rate observed in RB experiments could be due to contributions from radial-mode heating, which would not affect the axial sideband-ratio measurement.

V. OUTLOOK AND CONCLUSION

We have individually addressed each ion in a two-ion string using axial potential modulation and a global gate beam, and we have characterized the resulting gate performance via simultaneous randomized benchmarking using independent sequences for each ion. This experiment is a prerequisite for the universal control of two ions confined in the same potential well. The SIAPM technique we used avoids several of the challenges associated with other individual addressing techniques. We have also compared our SIAPM results to the results of a series of one-ion experiments in order to better understand our primary sources of gate error. The SIAPM technique works with sufficient fidelity at small numbers of gates to perform useful quantum process tomography of a two-qubit gate. To extend this technique to longer gate sequences, we will suppress the heating induced by the potential modulations. We anticipate improvements through a combination of (1) more careful calibration, (2) finer waveform control with the addition of more waveform points (at the expense of speed), (3) faster DACs to shorten waveform intervals, and (4) precompensation of the waveforms for the low-pass filter response.

APPENDIX A

For a one-qubit rotation on an ion in motional Fock state n, the rotation angle is reduced to $\theta_n = \theta_0 L_n(\eta^2)$ due to the finite Fock state [33], where $L_n$ is the Laguerre polynomial of order n and η is the Lamb-Dicke parameter for the given motional mode. The fidelity is then

$$F_n = \frac{1}{2}\left[1 + \cos(\theta_n - \theta_0)\right].$$

Thermally averaging the above expression with the Boltzmann weighting function

$$W_n = \frac{\bar{n}^n}{(\bar{n} + 1)^{n+1}} \tag{A5}$$

results in the expression for the one-qubit rotation fidelity $F(\theta_0, \bar{n}, \eta) = \sum_n W_n F_n$. The above expressions can be generalized to a series of l one-qubit rotations with an increasing motional temperature, in which case the fidelity becomes a product of such thermally averaged factors, one per rotation. We know of no analytic solution to a thermal average of Laguerre polynomials as in Eqs.
A6 and A7 (these expressions can still be used for numerical fits); making the assumption that $\eta^2 \ll 1$, so that $L_n(\eta^2) \approx 1 - n\eta^2$, the Boltzmann-averaged fidelity can be expressed as

$$F(\theta_0, \bar{n}, \eta) = \frac{1}{2}\left[1 + \frac{1 + \bar{n}\left(1 - \cos(\theta_0 \eta^2)\right)}{1 + 2\bar{n}(\bar{n} + 1)\left(1 - \cos(\theta_0 \eta^2)\right)}\right]. \tag{A9}$$

Multiple Motional Modes

To account for the multiple motional modes of N ions, the fidelity of a one-qubit rotation with a given set of Fock state occupations is given by

$$F_{n_1, n_2, \ldots, n_{3N}} = \frac{1}{2}\left[1 + \cos\!\left(\theta_{n_1, n_2, \ldots, n_{3N}} - \theta_0\right)\right].$$

Here, the Boltzmann weighting function includes all motional modes, and the resulting expression for the one-qubit rotation fidelity is the thermal average $\sum_{\mathbf{n}} W_{\mathbf{n}} F_{\mathbf{n}}$, where bold type denotes variables with 3N components (A16). The notation of the above expression can be simplified by converting to, and taking the real part of, complex exponentials (we note that a similar mathematical treatment is presented in the appendix of Ref. [34]). The thermally averaged fidelity can then be simplified to

$$F \approx \frac{1}{2}\left[1 + \mathrm{Re}\prod_{k=1}^{3N} \frac{1}{\bar{n}_k + 1} \cdot \frac{1}{1 - \frac{\bar{n}_k}{\bar{n}_k + 1}\exp\!\left(i \eta_k^2 \theta_0\right)}\right].$$

RB Fit Function

To account for ion heating during longer RB sequences, we use a function adapted from that used in Refs. [29,30]. Assuming that the ion motional states are thermal (Eqn. A5), that the temperature grows linearly with RB sequence length, and that only the COM mode is excited, we are led to a modified fidelity fit function, where $\epsilon_{\mathrm{SPAM}}$ is the state preparation and measurement (SPAM) error, $\epsilon_{\mathrm{step}}$ is the error per sequence step in the absence of heating, and l is the sequence length; $\epsilon_{m,P}$ ($\epsilon_{m,C}$) is the error of the Pauli (Clifford) gate due to finite temperature at the m-th sequence step. We model the temperature as $\bar{n}_m = \bar{n}_0 + m \cdot \Delta\bar{n}$, which describes the initial temperature $\bar{n}_0$ (we measure $\bar{n}_0 \sim 0.01$) and heating of $m \cdot \Delta\bar{n}$ at step m. Each step in the RB sequence contains, on average, one π or identity gate and one and a half π/2 gates. However, not all gate implementations incur motional excitation errors: since only two of the four Pauli gates incur errors due to motional excitation, and the gates are chosen at random with equal probability, we choose

$$(1 - 2\epsilon_{m,P}) = \frac{1}{2}\left[1 + \chi\!\left(\pi, \bar{n}_m, \eta\right)\right].$$

Therefore, the error per Pauli gate is, on average, half what we would have calculated had we assumed that each Pauli gate incurred the same error.
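As the appendix notes, the Laguerre-polynomial expressions can be used directly for numerical fits even without an analytic thermal average. The sketch below evaluates the single-mode thermally averaged fidelity by direct summation, using the Boltzmann weights and the reduced angle $\theta_n = \theta_0 L_n(\eta^2)$ from the reconstructed expressions above. The Fock-ladder truncation and the parameter values are illustrative assumptions.

```python
# Numerical thermal average of the one-qubit rotation fidelity (single mode):
# theta_n = theta_0 * L_n(eta^2), weights W_n = nbar^n / (nbar + 1)^(n + 1).
import numpy as np
from scipy.special import eval_laguerre

def thermal_fidelity(theta0, nbar, eta, n_max=200):
    n = np.arange(n_max)                          # truncated Fock ladder (assumption)
    W = nbar**n / (nbar + 1.0)**(n + 1)           # thermal (Boltzmann) weights
    theta_n = theta0 * eval_laguerre(n, eta**2)   # Fock-state-reduced rotation angles
    F_n = 0.5 * (1.0 + np.cos(theta_n - theta0))  # per-Fock-state fidelity
    return np.sum(W * F_n)

# Example: a pi pulse with eta = 0.05, at the measured initial temperature
# nbar ~ 0.01 and after heating to nbar ~ 1 (both values illustrative).
for nbar in (0.01, 1.0):
    print(f"nbar={nbar}: F={thermal_fidelity(np.pi, nbar, 0.05):.6f}")
```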
A first-in-human phase 1 trial to evaluate the safety and immunogenicity of the candidate tuberculosis vaccine MVA85A-IMX313, administered to BCG-vaccinated adults

Introduction: There is an urgent need for a new and effective tuberculosis vaccine because BCG does not sufficiently prevent pulmonary disease. IMX313 is a novel carrier protein designed to improve cellular and humoral immunity. MVA85A-IMX313 is a novel vaccine candidate designed to boost immunity primed by bacillus Calmette-Guérin (BCG) and has been immunogenic in pre-clinical studies. This is the first evaluation of IMX313, delivered as MVA85A-IMX313, in humans.

Methods: In this phase 1, open-label, first-in-human trial, 30 healthy, previously BCG-vaccinated adults were enrolled into three treatment groups and vaccinated with low-dose MVA85A-IMX313 (group A), standard-dose MVA85A-IMX313 (group B), or MVA85A (group C). Volunteers were followed up for 6 months for safety and immunogenicity assessment.

Results: The majority of adverse events were mild, and there were no vaccine-related serious AEs. Both MVA85A-IMX313 and MVA85A induced a significant increase in IFN-γ ELISpot responses. There were no significant differences between the Ag85A ELISpot and intracellular cytokine responses of the two study groups B (MVA85A-IMX313) and C (MVA85A) at any time point post-vaccination.

Conclusion: MVA85A-IMX313 was well tolerated and immunogenic. There was no significant difference in the number of vaccine-related local or systemic adverse reactions between the MVA85A and MVA85A-IMX313 groups. The mycobacteria-specific cellular immune responses induced by MVA85A-IMX313 were not significantly different from those detected in the MVA85A group. In light of these encouraging safety data, further work to improve the potency of molecular adjuvants like IMX313 is merited. This trial was registered on clinicaltrials.gov, ref. NCT01879163.

Introduction

The lack of a safe and effective tuberculosis (TB) vaccine is a public health emergency. TB causes a major global health burden, with an estimated 9.0 million incident cases and 1.5 million deaths each year [1]. In addition, the emergence of drug-resistant forms of TB further magnifies the difficulty of TB control. The only currently available licensed vaccine is Mycobacterium bovis (M. bovis) bacillus Calmette-Guérin (BCG). BCG prevents disseminated disease in childhood [2,3] but does not provide sufficient or consistent protection against pulmonary TB [3-5]; it confers varying effectiveness across different populations for reasons that are not well understood. An improved TB vaccine is urgently required, and technologies that increase vaccine immunogenicity and effectiveness are critical to achieving this goal. One example is the immunogenicity-enhancing protein technology IMX313. IMX313 is a small protein domain that self-assembles into a nanoparticle with seven identical chains. Its 55-amino-acid sequence is a hybrid of the oligomerisation domains of two chicken C4b-binding proteins, both distant homologues of human complement 4 binding protein (C4bp) [6]. In pre-clinical studies, IMX313 has an adjuvant-like effect when fused with protein antigens [6]. Here we assess the safety and immunogenicity of IMX313 combined with a clinically advanced candidate TB vaccine: Modified Vaccinia virus Ankara expressing the immunodominant Mycobacterium tuberculosis (M.tb) antigen 85A, MVA85A. MVA85A was designed to boost BCG-induced protection and is the only TB subunit vaccine to have been evaluated in an efficacy trial.
In phase I trials, MVA85A was highly immunogenic and induced potent Ag85A-specific CD4+ T-cell responses in BCG-vaccinated adults [7-9]. Despite this, in a phase IIb trial in 2797 South African, BCG-vaccinated infants, MVA85A was safe but did not improve protective efficacy above the level achieved by BCG alone [10]. The reasons for this could be manifold, but one hypothesis is that MVA85A elicited insufficient IFN-γ-producing T helper 1 (Th1) and/or IL-17-producing CD4+ (Th17) cell responses: after vaccination with MVA85A there was only a modest induction of Th1 and Th17 antigen-specific T-cell responses, 10-fold lower than that seen in UK adults [9]. Efforts to improve the immunogenicity of MVA85A are ongoing. In one strategy, MVA85A has been combined with IMX313; in both mouse and non-human primate studies, IMX313 improved the immunogenicity of MVA85A [11]. Here we describe the first clinical evaluation of IMX313, administered as MVA85A-IMX313, and compare its safety and immunogenicity profile with that of MVA85A in a phase I trial in BCG-vaccinated UK adults.

Study design

We undertook a phase I, randomised, open-label, first-in-human clinical trial in 30 BCG-vaccinated adults to assess the safety and immunogenicity of the candidate TB vaccine MVA85A-IMX313. We enrolled volunteers following their written informed consent, under a protocol approved by the UK Medicines and Healthcare products Regulatory Agency (EudraCT 2013-000678-31) and the NRES South Central - Oxford Research Ethics Committee (ref. 13

Participants

We enrolled volunteers from the general population around Oxford and Birmingham. Volunteers were healthy, aged between 18 and 55, and had received BCG at least 6 months prior to their date of enrolment. They had normal baseline haematology and biochemistry and were hepatitis B, hepatitis C and HIV negative. Latent M.tb infection was excluded by a negative ex vivo IFN-γ ELISpot response to M.tb early secreted antigenic target 6 kDa (ESAT6) and 10 kDa culture filtrate protein (CFP10) peptides. The full inclusion and exclusion criteria are described in Supplementary Methods 1.

Clinical procedures

The first 6 volunteers were assigned to the starter group (group A), who were administered a low dose of MVA85A-IMX313 (1 × 10⁷ pfu) delivered intradermally in a volume of 150 µL into the upper arm (all injections were administered with a 29-gauge, 12.7 mm needle). These group A (low-dose MVA85A-IMX313) vaccinations occurred step-wise in order to assess safety: the safety of the first volunteer was assessed and 48 h passed before the next two volunteers in group A were vaccinated, and the remaining volunteers in group A were vaccinated once the Chief Investigator decided it was safe to proceed. Once all 6 volunteers in group A had been followed up for 14 days, the dose was escalated to 5 × 10⁷ pfu. One volunteer (the first group B volunteer) was assigned to receive the higher-dose MVA85A-IMX313 (5 × 10⁷ pfu, delivered intradermally in a volume of 76 µL), and 48 h after vaccination their safety was reviewed before we proceeded to randomisation of the remaining 23 volunteers. We randomly allocated the remaining 23 eligible volunteers (1:1) to receive intradermal MVA85A-IMX313, 5 × 10⁷ pfu (group B), or intradermal MVA85A, 5 × 10⁷ pfu, delivered in a volume of 60 µL (group C). Randomisation was done with sequentially numbered, opaque, sealed envelopes, prepared by an independent statistician and opened by the study clinician at enrolment.
Volunteers and laboratory staff were blinded to intervention assignment. Following vaccination and safety reviews at 30 and 60 min, all volunteers were followed up for a period of 6 months, with clinic visits at days 2, 7, 14, 28, 84 and 168. Volunteers completed diary cards for the recording of adverse events (AEs) for 7 days post-vaccination. Symptoms were reviewed at each clinic visit, and vaccination-site observations (redness, swelling) and vital signs (blood pressure, heart rate, oral or tympanic temperature) were recorded. Safety bloods (full blood count, urea and electrolytes, liver enzymes) were collected on D7 and D84 post-vaccination. Solicited AEs (local injection site: pain, redness, swelling, warmth, itch, scaling; systemic: documented fever, feverishness, malaise, arthralgia, headache, myalgia, nausea/vomiting, fatigue) and unsolicited AEs were recorded in line listings for later analysis. Assignment of a causal relationship for AEs was conducted according to predefined criteria specified in the protocol. Blood for immunological assessment was taken at all follow-up visits, and peripheral blood mononuclear cells (PBMC) and serum were isolated and cryopreserved.

Ex vivo IFN-γ Enzyme-Linked ImmunoSpot (ELISpot) assay

ELISpot assays were performed on freshly isolated PBMC from all volunteers at screening or on the day of vaccination and on days 7, 14, 28, 84 and 168 post-vaccination, as previously described [13]. A single pool of 66 Ag85A peptides (Peptide Protein Research (PPR), UK), IMX313 peptides and C4bp (IMAXIO, France) were all used at a final concentration of 2 µg/ml. Purified protein derivative (PPD) (Statens Serum Institute, Denmark) was used at 20 µg/ml.

Whole Blood (WB) Intracellular Cytokine Staining (ICS)

Ag85A-specific intracellular cytokines were measured in WB samples as previously described [14]. Blood samples were stimulated with Ag85A peptides or SEB (Sigma-Aldrich), or left without stimulation as a negative control. αCD28 and αCD49d (BD) were used as co-stimulatory antibodies, and samples were incubated at 37 °C / 5% CO2 for 6 h. Following this, Brefeldin A (Sigma-Aldrich) was added before a further 6 h incubation. Samples were then treated with 2 mM EDTA (GIBCO). Red blood cells were lysed using FACS Lysing solution (BD), and samples were frozen for batched ICS analysis.

Antibody Enzyme-Linked Immunosorbent Assay (ELISA)

Levels of immunoglobulin G (IgG) were measured in serum samples collected on days 0, 14 and 28, as previously described [15]. Insert-specific IgG responses were measured to recombinant Ag85A (Lionex, Germany). Anti-vector IgG responses were measured to wild-type MVA (Vector Core Facility, Jenner Institute, Oxford), IMX313 (IMAXIO, France) and hC4bp proteins (provided by Anna Blom, Sweden). Results are presented as fold change in antigen-specific IgG response (optical density, OD) from the value at D0. IgG subclass levels to Ag85A or the IMX313 antigen were determined in the sera by an indirect plate ELISA. The wells of ELISA plates (Nunc) were sensitised with Ag85A, IMX313 and hC4bp proteins (100 µL/well in carbonate buffer, pH 9.6, at 4 °C overnight), followed by blocking with 10% bovine foetal serum (non-USA origin, sterile-filtered, Sigma) in PBS (pH 7.4) for 2 h at room temperature. Plates were washed twice with PBS containing 0.05% Tween 20 (PBS/T), followed by the addition of 100 µL/well of optimally diluted sera (1/50) in duplicate wells. Plates were incubated at 37 °C for 1 h, then washed three times with PBS/T.
After washing, 100 µL of optimally diluted (1/1000) anti-human IgG peroxidase conjugate (anti-human IgG1, anti-human IgG2, anti-human IgG3, anti-human IgG4; Fisher Scientific) in FBS-PBS was added to each well. Plates were incubated for 1 h at 37 °C, then washed five times with PBS/T before the addition of 100 µL per well of TMB HRP substrate for ELISA (UP664780, Interchim). The reaction was stopped after 20 min with 2 N H2SO4, and OD values were read at 405 nm. Results are presented as fold change in OD from the value at day 0. All serum samples from each volunteer were run on the same ELISA plate to control for assay variability. A pool of Ag85A-positive sera was included on all plates.

Statistical analysis

All volunteers were randomised and vaccinated according to protocol, and so our analyses were per protocol. The primary study outcome was safety, as assessed by the frequency and severity of vaccine-related local and systemic AEs. Safety data were summarised by frequency and severity of AEs using descriptive statistics. Statistical analyses to compare the standard-dose groups were performed using GraphPad Prism; the Mann-Whitney U-test was used to determine differences between groups. The secondary study outcome was immunogenicity. ELISpot, WB ICS and antibody-response statistical analyses were performed using GraphPad Prism. The Mann-Whitney U-test and unpaired t-test were used to determine differences between groups, and the Wilcoxon matched-pairs signed-rank test was used to detect differences between time points in the same group. Area under the curve (AUC) was used to examine overall responses during the follow-up period.

Results

Between July 17, 2013 and July 17, 2014, 30 of the 42 volunteers screened for eligibility were enrolled in the trial (Fig. 1). The baseline demographics were similar between groups (Table 1).

Vaccine safety

Group A (the low-dose MVA85A-IMX313 safety group) proceeded to completion without safety concerns. The numbers of AEs considered at least possibly related to vaccination were broadly similar between groups, with medians of 10 in group A (low-dose MVA85A-IMX313), 8.5 in group B (MVA85A-IMX313) and 9 in group C (MVA85A). There was no statistical difference between groups B and C (p = 0.5024).
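The group comparisons just quoted, and the nonparametric methods described above, map directly onto standard library calls. The following is a hedged Python sketch using synthetic placeholder arrays; they stand in for per-volunteer AE counts and ELISpot time courses and are not the trial's data.

```python
# Sketch of the statistical comparisons described in the Methods,
# using synthetic placeholder data (not the trial's measurements).
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(1)
group_b = rng.poisson(9, size=12)  # placeholder per-volunteer AE counts, group B
group_c = rng.poisson(9, size=12)  # placeholder per-volunteer AE counts, group C

# Between-group comparison (two-sided Mann-Whitney U test).
u, p = mannwhitneyu(group_b, group_c, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.4f}")

# Within-group comparison of two time points (Wilcoxon signed-rank test).
d0_resp = rng.gamma(2.0, 50.0, size=12)            # placeholder baseline responses
d7_resp = d0_resp + rng.gamma(2.0, 100.0, size=12)  # placeholder D7 responses
w, p = wilcoxon(d0_resp, d7_resp)
print(f"Wilcoxon W={w:.1f}, p={p:.4f}")

# Overall response over follow-up, summarized as area under the time course.
days = np.array([0, 7, 14, 28, 84, 168])
course = rng.gamma(2.0, 100.0, size=(12, days.size))  # per-volunteer time courses
auc = np.trapz(course, days, axis=1)                  # AUC per volunteer
print(f"median AUC = {np.median(auc):.0f}")
```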
A group C (MVA85A) volunteer had a mildly raised alanine transferase at screening and D7 post-vaccination, which was moderately elevated at D28, mildly elevated at D84 and normalised by D168 post-vaccination. The volunteer was asymptomatic and the AE was considered possibly related to vaccination. There was one unrelated serious AE reported in the study in a group B (MVA85A-IMX313) volunteer who was admitted overnight to hospital for investigation of chest pain 157 days after vaccination with MVA85A-IMX313. There were no other serious AEs. b One group B (MVA85A-IMX313) volunteer had 2 mild 'other' AEs: they reported cold symptoms from D6-10 and an 'upset tummy' on D1. c 2 group C (MVA85A) volunteers had mild 'other' AEs. One volunteer reported ipsilateral axillary tenderness from D1 to D11, another volunteer reported lightheadedness on the day of vaccination and ipsilateral axillary tenderness on D4-6. d 3 group C (MVA85A) volunteers had moderate 'other' AEs. One volunteer had a mildly raised alanine transferase at screening and D7 post-vaccination, which was moderately elevated at D28, mildly elevated at D84 and normalised by D168 postvaccination. Another volunteer reported cold/flu symptoms from D1 to D6. Another volunteer reported ipsilateral axillary tenderness from D2 to D17. Vaccination of BCG-primed UK adults with MVA85A-IMX313 or MVA85A, both at 5 × 10 7 pfu, induced a significant increase in Ag85A-specific IFN-responses that was durable up to 6 months following vaccination. These responses peaked at D7 postvaccination with a median IFN-response of 515 SFC/10 6 PBMC in the MVA85A-IMX313 group compared to 1107.5 SFC/10 6 PBMC in the MVA85A group (Fig. 3A). There were no significant differences between the Ag85A ELISpot responses in the two groups B (MVA85A-IMX313) and C (MVA85A) at any time point postvaccination, or in the overall Ag85A-specific IFN-response (AUC, p = 0.5059). There was a significant increase in PPD-specific IFN-ELISpot responses in both groups at D7 in comparison to baseline with a median IFN-␥ response of 542.5 SFC/10 6 PBMC in group B (MVA85A-IMX313) and 780 SFC/10 6 PBMC in group C (MVA85A) (Fig. 3B). The PPD response was not significantly different between the MVA85A-IMX313 and MVA85A groups (AUC, p = 0.5059). ELISpot IFN-responses to MVA CD4+ T cell epitopes peaked at D14 post-vaccination, with median responses of 19 SFC/10 6 PBMC in group B (MVA85A-IMX313) and 31 SFC/10 6 PBMC in group C (MVA85A) (Fig. 3C). ELISpot responses to MVA CD8+ T cell epitopes were significantly induced in both groups and these responses also peaked at D14 with a median response of 117 SFC/10 6 PBMC in group B (MVA85A-IMX313) and 282 SFC/10 6 PBMC in group C (MVA85A), remaining significantly increased until D28. There were no significant differences in responses to MVA epitopes between the two study groups (AUC, p = 0.2721 and p = 0.1585 for MVA CD4+ T cells epitopes and MVA CD8+ T cells epitopes respectively) (Fig. 3D). Total whole blood intracellular cytokine response Intracellular Ag85A-specific IFN-, TNF-˛, IL-2 and IL-17 were examined in stimulated whole blood from volunteers in the two study groups B (MVA85A-IMX313) and C (MVA85A) at baseline, D7 and D168 post-vaccination. No significant differences in percentages of cells producing IFN-, TNF-˛, IL-2 and IL-17 were detected between the two groups (AUC, p = 0.4799, p = 0.5987, p = 0.5575, p = 0.3241, p = 0.6943 and p = 0.8325 for CD4+ T cells IFN-, TNF-, IL-2 and IL-17 and CD8+ T cells IFN-and TNF-˛, respectively) (Fig. 4). 
CD4+ T cell IFN-γ and IL-2 responses significantly increased in both groups at D7 and D168 (Fig. 4A and C). Group B (MVA85A-IMX313) volunteers had significantly higher percentages of CD4+ TNF-α+ T cells at D7 and D168 compared to baseline. A significant increase in percentages of CD4+ T cells producing TNF-α was detected in group C (MVA85A) at D168 (Fig. 4B). Ag85A-specific IL-17+ CD4+ T cell responses were detectable following vaccination in both study groups (Fig. 4D). CD8+ T cells producing IFN-γ and TNF-α were detected at very low percentages in both group B (median D7 responses of 0.006 and 0.005 for CD8+ IFN-γ+ and CD8+ TNF-α+, respectively) and group C (median D7 responses of 0.003 and 0.0027 for CD8+ IFN-γ+ and CD8+ TNF-α+, respectively) (data not shown). We focussed on the dominant functional phenotypes and found that polyfunctional CD4+ T cells making two or more cytokines simultaneously, whether producing IFN-γ in combination with TNF-α and IL-2 (Fig. 5A) or double positive for IFN-γ and TNF-α (Fig. 5B), were induced following MVA85A-IMX313 or MVA85A vaccination. The magnitude of these responses was not significantly different between the two study groups (AUC, p = 0.863 and p = 0.518 for CD4+ T cells simultaneously producing IFN-γ, TNF-α and IL-2, or double positive for IFN-γ and TNF-α, respectively (Mann-Whitney)) (Fig. 5). No polyfunctional CD8+ T cells were detected.

Serum IgG responses after MVA85A-IMX313 and MVA85A vaccination

Levels of serum IgG were assessed in volunteers in the MVA85A-IMX313 and MVA85A (both at 5 × 10⁷ pfu) study groups. There was an increase in serum Ag85A-specific IgG at D28 in both groups. MVA85A-IMX313 vaccination induced a median OD 405 nm fold change of 1.640 in group B volunteers and MVA85A vaccination induced a median OD 405 nm fold change of 1.227 in group C volunteers (Fig. 6A). These responses were not different between groups (p = 0.343). When individual IgG subclasses were examined, IgG1 responses to Ag85A increased at D28 post-vaccination in both groups (median OD 405 nm fold change = 2.19 and 1.83 in groups B and C, respectively). Ag85A-specific IgG2 increased in the MVA85A-IMX313 group, with a median OD 405 nm fold change of 1.355. Other Ag85A-specific IgG subclass responses were not induced (Fig. 7A). Anti-MVA IgG was detected in both groups at D14 and D28 post-vaccination. Group B median OD 405 nm fold change was 3.966 at D14 and 4.506 at D28, compared to group C median OD 405 nm fold changes of 2.347 at D14 and 2.891 at D28. No differences were detected in MVA-specific IgG responses between the two study groups (Fig. 6B). IMX313-specific IgG responses were detectable in the MVA85A-IMX313 vaccination group (B) at D14 post-vaccination and significantly increased at D28 (p = 0.0028). As expected, these responses were not detected in the MVA85A alone group (C) (Fig. 6C). When IgG subclasses were examined, the IMX313-specific IgG responses in the MVA85A-IMX313 group (B) were mainly IgG1 (Fig. 7B). There were no detectable responses to human C4bp at any time point post-vaccination (Fig. 6D).

Discussion

In this phase I clinical trial we assessed the safety and adjuvant effect of IMX313, which was delivered for the first time to humans as MVA85A-IMX313. We demonstrated that this vaccine was well tolerated. The adverse event profiles of both MVA85A-IMX313 and MVA85A were similar and acceptable. Importantly, there were no documented high temperatures in any volunteers.
The only severe systemic AEs considered possibly related to vaccination were in a group B volunteer who developed nausea, fatigue and malaise during a diarrhoeal illness that also affected her family. These AEs were included in the analysis because it was not possible to fully distinguish symptoms that were potentially related to the vaccine from symptoms arising from the volunteer's probable infectious illness. In this study we show that both MVA85A and MVA85A-IMX313 significantly induce mycobacteria-specific immune responses. At the peak time point, one week post-vaccination, both vaccines induced Ag85A-specific IFN-γ ELISpot responses that were not significantly different. This finding is in contrast with previously published data comparing the same vaccines in animal models, where MVA85A-IMX313 was more immunogenic than MVA85A [11]. This may be due to differences in vaccine regimen, dose and/or species. In mice and non-human primates that had not received a prior BCG immunisation, two doses of MVA85A-IMX313 were required to demonstrate a significant enhancement of immune responses over MVA85A [12]. It has previously been shown that anti-vector immunity can interfere with insert-induced responses [16]. We studied vaccine-induced cellular anti-vector immunity and demonstrated that both vaccines induced comparable anti-MVA T cell responses. Intracellular whole blood cytokine responses were comparable between the two study groups, and polyfunctional CD4+ T cells were induced by both vaccines. While there are no defined correlates of protection against TB, work on a Leishmania major pre-clinical model has demonstrated the importance for protection of the quality of vaccine-induced T cells, as measured by cytokine polyfunctionality [17,18]. A successful vaccine against TB might require the induction of potent polyfunctional T cells. In the present study we could detect vaccine-induced CD4+ T cells that simultaneously make IFN-γ, TNF-α and IL-2, as well as those double positive for IFN-γ and TNF-α. However, the role of these cells in protection against M.tb remains to be investigated. The functional quality of these vaccine-induced CD4+ T cells was not enhanced by fusion to IMX313. We detected comparably increased levels of anti-Ag85A IgG in both study groups. These responses were predominantly IgG1, and the fold-change IgG1 responses were higher in the MVA85A-IMX313 group than after MVA85A. The role of humoral immunity is increasingly recognised as important in TB vaccine development. It was previously shown that mycobacteria-specific human IgG could modulate both cellular and humoral immune responses to mycobacteria [19,20]. IgG1 was reported to be the predominant antibody isotype present in sera of TB patients [21]. Hussein et al. [22] suggested a role for IgG1 in TB by enhancing release of TNF-α in patients with active disease. It was suggested that the presence of IgG1 and IgG3 antibodies might enhance bacterial uptake and clearance of the pathogen via macrophage Fc receptors [22]. In this trial, both vaccine groups had a comparable increase in serum MVA-specific IgG, while IgG responses to IMX313 increased in the MVA85A-IMX313 group and were not detectable in the MVA85A group. The role of these antibody responses is unclear and needs to be further investigated. No antibody cross-reactivity was detected in any of the study groups to the oligomerisation domain of human C4bp, which is likely due to the limited similarity between IMX313 and human C4bp [11].
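The group comparisons reported above (fold changes in OD, AUCs of ELISpot time courses, Mann-Whitney and Wilcoxon tests) follow a standard nonparametric pattern. The sketch below illustrates that pattern with numpy and scipy; all numbers are hypothetical placeholders chosen for illustration, not trial data.

```python
# Minimal sketch of the nonparametric comparisons described in the Methods;
# all values are hypothetical placeholders, not trial data.
import numpy as np
from scipy import stats

days = np.array([0, 7, 14, 28, 84, 168])  # follow-up time points

# Hypothetical per-volunteer ELISpot time courses (SFC/10^6 PBMC):
# rows = volunteers, columns = days.
group_b = np.array([[45, 515, 320, 210, 150, 120],
                    [30, 480, 300, 190, 140, 110],
                    [55, 600, 350, 240, 160, 130]])
group_c = np.array([[40, 1100, 650, 400, 250, 180],
                    [35, 1050, 620, 380, 230, 170],
                    [50, 1200, 700, 420, 270, 200]])

# Overall response per volunteer: area under the time-response curve.
auc_b = np.trapz(group_b, days, axis=1)
auc_c = np.trapz(group_c, days, axis=1)

# Between-group comparison: Mann-Whitney U-test on the AUCs.
u_stat, p_between = stats.mannwhitneyu(auc_b, auc_c, alternative="two-sided")

# Within-group comparison (baseline vs D7 peak): Wilcoxon matched-pairs test.
w_stat, p_within = stats.wilcoxon(group_b[:, 0], group_b[:, 1])

print(f"AUC Mann-Whitney p = {p_between:.4f}; Wilcoxon D0 vs D7 p = {p_within:.4f}")
```

In practice such an analysis would be run per antigen and per cytokine on the full set of volunteers; the sketch only shows the shape of the calculation.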
Following the failure of MVA85A to enhance efficacy in BCG-vaccinated South African infants [10], there has been a refocussing of efforts to develop a diverse range of potent candidate TB vaccines. These approaches include technologies to enhance immunogenicity, novel antigen delivery systems that induce different phenotypes of T cells, novel routes of immunisation and the assessment of novel candidate antigens. It is likely that multiple approaches will be required.

Conclusion

Given the encouraging safety data in this study, future research optimising molecular fusion proteins of this kind for further evaluation in recombinant viral vectors is warranted. This research may have the potential to accelerate not only TB vaccine development, but also that of vaccines against other pathogens of global importance, including HIV, influenza, Staphylococcus aureus and malaria.
Giant negative magnetoresistance in Ni(quinoline-8-selenoate)2

The magnetic, structural, conductivity and magnetoresistance properties of [Ni(quinoline-8-selenoate)2] ([Ni(qs)2]) have been studied. Despite the insolubility of the material necessitating its study as a powdered sample, a remarkably high conductivity has been measured. The conductivity is an order of magnitude greater than that of the thin-film-processable thiol analogue previously reported and has been interpreted through the same space-charge-limited conduction mechanism, with charges injected from the electrodes. The introduction of selenium results in a material with conductivity approaching metallic levels due to the enhanced interaction between adjacent molecules. Additionally, under an applied magnetic field, the material displays a negative magnetoresistance effect above 35% at 2 K. The effect can still be observed at 200 K and is interpreted in terms of a double-exchange mechanism.

Introduction

Molecular electronics, encompassing the use of small molecules or polymers as conductors and semiconductors, has seen an incredible expansion in research, both academically and industrially, over a relatively short period of time. While much of the early study has focussed on large-area devices, photovoltaics and light-emitting devices for example, and on low-cost transistors, there is great scope for incorporating the appealing properties of molecular materials into novel applications for electronic materials. 1,2,3 Indeed, as the field has matured, the development of new devices utilising the intrinsic properties that molecules can offer is an exciting objective. Coinciding with these developments, the field of spintronics, the utilisation and manipulation of electronic spin in order to carry information, has come into prominence. 6,7,8 Beginning with the discovery of giant magnetoresistance (GMR) and with development throughout the 1980s, it now underpins the magnetic data storage industry, giving rise to magnetic data reading and hybrid logic-storage devices. 9,10 Investigation into electronic materials for spintronics and related fields has yielded devices showing magnetoresistance, 11 switching 12 and memory effects, 13 is aimed towards quantum information processing, 14 and spans inorganic, organic and single-molecule materials. It is therefore somewhat natural that the two fields should combine into the field of molecular spintronics. 6,15,16 Molecular spintronics is still in the early stages of its development. Exploitation of spin in molecular systems demonstrates remarkable potential for merging the higher functionality of spintronics with the molecular design and processing advantages of molecular materials. The most common method of incorporating molecular materials utilises non-magnetic organic spacers between ferromagnetic materials in a spin valve. 11,15,16 This approach is, however, hampered by difficult interface engineering. 17,18,19 Magnetoresistance has also been observed in organic diode arrangements employing a thin film of molecular semiconductor, although the interpretation is still controversial and the sign of the effect changes with experimental conditions. 20 A particularly appealing alternative is utilising the intrinsic GMR possible in a film of paramagnetic molecular materials.
This approach has previously been used to generate negative MR of up to 95% and relies upon a well-understood double-exchange mechanism, but has so far been limited to very few classes of molecule. 21 The dicyano(phthalocyaninato)iron salt [TPP][Fe(Pc)(CN)2]2 was the first material of its type to be examined for the relationship between molecular magnetism and charge transport. 23,24 A controllable giant negative magnetoresistance (GNMR) of up to 95% was achieved, using magnetic fields up to 15 Tesla. The benzo-TTF-based molecule bearing a nitronyl nitroxide radical group, ETBN, was discovered to demonstrate both conductivity and magnetism. 25 The diselena analogue (ESBN) was the first example of the coexistence of both conductivity and magnetism based upon organic spins without the presence of inorganic components.

[Ni(qt)2] is paramagnetic, with Ni(II) in a distorted octahedral geometry due to the formation of intermolecular S-N bonds orientated along the a-axis (Figure 1a). Measurements revealed a drop in electrical resistance of greater than 60% in a magnetic field. This magnetoresistive effect was still discernible up to 200 K, albeit at lower than 1%. The magnetic interactions are explained by the well-understood double-exchange mechanism. 27 In an effort to further explore and improve the intrinsic magnetoresistive effect observed in the [Ni(qt)2] material, we have now studied the selenium analogue, Ni(quinoline-8-selenoate)2, [Ni(qs)2] (Figure 1b). Sulfur and selenium have similar chemical properties; however, the larger selenium atom has more diffuse orbitals, and so it was hypothesised that replacement of S with Se would increase orbital overlap and hence intermolecular interactions, enhancing the previously observed effects.

Synthesis and Structure

Complexation of two equivalents of quinoline-8-selenoate with Ni(OAc)2·4H2O in EtOH gave [Ni(qs)2] as an insoluble dark blue powder. The elemental analysis (C, H, N) satisfied the formula, and the electron impact mass spectrum was in agreement with the [M]+ fragment with a matching isotope pattern (Figure S1). It is noted that the material is exceptionally insoluble, hence the limitations on the analytical techniques used in characterisation. Both the microanalysis and the mass spectrum (ESI1) provide evidence that the elemental composition is that of the desired product. In order to confirm the chain structure, as observed in the thiolato analogue, powder X-ray diffraction was carried out. The PXRD pattern (Figure S2) was shown to have an amorphous component in addition to a crystalline component, observed as a bulging baseline with more defined peaks protruding from it. The amorphous character may have arisen from the polymeric nature of the chain, which gives rise to an extremely insoluble material, with the rate at which the product precipitated from the reaction mixture resulting in the appearance of both amorphous and crystalline character. Repeated measurements of samples prepared under various experimental conditions yielded the same result. Figure 2(a) shows the magnetisation versus magnetic field curve measured at 2.0 K; the paramagnetism likely arises from a chain structure analogous to that of [Ni(qt)2]. The metal has a formal oxidation state of +2 with a d8 electron count and would be diamagnetic in a square-planar geometry due to the crystal-field splitting (Figure S3).
As the material is paramagnetic, this provides a clear indication that the material synthesised is not in a square-planar geometry but more likely in the desired distorted octahedral geometry, as also seen for the sulfur analogue, since the octahedral crystal-field splitting gives two unpaired electrons for a d8 configuration (Figure S3). Figure 2(b) shows the temperature dependence of the magnetic susceptibility for Ni(qs)2 measured at 1000 Oe. The inset of Figure 2(b) shows the temperature dependence of the product of magnetic susceptibility and temperature (χT) for Ni(qs)2 measured at 1000 Oe. Above 40 K the χT value becomes almost constant after subtracting the temperature-independent components obtained from the χ versus 1/T plot (Figure S4), and the value (Curie constant, 1.12) is consistent with the anticipated spin S = 1 with g = 2.12. The χT value increases with decreasing temperature below 30 K, suggesting dominant ferromagnetic interactions, reaching 3.0 at 12 K. The sudden rise and subsequent drop-off with decreasing temperature below 12 K suggests that antiferromagnetic ordering may exist between chains, or may be due to spin-orbit coupling; however, without the single-crystal structure, reliable fitting of this is not possible. Overall, however, the paramagnetic S = 1 character of the Ni(II) centres, attributed to the formation of a chain structure, is apparent from the magnetic data.

Conductivity and Magnetoresistance

Due to the insolubility of [Ni(qs)2] in all solvents, and because attempts at sublimation resulted in decomposition, the material could not be processed into a thin film, which would have been the most desirable state for measurements. Also due to the insolubility of the material, single crystals could not be grown despite exhaustive attempts, and so measurements were carried out on a powdered sample of [Ni(qs)2]. The powdered sample on an interdigitated electrode substrate was used to study the conductivity characteristics (Figure S5). As expected, the current versus voltage (I-V) characteristics of the sample showed strong non-linear behaviour (Figure 3). At low bias voltage the observed currents are initially very low, increasing rapidly above a certain threshold voltage (≈50 V at 2 K). Comparable results were measured previously for the [Ni(qt)2] analogue; however, in the new [Ni(qs)2] material the observed currents are an order of magnitude greater than those previously observed for [Ni(qt)2]. From measurement of the temperature dependence of the resistivity, an activation energy of Ea = 0.11-0.13 eV was calculated (Figure S6). This is very small and comparable to values measured for examples of single-component molecular conductors. 30,31 The small activation energy suggests very strong intermolecular interaction, with a narrow band gap between occupied and unoccupied energy levels. This is in agreement with the visible spectrum measured by diffuse reflectance spectroscopy (Figure S7), which shows a broad absorption extending beyond the visible into the near-IR. Although an increase in conductivity was expected due to the enhanced interaction caused by the introduction of selenium, it is remarkable that the conductivity is so high in a powder sample, where the effect of grain boundaries will likely be significant. A space-charge-limited conduction (SCLC) mechanism with carriers injected from the electrodes can be used to interpret the non-linear behaviour, as is the case in some other previously reported molecular examples. 28,29
It is expected that, as is the case for [Ni(qt)2], the material possesses a strong electron-donor character and that the injected carriers are holes; this is consistent with the measured electrochemical behaviour (Figure S8). Above the threshold voltage, the I-V behaviour can be modelled by a power law (I ∝ V^(m+1)) with m ranging from 4 to 18 at 200 K and 2 K, respectively, confirming that the current is dominated by a trapped-charge-limited conduction (TCLC) regime within the SCLC mechanism (Figure S9). Under these conditions, the current is governed by the bulk properties of the compound rather than by contact effects. The same devices were also used to measure the magnetoresistance effect. A constant bias was applied and the magnetic field varied between -5 and 5 T. To ensure that each current measurement was made within a suitable part of the non-linear region, an appropriate bias was selected for each temperature at which a measurement was made. The experiment was set up with the magnetic field parallel to the current direction in order to avoid Lorentz-force effects. The observed giant negative magnetoresistance effect of [Ni(qs)2] is illustrated in Figure 4. The magnetic field dependence of the resistance of the [Ni(qs)2] complex is reported as a percentage, (R - R0T)/R0T, where R and R0T are the resistances with and without an applied magnetic field, respectively. At 2 K the resistance decreases by more than 35%. This effect is still apparent (8%) at 50 K and still observable at 200 K, although it has decreased to below 0.5%. This is lower than that observed for the previous thiol analogue; however, this may be partially explained by the measurements being made on a powder sample as opposed to ordered single crystals or an evaporated polycrystalline thin film.

Calculations and Discussion

As the single-crystal structure of the molecule is unknown, the structure first had to be modelled. From the empirical observations, it is likely that [Ni(qs)2] has the same general chain structure as the thiolato analogue. Using the structure of [Ni(qt)2] as the base and substituting selenium for the sulfur atoms, the structure of a trimer was optimised. Calculations at the unrestricted B3LYP/DZVP level of theory were performed on an 8-mer of the chain, with the length chosen to approximate the spin correlation length at low temperature. The 8-mer was constructed using the geometry of the central molecule of the optimised trimer. The optimised structure reveals a relatively short distance between the selenium atoms of adjacent molecules, 3.51 Å, well within the sum of the van der Waals radii of two selenium atoms. It is likely that this interaction between the bridging atoms results in the large decrease in the solubility of the material relative to the thio analogue. The greatly enhanced conductivity of [Ni(qs)2], when compared to the previously reported [Ni(qt)2] analogue, is also attributed to this interaction. The calculated spin density is shown in Figure 6 and, as anticipated, is predominantly located around the Ni(II) centre but with some character also on the ligand. This electronic structure resembles that of the thiolato analogue and is consistent with what would be expected for a system that follows the double-exchange mechanism. As is the case for observations in nitronyl nitroxide-tetrathiafulvalene radicals, and for the thiolato analogue, unpaired spins give rise to a singly occupied molecular orbital (SOMO) of lower energy than the HOMO, as necessitated by the double-exchange mechanism.
As shown in the density of states plot (Figure S10), the contributions of the SOMOs of the Ni(II) ions are found at lower energy, and the large energy difference (>0.5 eV) between α and β spins (spin polarization) at the HOMO band is caused by the low-lying SOMOs. Hole transport via the ligand-based HOMO orbitals is facilitated by hole doping of the HOMO band at high applied electric field. The HOMO electrons on the ligand couple with the localised SOMO spins of the molecular unit, which are aligned by an applied magnetic field. This results in the alignment of the spins of the HOMO electrons between neighbouring molecular units. As favourable spin alignment is maintained upon hole transfer, this instigates hole hopping with reduced scattering. Due to their co-location on a small molecular unit, the coupling between the HOMO and SOMO spins is strong, consistent with the magnetoresistance effect persisting to high temperatures.

Conclusion

In conclusion, the magnetic, structural, conductivity and magnetoresistance properties of [Ni(quinoline-8-selenoate)2] ([Ni(qs)2]) have been studied, and the material shows a remarkably high conductivity for a single-component molecular material. As was anticipated, the introduction of selenium results in a great enhancement of the interaction between adjacent molecules and gives a pronounced improvement in conductivity over the thiolato analogue. Inevitably, this strong interaction also results in a reduction in the solubility of the material, necessitating that all measurements be carried out on a powder sample. The conductivity is an order of magnitude greater than that of the previously reported, thin-film-processable thiol analogue and has been interpreted through a space-charge-limited conduction mechanism. [Ni(qs)2] also displays an intrinsic magnetoresistance effect; under an applied magnetic field the material displays a reduction in resistance of greater than 35% at 2 K. By analogy with the thiol analogue and earlier studies on TTF derivatives, the effect may be attributed to a double-exchange mechanism and is still observable at 200 K. Adding to a family of molecular magnetoresistive materials with only very few examples, [Ni(qs)2] provides further evidence of the importance of strong intermolecular interactions for the development of functional materials.

Experimental

All chemicals were purchased from Sigma Aldrich and used without further purification. General procedure for the synthesis of [Ni(qs)2]: under an atmosphere of nitrogen, diquinolinyl diselenide (0.25 g, 0.60 mmol) was dissolved in HCl (6 M, 5 mL) and degassed ethanol (15 mL). Hypophosphorous acid (50% aq., 2 mL) was added and the mixture stirred at 70 °C for 15 minutes. The solution was allowed to cool before a saturated solution of sodium acetate (7 mL) was added, followed immediately by a solution of Ni(CH3COO)2·4H2O (0.15 g, 0.60 mmol) in degassed water (3 mL) with vigorous stirring. The mixture was stirred for 10 minutes and the precipitate collected by filtration. The solid was washed with water, ethanol, ether and DCM to yield the pure product (0.24 g).

Magnetic and optical measurements

Magnetic susceptibility measurements were performed on powder samples from 1.8 to 300 K using a Quantum Design MPMS-XL SQUID magnetometer with the MPMS MultiVu application software to process the data. The magnetic field used was 1 T.
Diamagnetic corrections were applied to the observed paramagnetic susceptibilities by using Pascal's constants. Diffuse reflectance measurements were recorded on powdered samples dispersed in BaSO4 using an integrating sphere attachment on a Jasco V-570 UV/Vis/NIR spectrophotometer.

Conductivity Measurements

Conductivity data were taken on a powdered [Ni(qs)2] sample deposited on an interdigitated Pt electrode with a line width of 2 μm and a gap of 2 μm in a 2 × 2 mm area, corresponding to a 2 μm gap and a 2000 mm long electrode, fabricated on a quartz glass substrate. The substrate with the sample was then attached to a hand-made probe, the electrodes were connected with gold wires and gold paste, and the probe was introduced into the Quantum Design MPMS-XL cryostat, with the MPMS MultiVu application software used to control the temperature from 2 to 300 K and the magnetic field from 0 to ±5 T. Current measurements under applied voltage were carried out with an Advantest R6245 source meter controlled by in-house software.

Computational

Theoretical calculations on an 8-mer of [Ni(qs)2] were carried out at the UB3LYP/TZVP level with the Gaussian 09 program package. Since the three-dimensional crystal structure of [Ni(qs)2] has not been solved, the structure of the 8-mer was prepared based on an optimised structure of a 3-mer: the structure of the central monomer in the optimised 3-mer was replicated eight times and arranged to have a chain structure similar to that of [Ni(qt)2]. The Ni-Se distance was also set to the same value as that in the optimised structure of the [Ni(qs)2] 3-mer.
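As a numerical companion to the analysis above, the short Python sketch below cross-checks the Curie constant expected for S = 1 and g = 2.12 from the standard formula C = N_A g² μB² S(S+1)/(3kB) in CGS-emu units, and illustrates how the TCLC exponent m is recovered from the slope of a log-log I-V fit. The I-V points are hypothetical placeholders, not measured data.

```python
# Numerical companion to the magnetic and conductivity analyses; the I-V
# points below are hypothetical placeholders, not measured data.
import numpy as np

# --- Curie constant check: C = N_A * g^2 * muB^2 * S(S+1) / (3 kB), CGS-emu ---
NA = 6.022e23          # Avogadro constant, mol^-1
muB = 9.274e-21        # Bohr magneton, erg/G
kB = 1.381e-16         # Boltzmann constant, erg/K
g, S = 2.12, 1.0
C = NA * g**2 * muB**2 * S * (S + 1) / (3 * kB)
print(f"Curie constant: {C:.2f} emu K/mol")   # ~1.12, matching the chiT plateau

# --- TCLC exponent: above threshold, I ~ V^(m+1), so log I vs log V is linear ---
V = np.linspace(60, 120, 8)        # hypothetical bias sweep above the ~50 V threshold
I = 1e-44 * V**19                  # hypothetical 2 K data constructed with m = 18
slope, intercept = np.polyfit(np.log(V), np.log(I), 1)
print(f"fitted exponent m + 1 = {slope:.1f}")  # recovers ~19, i.e. m ~ 18
```

The same log-log fit applied at each temperature would trace how m falls from ~18 at 2 K towards ~4 at 200 K, as quoted above.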
Sustainable Development Goals and Islamic Finance: An Integrated Approach for Islamic Financial Institutions

The challenges posed by environmental degradation and the abandonment of social rights to secure business interests have highlighted the importance of focusing on sustainable development within the global financial system, especially among citizens and policymakers. The timely declaration of the Sustainable Development Goals (SDGs) by the United Nations is appropriate in addressing environmental degradation. In fact, the SDGs have become part of the fundamental agenda and essential requirements of every business, including Islamic financial institutions. In particular, the concept of sustainable development is parallel with Islamic teachings, which promote welfare, security, and rights for the sake of the current and future generations. Furthermore, Islamic finance and the SDGs are closely associated, as the former is capable of serving a meaningful function in sustainable development to achieve the goals of implementing fair and equitable tools, promoting resource mobilization, and enabling social benefit tools. Therefore, this study highlights an important case for Islamic financial institutions by expounding on the best indicators for sustainable Islamic finance.

INTRODUCTION

The evolution in the development of certain industries has brought prosperity in the form of economic and population growth. However, the neglect of sensitivity towards social and environmental interests has become a challenge to the global economy (Daly & Farley, 2011). Human development and economic growth have caused the emergence of several debates and issues concerning the deterioration of environmental quality (Twerefoua et al., 2017). The imbalance of the planet, which continues to be challenged by human activities, contributes to deterioration and damage, especially in waste generation and the uncontrolled use of resources. Uncontrolled production leads to profit accumulation, resulting in the neglect of social interests that culminates in long working hours, underpayment, child labor, and harm to citizens' health. Pollution and the depletion of natural resources have caused enormous stress to the earth system (Schoenmaker, 2019). The earth cannot cope if humans do not stop the activities that cause pollution. The SDGs consist of all the components concerning environmental, social, and economic issues, such as sustainable production and consumption, climate change, poverty, marine conservation, food security, gender equality, and economic growth (Agarwal, 2018). The conventional concept of the SDGs is closely related to the Islamic model of sustainable development. The Islamic concept teaches every Muslim to establish harmonic and holistic interactions with other individuals and nature based on al-Qur'an and Sunnah, which consist of the aspects of aqidah, shariah, and akhlaq (Setiawati, 2020). Aqidah is tawhid, the doctrine of the oneness of Allah within all aspects of a Muslim's life, including one's attitude towards the environment. Shariah guides humankind and consists of, among others, the rules and law relating to the environment. Then, akhlaq, also known as contemporary ethics, holds that humankind ought to be good and virtuous in character, including practicing Islamic environmental ethics.
The conventional view and Islam share a similar intention to secure human needs and environmental conservation. The utilization of natural resources is permitted but should not involve unnecessary destruction. According to Marsuki (2009), the utilization of natural resources in producing life's essentials should be protected to ensure their continued sustainability. The United Nations has insisted that all governments and sectors, from micro-enterprises to multinational companies, should develop strategies for pursuing the SDGs (Jones et al., 2017). The sustainable development challenge is of no exception to any business, including the financial sector. According to Helleiner (2011) and Weber (2014), financial institutions' awareness about sustainability has increased. In fact, financial institutions have embraced the sustainability concept over the last decade, particularly in the insurance, investment, and banking industries (Risi, 2020). This awareness has greatly influenced the current economic situation besides contributing to society and sustainable development. Adoption of the green financial system has become a priority agenda for all financial sectors across the world, as regulators and practitioners have started adopting the sustainability concept in the financial system (Zhixia et al., 2018; Park & Kim, 2020). Focusing on the Islamic finance industry is seen as critical, as this sector's growth is very encouraging, with total global Assets Under Management (AUM) amounting to US$1.6 trillion to US$2.6 trillion by 2020 (Parker, 2016). Forecasts, too, indicate that the growth of Islamic finance in 2022 will reach US$3.8 trillion (ICD Thomson Reuters, 2017). Given this rapid growth and its well-established presence in the Organisation of Islamic Cooperation (OIC) region, Islamic finance offers an innovative and effective channel to mobilize capital for the implementation of the SDGs. The social and environmental elements absorbed alongside financial profit make it well suited to mobilizing resources for development. The Islamic finance industry in Malaysia is in the process of adopting the Strategy Paper on Value-based Intermediation (VBI) issued by Bank Negara Malaysia (BNM) as a guidance document for the adoption of a value-based system in Islamic financial institutions. The VBI concept is an extension of the value-based banking concept in designing a sustainable financial system, which was triggered by the United Nations Environment Programme (UNEP). The VBI was introduced by the Malaysian government to ensure that the practice of the Islamic financial industry is compatible with the concept of sustainable development. According to Aassouli et al. (2018), the role of Islamic finance in achieving sustainable development is significant, as it possesses effective risk management tools, promotes efficient use of resources by using effective instruments, and also acts as a social welfare tool. The VBI acts as a driver for profit and risk and thus has a positive impact on the Islamic finance industry (Ismail et al., 2020). In facilitating the achievement of the SDGs, the VBI is seen as an agent in fulfilling the goals of sustainable development, with great impacts on the development of the Islamic finance industry. The world has embarked on a transition to new and different models of growth and development that are more socially and environmentally sustainable. As a result, significant opportunities and challenges have been created for Islamic finance.
Appropriate business models and strategies for the sustainability of the Islamic finance industry are therefore indispensable. Islamic finance has its own specific elements that are similar to the SDGs concept. These elements include maqasid al-shariah, which is about protecting the five elements of life and promoting guidelines for happiness in this world and the hereafter. The development of a new set of Sustainable Development Goals (SDGs) by the UN was preceded by the Millennium Development Goals (MDGs), which were committed to achieving eight measurable goals by the year 2015. Most of the MDG targets were achieved, albeit with some room for improvement that needed a detailed plan. Thus, the SDGs provide the momentum needed to adapt the MDGs to new global developments in people and technology. Besides, the MDGs were seen to be ineffective for some countries, such as Nigeria (Ajiye, 2014) and South Africa (Udjo & Lalthapersad-Pillay, 2015). There is a broad acknowledgement nowadays of the need to overcome environmental challenges that involve carbon emissions, land-use change, destabilization of natural resources, and biodiversity loss. There are also signs of people living below minimum social standards due to a lack of health care, poverty, and hunger (Schoenmaker, 2019). The SDGs are regarded as a complete set of 17 goals consisting of an integrated concept with three aspects, namely economic, social, and environmental, that might fulfil all the current needs of sustainable development (Ali et al., 2018). The SDGs are more complex and more detailed than the MDGs in terms of the number of goals, indicators, and reporting. Based on Georgeson & Maslin (2019), efforts to achieve the SDGs would be inadequate if the focus were only on the framework of finance, implementation, and monitoring. Nevertheless, putting the framework into practice can contribute towards achieving the SDGs (Mawdsley et al., 2014). Strategic practice and an excellent framework could help in the implementation of the SDGs. The framework should be implemented by considering all aspects that contribute to sustainable development. The exploratory study by Mburayi & Wall (2018) found that sustainability should be embedded in the accounting and finance curriculum. According to Yatim et al. (2017), financial institutions should introduce various effective financial products and services that focus on green industries. How can finance support sustainable development? According to Schoenmaker (2019), finance can provide the resources for investment and lend to sustainable companies and projects. Besides, strategic modalities of financing are a key factor for sustainable development (Radović et al., 2018).

METHODS

The methodology adopted in this study was based on the guidelines highlighted by global organizations such as the UN on the SDGs, guidelines by local governments in promoting sustainable practices in Islamic finance, and a literature review to identify the common ground for each component of sustainable Islamic finance and to define the best approach for Islamic financial institutions to enhance their effectiveness in fulfilling the goals of sustainable development. In addition, this study also refers to the Quran and Hadith in order to link maqasid al-shariah, Islamic finance and the Sustainable Development Goals and to present the pillars and indicators for measurement.

RESULTS AND DISCUSSION

Strategic finance could lead to the success of sustainable development.
Otherwise, ineffective planning for sustainable finance could indirectly lead to environmental, social, and economic degradation. The growing concern among practitioners, regulators, and financial stakeholders about sustainability has led to the worldwide adoption of concepts such as the Triple Bottom Line (People-Planet-Prosperity); Environmental, Social, and Governance (ESG) reporting; and the Global Reporting Initiative (GRI) (Ng, 2018). Even though many guidelines and frameworks have been developed to achieve sustainability, the practices of some countries are still found unsatisfactory. In Malaysia, the unsatisfactory result is surprising, as banking institutions in Malaysia have been offering green financing, known as the Green Technology Financing Scheme (GTFS), since 2010. It is an effort by the Malaysian government to focus on green technology to strengthen and improve its utilization and supply. The GTFS includes 28 active financial institutions, which offer green financing to eligible projects. Besides, the latest green product offered in Malaysia was the green sukuk, launched in June 2017. Darus et al. (2013) also reported that environmental disclosure by Islamic financial institutions in Malaysia was still low. It is a critical issue when the financial institutions practicing and offering green products do not adequately disclose their sustainable finance practices in their reporting. As claimed by Şener et al. (2016), the most important stakeholders of the corporate disclosure of sustainability reports are the shareholders and the government, owing to their power and urgency. Thus, the government has a strong role in introducing guidelines and policies on sustainable finance. Financial institutions need to understand that certain incidents concerning environmental issues are caused by the clients to whom they lend money for project financing. These incidents and issues may have direct impacts on the financial situation of the financial institutions, such as non-performing loans due to the inability of clients to comply with the financing agreement, increased risk of litigation caused by a lack of appropriate disclosure, and a higher cost of capital due to the low quality of the loan book. To address the issues concerning sustainability, all sectors, agencies, and organizations should be more rigorous in their research processes in order to obtain high-quality disclosure and verification practices (Rosati & Faria, 2019). The SDG agenda can only be achieved if all businesses, societies, philanthropies, and academia participate in partnerships with the national and sub-national levels of government (Ismail et al., 2018). The SDGs that have been established to protect human life in all aspects are closely related to the concept of maqasid al-shariah. The origin of shariah (Islamic law) comes from the sources of al-Quran and Sunnah, and maqasid is the objective of shariah. Imam Al-Ghazali pronounced that maqasid al-shariah concerns the necessities of human beings through the preservation of five elements from any harm: the protection of faith or religion (din), life (nafs), lineage (nasl), intellect ('aql), and property (mal). This is aligned with the concepts of the SDGs, which are to preserve lives, descendants, and wealth, as well as to ensure that future generations can have a better life and be sustained longer (Abdullah, 2018).
Human beings are the only creations of God who are appointed as vicegerents in this world, as administrators and environmental managers, as stated in al-Quran verse 2:30. According to Ismail et al. (2018), human beings are entrusted with the responsibility of guardians, trustees, and supervisors concerning the use of resources in a sustainable way and the protection of their own physical property. As good Muslims, the concept of aqidah or tawhid should be applied to establish harmonic interactions between people and nature. According to Ismail (2012), the guiding principles of good deeds and a faithful society will provide good examples to other societies and individuals. Aqidah covers all aspects of a Muslim's life, including one's attitude towards the environment. Then, akhlaq also plays an important role for Muslims in being good and virtuous in their character, especially in practicing Islamic environmental ethics. As God's great creations, humans should be the best catalysts for a management system on this earth that is aligned with the shariah and that provides guidance on rules concerning the environment according to the main sources, the al-Quran and Sunnah. One of the stories concerning society and the environment is the set of instructions given by the caliph Abu Bakar to his armies: not to harm women, children, and the infirm; not to destroy crops or cut down trees; and not to harm animals. There are two lessons from these instructions: justice and recognizing the value of nature. These practices should not be limited to war only, but are applicable to our daily lives. Abu Bakar was a rightly guided caliph who incorporated instructions from al-Quran and rules from the Prophet's examples. The Prophet is a role model for a Muslim's life. Thus, all his examples are to be emulated, including those forming the basis of Islamic law, known as shariah. Shariah has become sophisticated due to the growth of the Muslim population around the world, which leads to complex requirements from governments (Fatoni et al., 2019). Apart from al-Quran and Sunnah, there are additional elements to be referred to, which are Ijma' (the consensus of scholarly jurists) and Qiyas (the process of analogical reasoning). The development of Islamic law (fiqh) is based on this process, with the addition of two other instruments, namely Ijtihad (interpretation in context) and Uruf (custom and practice). This framework is very important and useful in serving Muslims by expanding the shariah in the formulation and setting of standards of Islamic law, such as environmental law. The law concerning the relationship among all creations and human life has been specified since the time of the Prophet. Al-Quran states this relationship in verse 4:134: 'Whosoever seeks the reward of this world let him know that the reward of this world and (that of) the Hereafter is with Allah. Allah is All-Hearing, All-Seeing'. Additionally, the Quran tells us to be fair in all aspects of our life, including our natural surroundings (15:85): 'We have not created the heavens and the earth and everything in between except for a purpose. And the Hour is certain to come, so forgive graciously'. This Quranic verse illustrates that all the creations in this world are created for a purpose. The ultimate objective is to be kind to all creations, just in all matters, and to enjoin the right and forbid the wrong.
According to Alziyadat & Ahmed (2019), the shariah principles, as agreed by jurists, have evolved based on the concept of prioritizing the interest of the community over that of individuals, relieving hardships rather than promoting benefits, prioritizing bigger benefits and minimizing losses, and avoiding greater harm. Based on the Quranic view, everything on earth is for humankind and is a gift for them to manage. Humans should strive to do everything in a good manner to maintain harmony among all the creations in this world. Society and the economy are part of the environment; they are not separate entities when treating the sustainability concept (Sarkawi et al., 2016). Islam has an integrated approach to lead citizens' lifestyles towards sustainable development, and as a trustee of the earth, man should execute his task socially and economically. The concept of sustainable development should be embedded in rules and institutions as a step for the Muslim community to confront any social and environmental challenges. Islamic finance came into existence after the establishment of Islamic banking in the 1970s to offer interest-free commercial products. It has developed gradually to meet the needs of the global financial market worldwide in both Muslim and non-Muslim countries (Myers & Hassanzadeh, 2013). The Islamic economic system derives from the shariah principles that require Muslims to undertake economic activities, such as investments and entrepreneurial activities, that do not contradict Islamic values. The Islamic economy plays a central and critical role in the global ethical economy. A recent trend in the Islamic economy is the search for ethical products and services that are perfectly aligned with shariah-based principles and ethics-based Islamic finance. The resilient growth of Islamic finance has led to the need for specific care in the aspects of social justice and environmental balance, so that this growth does not harm the earth's system or the people inhabiting it. According to Biancone & Radwan (2018), Islamic finance has several potentials in promoting sustainable economic development, such as widening access to microfinance, expanding the reach of takaful, and financing infrastructure projects. Institutions that solely focus on religious activities, for example zakat and waqf institutions, play an essential role as socio-economic institutions in achieving the SDGs (Franzoni & Allali, 2018). Zakat (a fundamental pillar of Islam) is one of the obligations of Muslims in terms of income distribution that focuses on helping the poor. Muslims are required to distribute their wealth and income to the poor in the form of zakat, and the system was very effective in early Islamic history as a distributive scheme for taking care of the poor (Ahmed et al., 2015). Waqf comprises donations from individuals or organizations for perpetual societal benefits, consisting of movable or immovable assets, as well as cash, which can be used for welfare purposes such as education, health, and environmental preservation programmes (Ismail et al., 2018). According to Ismail & Shaikh (2017), project financing, especially for development infrastructure in achieving the SDGs such as SDG 6 (clean water and sanitation), SDG 7 (sustainable and affordable energy), SDG 9 (building resilient infrastructure), and SDG 11 (sustainable cities and communities), can use innovative Islamic financial instruments (such as sukuk) to achieve the SDG targets.
The State of the Islamic Economy Report 2018/19 has outlined eleven priority SDG goals for Islamic finance, as illustrated in Table 1. The enablers recommended by economic participants for the development of Islamic finance in achieving the SDGs are that consumers should demand social impact and ethical financial services, investors should be highly concerned about high-impact investments in companies, the industry should expand ethical and social finance products and services, and governments need to facilitate ethical finance standards and regulations besides providing incentives to the industry (Ibrahim & Ebrahim, 2018). Sadiq & Mushtaq (2015) outlined five main aspects of the role of Islamic financial institutions in sustainable development, which are the financial sector's stability and resilience, inclusive finance, developing infrastructure, resolving social and environmental issues, and reducing vulnerability and mitigating risk. These five major aspects are very important in the implementation of the SDGs in the Islamic finance industry. The latest guideline and strategy issued by Bank Negara Malaysia (BNM) in 2018, known as Value-Based Intermediation (VBI), was purposely designed to strengthen the ecosystem of sustainable finance by expanding the role of Islamic banking institutions (Arshad et al., 2018). VBI is universally applicable across financial sectors, especially the Islamic banking industry. The guideline focuses on practices that generate community, environmental, and social impacts. VBI is quite similar to Value-Based Banking (VBB), which was introduced by the UN in 2015 and focuses on a sustainable economy. However, VBI's focus is on the intended outcomes of shariah, including the encouragement to generate, accumulate, and distribute wealth in a just manner. The guidelines provided by the government are mostly based on combining the concepts of the SDGs, Islamic finance, and maqasid al-shariah. As shown in Figure 1, maqasid al-shariah is seen as focusing on protecting the fundamentals of human beings' necessities and serving as a guide for efficient resource utilization, which is similar to the concept of the SDGs (Abdullah, 2018). Sustainable Islamic finance is about combining the concepts of the SDGs and maqasid al-shariah with better guidelines provided by international organizations and local governments. Thus, Islamic financial institutions play an important role by focusing on both religious and socio-economic activities to achieve the SDGs highlighted by the global community (Franzoni & Allali, 2018).

Figure 1. SDGs, Islamic Finance and Maqasid al-Shariah

The SDGs came into force on 1 January 2016, and approximately 30 indicators within a comprehensive proposal were submitted to the United Nations Statistical Commission as a starting point for its framework. Many organizations and governments reflect the proposal at the national level by working on data analysis and setting indicators for monitoring progress, ensuring that the SDGs are used as practical tools (Zhou & Moinuddin, 2016). The SDGs Index is an indicator for organizations, serving as a guideline towards fulfilling the need for sustainable development in all sectors. The recent global financial crisis has raised debates regarding the sustainability of the financial system. An alternative system should serve people around the world for a long-term period with prosperity and provide added value to the real economy.
The relatively rapid growth of the Islamic finance sector can create an opportunity for the Islamic financial industry globally to bring innovative solutions for the smooth implementation of the SDGs agenda. The Islamic financial system is recognized by researchers as having the potential to mobilize resources, thus making significant contributions to sustainable development (Myers & Hassanzadeh, 2013). Additionally, Nor et al. (2016) found the existence of a positive demand for the social banking model among clients of Islamic banks in Malaysia. Islam has an integrated approach to bring citizens' lifestyles towards sustainable development, especially in serving the needs of the global community, including the middle class and the poor. A framework should be developed as a guideline for Islamic financial institutions in fulfilling the need for sustainable development in their operations and business practices. The establishment of VBI in facilitating the SDGs is seen as an initial step for Islamic financial institutions in contributing towards achieving sustainable development, as well as serving as a driver of profit and risk (Abdullah, 2018). The SDGs index for Islamic finance is slightly different, as the specificity of the SDG indicators for Islamic financial institutions is based on the guidelines of the UN and the government of Malaysia, and on Jan et al. (2018). Schoenmaker (2019) stated that economic resiliency consists of four SDGs, which are SDG 8 (Decent Work and Economic Growth), SDG 9 (Industry, Innovation, and Infrastructure), SDG 10 (Reduced Inequality), and SDG 12 (Responsible Consumption and Production). These four indicators are allocated to this category to align Islamic finance and VBI with the SDGs. All the parameters included in this framework are from the United Nations SDG Industry Matrix for Financial Services, combined with the elements of VBI. The elements added to SDG 8 are that organizations should commit to Islamic microfinance (Rashid et al., 2018) and expand savings, credit, and takaful for small business owners, which includes the adoption of Islamic instruments such as mudaraba, musharaka, and murabaha, among others.

SDG Indicators for the Measurement of Each Item (Prosperity: Economic Resiliency)

SDG 8: Decent Work and Economic Growth
- Offer savings, credit and takaful for microfinance and small business owners.
- Leverage new business models and technologies (impact investment, crowdfunding).
- Report on minimum wages paid.
- Expand finance for young people.

SDG 9: Industry, Innovation and Infrastructure
- Invest in low-carbon infrastructure (green infrastructure projects).
- Offer long-term financing for public-private partnerships in infrastructure.

SDG 10: Reduced Inequality
- Leverage new technologies such as mobile money payment services, big data or cloud computing.
- Pay staff a living wage.

SDG 12: Responsible Consumption and Production
- State relevance to sustainability (mission, vision).
- Establish a sustainability department.
- Subscribe to relevant local or international standards or best practices (ESG, VBI).
- Partner with green organizations for sustainable production.
- Offer green sukuk for funds, loans and deposits.
- Donate profits to environmental preservation activities.

Source: United Nations Global Compact (2016); Bank Negara Malaysia (2018)

SDG 9 promotes sustainable industrialization, which has an indirect economic impact, and the initiatives considered in this section relate to investments in low-carbon infrastructure.
SDG 10 is about reducing inequality, and the point highlighted is leveraging new technology that creates efficient and effective services for a new market, which could extend financial inclusion, pay living wages to staff, and ensure excellent remittance services. SDG 12 concentrates on sustainable patterns of production and consumption, and the best place to absorb the VBI principles of attaining benefit and preventing harm is green investment. The activities should be free from israf (unnecessary spending), itraf (self-indulgence), and tabdhir (spending on unlawful activities) (Laldin & Furqani, 2013). The entire Islamic finance landscape is a playground for SDG initiatives that promote benefits to mankind. VBI serves as a common vision in the Islamic finance industry and ensures that the delivery of applications and services related to these concepts is in line with shariah principles. The requirements highlighted in VBI were taken from the main parts of organizations, which are strategic planning and the company's appearance. These indicators are developed to: a) provide effective management and strategic guidance to the organization, and b) clarify the VBI and sustainability practices at the early stages of the organization. In presenting the best practices for responsible consumption, the goals are to reduce energy consumption by employees and suppliers, offer green products such as green sukuk, create partnerships with green organizations, form innovative agreements, donate to environmental activities, and publish an integrated report.

SDG 4: Quality Education
- Collaborate with development finance institutions and governments for education project financing.
- Offer sukuk for education by providing access to personal savings and education loans.
- Provide training for small-medium enterprises in accounting, business management and customer services.
- Provide student internships for low-income (B40) students.

SDG 5: Gender Equality
- Provide savings, credit and takaful products for women to grow their businesses.
- Provide Ar-Rahn to increase lending access for women.
- Provide women with shares and roles on company boards.
- Provide educational outreach programmes that target teenage girls.
- Provide educational opportunities and talent programmes supporting women to reach management levels.

SDG 7: Affordable and Clean Energy
- Provide financial expertise in energy pricing models for energy efficiency.
- Provide capital and invest (green sukuk) in projects related to renewable energy.
- Provide innovative credit for small-medium enterprises to encourage enhanced energy efficiency.

Source: United Nations Global Compact (2016); Bank Negara Malaysia (2018)

Six sustainable development goals were included in the section of People: Social Empowerment, which are SDG 1, SDG 2, SDG 3, SDG 4, SDG 5, and SDG 7. SDG 1 (No Poverty) and SDG 2 (Zero Hunger) are similar to each other.
The corresponding elements of Islamic finance, like these two SDGs, are about ending poverty and hunger: an organization should offer products that are free of fees or interest, such as Qard al-Hasan (Widiyanto et al., 2011); offer direct microcredit, micro-savings, micro-takaful, and remittance services across various geographical and socioeconomic contexts; offer financial education to communities; pay zakat to asnaf (Nadzri et al., 2012; Embong et al., 2013); give sadaqah and waqf; initiate programmes to deliver food assistance to the needy; establish responsible business policies that do not violate human and land rights; and collaborate with governments and farm aggregators to avoid discrimination against women, disabled people, people of particular races and ethnicities, and smallholder farmers. These initiatives fulfil the VBI principle of being fair and transparent to everyone. SDG 3 was established to help people of all ages lead healthy lives. Leading companies across the world have highlighted certain approaches to fulfilling this goal. Most of them suggest providing and raising capital for investment in healthcare institutions, supporting health promotion activities, conducting programmes with communities to promote the importance of maintaining health, and ensuring healthy and safe work environments for employees. SDG 4 stresses the importance of quality education. Collaboration and knowledge transfer to everyone is the priority agenda, in order to maximize benefits across the organization and for people outside it. Organizations should collaborate with governments and across the industry to finance educational projects and explore best practices in enhancing financial literacy. Offering personal savings and loan products to finance the cost of education, providing business management training to small and medium enterprises (SMEs), mentoring youth, and providing student internships can generate a positive impact in advancing knowledge and financial literacy. Gender equality has always been a pressing issue in all nations. SDG 5 was introduced by the UN to alleviate and eliminate gender inequality in all its forms. The lesson taken from practitioners around the world is to empower women (Islamic Finance News, 2015). Organizations should offer facilities to women by providing savings, credit, and takaful products; increase women's roles at all levels of the organization, including participation on company boards, as well as enabling women in emerging markets (Kakabadse et al., 2015); provide educational programmes and opportunities to women and teenage girls; and expand lending to women by providing Ar-Rahn products (Azman et al., 2015). SDG 7 presents a goal focused on affordable and clean energy; it seeks to ensure access to sustainable and modern energy. VBI emphasizes preventing harm, which applies to financing and investment activities. Relying on that principle, organizations should raise capital and invest (green sukuk) (Muhmad & Muhmad, 2018) in projects related to renewable energy, apply financial expertise to energy pricing models, and design innovative credit and efficient equipment for SMEs by installing sustainable energy that helps lessen the carbon footprint (Fresner et al., 2017). The only sustainable development goal that falls into this last category is SDG 13 (Climate Action). The elements that organizations should employ in their practices are precautionary steps to lessen the risks of climate change.
One of the strategies is to invest in and provide financing plans for climate risk mitigation (Ackerman, 2009), including green sukuk as well as debt and equity instruments. Additionally, organizations should provide takaful schemes that cover natural catastrophes and conduct programmes to improve community resilience to climate change (Crnčević & Lovren, 2018). CONCLUSION The world needs a paradigm shift in the practice of sustainable development for the sake of the welfare of the earth and its inhabitants. From the Islamic perspective, sustainable development emphasizes humankind's responsibility in the utilization, allocation, and preservation of natural resources. Social justice, poverty eradication, and protection of the planet's inhabitants and ecosystems are crucial practices in the ethical Islamic economy. The growth of Islamic finance worldwide has created a need for specific care regarding social justice and environmental balance, so that this growth does not harm the earth system and its inhabitants. Strategic finance can lead to the success of sustainable development. Thus, Islamic finance is seen to exert a large influence on the fulfilment of the SDGs, as its principles are in tune with them. Social-impact services and ethical financing have become a primary agenda item for every Islamic financial institution. These are meant to provide distinctive offerings to consumers by considering all the impacts and risks involved, in order to avoid detrimental effects on the environment. Value-Based Intermediation (VBI), designed by Bank Negara Malaysia in 2018, is one of the guidelines for strengthening Islamic banking institutions in practicing responsible investment and sustainable finance, which can create better and more sustainable economic value in the long run. VBI serves as a guideline to assist the implementation of the SDGs, and its establishment has encouraged practitioners to work towards attaining them. Thus, this study has highlighted the best indicators that should feature in sustainable finance and that are parallel with Islamic principles.
Development and validation of spectroscopic simultaneous equation method for simultaneous estimation of betamethasone and luliconazole in synthetic mixture A simple, specific, accurate, and precise spectrophotometric method was developed and validated for the simultaneous estimation of Betamethasone and Luliconazole in a synthetic mixture. The wavelength of estimation was 243.20 nm for Betamethasone and 225.00 nm for Luliconazole. Beer's law is obeyed in the concentration ranges of 10-50 μg/ml for Luliconazole and 20-100 μg/ml for Betamethasone, with correlation coefficients of 0.999 for both. The % recovery for Betamethasone and Luliconazole was found to be 100.49% and 100.68%, respectively. Intraday precision was found to be 0.10-0.28% RSD for Luliconazole and 0.11-0.24% RSD for Betamethasone, and interday precision was 0.27-0.37% and 0.23-0.35% RSD, respectively. The proposed method was also evaluated by assay of a synthetic mixture containing Betamethasone and Luliconazole; the % assay was found to be 100.60% for Luliconazole and 100.52% for Betamethasone. Validation of the proposed method was carried out according to ICH Q2(R1) guidelines. The proposed method was found accurate and reproducible for routine analysis of both drugs in a synthetic mixture. Introduction Fungi are eukaryotic, heterotrophic organisms that live as parasites. They are complex organisms in comparison to bacteria: they have a nucleus with a well-defined nuclear membrane and chromosomes, and a rigid cell wall composed of chitin (the bacterial cell wall is composed of peptidoglycan). The fungal cell membrane contains ergosterol, whereas the human cell membrane contains cholesterol. Antibacterial agents are therefore not effective against fungi. Fungal infections are also called mycoses. Systemic fungal infections are a major cause of death in patients whose immune system is compromised by cancer or its chemotherapy, organ transplantation, or HIV-1 infection. The development of antifungal agents has lagged behind that of antibacterial agents. This is a predictable consequence of the cellular structure of the organisms involved. Bacteria are prokaryotic and hence offer numerous structural and metabolic targets that differ from those of the human host. Fungi, in contrast, are eukaryotes, and consequently most agents toxic to fungi are also toxic to the host. Furthermore, because fungi generally grow slowly and often in multicellular forms, they are more difficult to quantify than bacteria. This difficulty complicates experiments designed to evaluate the in vitro or in vivo properties of a potential antifungal agent. Hence, there is scope to develop analytical methods for Betamethasone and Luliconazole in combination. A literature review reveals that various analytical methods have been reported for the estimation of Betamethasone and Luliconazole in pharmaceutical formulations and bulk drug, including UV spectrophotometric methods, high-performance liquid chromatography (HPLC), NMR spectroscopy and GC-FID methods, stability-indicating RP-HPLC, HPTLC, TLC, capillary electrophoresis, and UPLC methods, individually and/or in combination with other drugs.
The literature review shows that there is no reported method available for the simultaneous estimation of both drugs in combination. Therefore, it was thought of interest to develop a simple, accurate, precise, and rapid method for the simultaneous estimation of Betamethasone and Luliconazole in combination. Reagents and material All reagents and solvents used were of AR or HPLC grade. Betamethasone working standard for analysis was obtained from Apex Healthcare Ltd. Preparation of stock solution of LUL An accurately weighed quantity of Luliconazole (10 mg) was transferred to a 100 ml volumetric flask, dissolved, and diluted up to the mark with methanol to give a stock solution with a strength of 100 µg/ml. Standard solution of Betamethasone (BTN) Preparation of stock solution of BTN An accurately weighed quantity of Betamethasone (10 mg) was transferred to a 100 ml volumetric flask, dissolved, and diluted up to the mark with methanol to give a stock solution with a strength of 100 µg/ml. Preparation of standard mixture solution (LUL + BTN) From the LUL stock solution, 1 ml was taken, and from the BTN stock solution, 2 ml was taken; both were transferred into a 10 ml volumetric flask and diluted up to the mark with methanol to give a solution with a strength of 10 µg/ml LUL and 20 µg/ml BTN. Preparation of test solution A synthetic mixture containing Luliconazole (1% w/v) and Betamethasone (2% w/v) (content 15 ml) was used. From this, 10 ml was transferred into a 100 ml volumetric flask and made up to the mark with methanol; the resulting solution contained 100 µg/ml LUL and 200 µg/ml BTN. From that flask, 1 ml was pipetted out, transferred into a 10 ml volumetric flask, and made up to volume with methanol, giving concentrations of 10 µg/ml LUL and 20 µg/ml BTN. Simultaneous equation method To determine the wavelengths of measurement, standard spectra of Betamethasone and Luliconazole were scanned between 200 and 400 nm in methanol. The method was based on measuring the absorbance of the mixture at 243.20 nm (for Betamethasone) and 225.00 nm (for Luliconazole), with the absorptivities of both drugs determined at both wavelengths. Instrument parameters: scan speed, fast; wavelength range, 200-400 nm; slit width, 1 nm. Calibration curves for Luliconazole This series consisted of five concentrations of standard LUL solution ranging from 10 to 50 µg/ml. The solutions were prepared by pipetting aliquots of the standard LUL stock solution (1, 2, 3, 4, and 5 ml) into a series of 10 ml volumetric flasks and adjusting the volume up to the mark with methanol. The zero-order spectrum of each resulting solution was recorded and converted to a ratio second-order derivative, and the absorbance was measured at 225.00 nm against a reagent blank (methanol). The calibration curve was prepared by plotting absorbance versus the respective concentration of LUL. Calibration curve for Betamethasone This series consisted of five concentrations of standard BTN solution ranging from 20 to 100 µg/ml. The solutions were prepared by pipetting aliquots of the standard BTN stock solution (2, 4, 6, 8, and 10 ml) into a series of 10 ml volumetric flasks and adjusting the volume up to the mark with methanol. The zero-order spectrum of each resulting solution was recorded and converted to a ratio second-order derivative, and the absorbance was measured at 243.20 nm against a reagent blank (methanol). The calibration curve was prepared by plotting absorbance versus the respective concentration of BTN.
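To make the calculation behind the simultaneous equation (Vierordt) method concrete, a minimal sketch is given below. The absorptivity values are placeholders, not the paper's measured data (those would come from the average absorptivities at 225.00 and 243.20 nm in Table 6); the variable names are illustrative assumptions.

# Minimal sketch of the two-wavelength simultaneous equation (Vierordt) method.
# A1, A2: mixture absorbances at 225.00 nm and 243.20 nm.
# ax1, ax2: absorptivities of LUL at 225.00 and 243.20 nm;
# ay1, ay2: absorptivities of BTN at those wavelengths (placeholder values).
def simultaneous_equation(A1, A2, ax1, ay1, ax2, ay2):
    """Solve A1 = ax1*Cx + ay1*Cy and A2 = ax2*Cx + ay2*Cy for Cx, Cy."""
    det = ax1 * ay2 - ax2 * ay1
    Cx = (A1 * ay2 - A2 * ay1) / det  # concentration of LUL (ug/ml)
    Cy = (A2 * ax1 - A1 * ax2) / det  # concentration of BTN (ug/ml)
    return Cx, Cy

# Example with hypothetical numbers; units must match those used when the
# calibration absorptivities were expressed:
Cx, Cy = simultaneous_equation(A1=0.452, A2=0.311,
                               ax1=0.035, ay1=0.004,
                               ax2=0.006, ay2=0.012)
print(f"LUL: {Cx:.2f} ug/ml, BTN: {Cy:.2f} ug/ml")

The determinant check (det must be nonzero) is exactly the condition that the two drugs have sufficiently different spectra at the two chosen wavelengths, which is why 225.00 and 243.20 nm were selected.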
Validation of proposed method Linearity and range The linearity response was determined by analyzing five independent levels of the calibration curve in the ranges of 10-50 µg/ml and 20-100 µg/ml for LUL and BTN, respectively (n=6). Intraday precision The precision of the developed method was assessed by analyzing a combined standard solution at three concentrations: 20, 30, and 40 µg/ml for LUL and 40, 60, and 80 µg/ml for BTN, in triplicate (n=3) on the same day. For the ratio second-order derivative spectra, absorbance was measured at 225.00 nm for LUL and 243.20 nm for BTN. The % RSD of the absorbance results was reported as intraday precision. Interday precision The precision of the developed method was assessed by analyzing the combined standard solution at the same three concentrations (20, 30, and 40 µg/ml for LUL and 40, 60, and 80 µg/ml for BTN) in triplicate (n=3) per day for three consecutive days. For the ratio second-order derivative spectra, absorbance was measured at 225.00 nm for LUL and 243.20 nm for BTN. The % RSD of the absorbance results was reported as interday precision. Accuracy The developed UV spectroscopic method was checked for accuracy. Accuracy was determined by calculating the recovery of LUL and BTN from the synthetic mixture by the standard addition method in the combined mixture solution. Spiking was done at three levels: 80%, 100%, and 120%. Each solution was scanned between 200 nm and 400 nm with methanol as the blank, and the spectrum of each was obtained. The amount of LUL and BTN was calculated at each level, and % recoveries were computed. LOD and LOQ The limit of detection and limit of quantitation of the developed method were assessed by analyzing ten replicates of standard solutions at concentrations of 10 µg/ml for LUL and 20 µg/ml for BTN. LOD The detection limit of an individual analytical procedure is the lowest amount of analyte in a sample which can be detected but not necessarily quantitated as an exact value. LOD was calculated using the formula DL = 3.3 σ/S, where σ is the standard deviation of the response and S is the slope of the calibration curve; the slope S may be estimated from the calibration curve of the analyte. LOQ The quantitation limit of an individual analytical procedure is the lowest amount of analyte in a sample which can be quantitatively determined with suitable precision and accuracy. LOQ was calculated using the formula QL = 10 σ/S. Robustness and ruggedness Robustness and ruggedness of the method were determined by subjecting the method to slight changes in the method conditions, individually: a change in wavelength (from 225.00 nm and 243.20 nm to 225.00 ± 0.2 nm and 243.20 ± 0.2 nm) and a change in instrument (UV-Vis spectrophotometer models 1800 and 2450). Three replicates were made for each concentration (20, 30, and 40 µg/ml of LUL and 40, 60, and 80 µg/ml of BTN) with different stock solution preparations, and % RSD was calculated. Assay by UV spectrophotometric method The synthetic mixture containing Luliconazole (1% w/v) and Betamethasone (2% w/v) (content 15 ml) was used for the assay. From this, 10 ml was transferred into a 100 ml volumetric flask and made up to the mark with methanol. The resulting solution contained 100 µg/ml LUL and 200 µg/ml BTN; from it, an aliquot was pipetted out and diluted to a concentration within the linearity range. Results and discussion The method was validated with respect to ICH Q2(R1) guidelines.
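As a quick numerical check of the ICH formulas above, they can be applied directly once the standard deviation of the response (σ) and the calibration slope (S) are known; a minimal sketch, with placeholder numbers rather than the paper's data:

# ICH Q2(R1) detection and quantitation limits from the response standard
# deviation (sigma) and the calibration-curve slope.
def lod_loq(sigma, slope):
    lod = 3.3 * sigma / slope   # DL = 3.3 sigma / S
    loq = 10.0 * sigma / slope  # QL = 10 sigma / S
    return lod, loq

# Illustrative values only (not the measured sigma and slope):
lod, loq = lod_loq(sigma=0.0005, slope=0.0197)
print(f"LOD = {lod:.4f} ug/ml, LOQ = {loq:.4f} ug/ml")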
The combination of Luliconazole and Betamethasone is present in a 1:2 ratio. The ratio second-order derivative spectra of both drugs in methanol showed satisfactory absorbance at the selected wavelengths, so the simultaneous equation method was developed. Linearity and range Different concentrations of Luliconazole (10-50 µg/ml) and Betamethasone (20-100 µg/ml) were prepared from the respective stock solutions. The absorbances were noted at 225.00 and 243.20 nm. Good linearity was observed at these wavelengths, and hence they were fixed for the simultaneous estimation. Intraday precision Mixed solutions of LUL and BTN containing 20, 30, and 40 µg/ml and 40, 60, and 80 µg/ml, respectively, were analyzed three times on the same day using the developed spectroscopic method, and %RSD was calculated. The %RSD was found to be 0.10-0.28% for LUL and 0.11-0.24% for BTN. These %RSD values, being less than 2.0, indicate that the method is precise. Interday precision Mixed solutions of LUL and BTN containing 20, 30, and 40 µg/ml and 40, 60, and 80 µg/ml, respectively, were analyzed on three different days using the developed spectroscopic method, and %RSD was calculated. The %RSD was found to be 0.27-0.37% for LUL and 0.23-0.35% for BTN. These %RSD values, being less than 2.0, indicate that the method is precise. Accuracy The developed UV spectroscopic method was checked for accuracy, determined by calculating the recovery of LUL and BTN from the synthetic mixture by the standard addition method in the combined mixture solution. Spiking was done at three levels: 80%, 100%, and 120%. Percentage recovery for LUL and BTN was found to be in the ranges 100.07-101.33% and 100.18-101.61%, respectively. LOD and LOQ The LOD for LUL and BTN was found to be 0.0837 µg/ml and 0.2498 µg/ml, respectively. The LOQ for LUL and BTN was found to be 0.2537 µg/ml and 0.7570 µg/ml, respectively. Conclusion All the parameters for the two substances met the criteria of the ICH guidelines for method validation, and the method was found suitable for routine quantitative analysis in pharmaceutical dosage forms. The results for linearity, accuracy, and precision proved to be within limits, with low limits of detection and quantitation. Ruggedness and robustness of the method were confirmed, as no significant changes were observed when the method was subjected to slight changes in conditions. Assay results obtained by the proposed method are in fair agreement. Validation of the proposed method was carried out according to ICH Q2(R1) guidelines. Figure 2: Calibration graph of Luliconazole at 225.00 nm. Table 1: List of instruments and apparatus. Table 4: Solutions for accuracy study. Table 6: Average absorptivity at 225.00 and 243.20 nm. Table 8: Intraday precision data for estimation of LUL and BTN (n=3). Table 9: Interday precision data for estimation of LUL and BTN (n=3). Table 12: LOD and LOQ data of LUL and BTN (n=10). Table 13: Robustness and ruggedness data of LUL and BTN (n=3). Table 14: Analysis data of commercial formulation (n=3). Table 15: Summary of validation parameters.
Predicting Heart Cell Types by Using Transcriptome Profiles and a Machine Learning Method The heart is an essential organ in the human body. It contains various types of cells, such as cardiomyocytes, mesothelial cells, endothelial cells, and fibroblasts. The interactions between these cells determine the vital functions of the heart. Therefore, identifying the different cell types and revealing the expression rules in these cell types are crucial. In this study, multiple machine learning methods were used to analyze heart single-cell profiles covering 11 different heart cell types. The single-cell profiles were first analyzed via the light gradient boosting machine method to evaluate the importance of gene features on the profiling dataset, and a ranked feature list was produced. This feature list was then brought into the incremental feature selection method to identify the best features and build the optimal classifiers. The results suggested that the best decision tree (DT) and random forest classification models achieved the highest weighted F1 scores of 0.957 and 0.981, respectively. The selected features, such as NPPA, LAMA2, and DLC1, and the classification rules extracted from the optimal DT classifier were shown to play a crucial role in cardiac structure and function by recent research and by enrichment analysis. In particular, some lncRNAs (LINC02019, NEAT1) were found to be quite important for the recognition of different cardiac cell types. In summary, these findings provide a solid academic foundation for the development of molecular diagnostics and biomarker discovery for cardiac diseases. Introduction The heart is a complex organ containing various cardiac cell types, and the interactions between these cell types realize the important functions of the heart. Previous pioneering studies have shown that the heart is composed of approximately 70% non-cardiomyocytes and 30% cardiomyocytes [1]. Cardiomyocytes can be divided into atrial myocytes and ventricular myocytes, while non-cardiomyocytes mainly include fibroblasts, smooth muscle cells, pericytes, and endothelial cells. These cells form four chambers with different morphologies and functions, and together they accomplish the systemic blood circulation [2]. Cardiomyocytes are responsible for contractile function and are the most important component. However, they do not function in isolation. Fibroblasts account for more than 40% of the total cells in the ventricle; their core function is to maintain cardiac extracellular matrix homeostasis and provide structural and mechanical support for the cardiomyocytes [3]. The mural cells of the vessel wall are mainly composed of smooth muscle cells and pericytes, and these two cell types are important for vascular integrity and heart function [4]. Endothelial cells form the inner layer of blood and lymphatic vessels; they maintain blood circulation by regulating the permeability and caliber of blood vessels and play an important role in controlling and maintaining the growth, contractility, and rhythm of the heart [5,6]. Mesothelial cells are transitional mesoderm-derived cells with morphological and functional characteristics similar to endothelial cells; they can secrete angiogenic factors, which are important for angiogenesis [7]. Heart adipose tissue not only supplies energy locally but also has heart repair functions, such as new blood vessel formation and immune regulation [8,9]. Immune cells and neurons are also very important for functional homeostasis [10,11].
Identifying cell components and cell types is important for understanding cell functions, especially in complex organs where multiple cell types work together. There are two types of traditional methods for cell type annotation: (1) cell marker-based methods, such as CellAssign [12], which need high-quality cell-type-specific expressed genes, although most cell types do not have very specific biomarkers; and (2) reference dataset-based methods, such as SingleR [13], which compare the scRNA data with reference scRNA data of known cell types and make predictions. Previous studies have reported some markers for cardiac cells, such as in atria (NPPA and SLN), ventricles (MYL2 and MYL3), endothelial cells (FABP4 and AQP7), smooth muscle cells (ACTA2), fibroblasts (COL1A1), pericytes (PDGFRB), immune cells (PTPRC), neurons (NRXN1), and adipocytes (GPAM and FASN) [14-17]. Although these genes are very important for each cell type, the maintenance of cell function depends on the interaction among different genes. Therefore, revealing the specific expression patterns of different cell types, especially the expression features that distinguish them from other cell types, is very important for an enhanced understanding of fate decisions and cell functions. On the basis of the existing single-cell profiling dataset from the Human Cell Atlas study of adult human heart cells [17], machine learning algorithms were used in the present study to extract gene expression characteristics and biomarkers that characterize different heart cell types. Machine learning algorithms can extract hidden biomarkers that cannot be found by traditional methods through in-depth analysis of large amounts of cell gene expression data. Using the light gradient boosting machine (lightGBM) algorithm [18], a ranked feature list was generated on the basis of the importance of these features. Then, the incremental feature selection (IFS) method [19] with decision tree (DT) [20] and random forest (RF) [21] algorithms was applied to determine the best number of features and build the optimal classifiers. As a result, the most relevant gene features and decision rules were identified; through these rules, 11 cell types could be accurately classified. Meanwhile, the results of Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analysis suggested that the selected genes may be significant for the phenotype of cells or the function of the heart. Further research on these genes may help clarify the detailed mechanisms of heart development. In short, this research identified a group of potential cardiac cell biomarkers and a precise classifier composed of many decision rules, thus providing insights for further research on cardiac development and function. The method we propose to identify heart cell types is a type of reference dataset-based method. However, unlike traditional reference dataset-based methods, which use all the genes, we use only discriminative genes selected with feature selection methods.
The method has the merits of both types of traditional methods: (1) we consider only the discriminative genes, so it is faster and more explainable than reference dataset-based methods; (2) we quantitatively consider combinations of expression levels, i.e., expression rules, rather than only cell-specific markers, so it is more accurate than cell marker-based methods. The following text is organized as follows: (1) Section 2 lists the dataset analyzed in this study and the algorithms used; (2) the results are presented in Section 3; (3) extensive discussion of the results is provided in Section 4; (4) Section 5 summarizes the study.
Study Design
In this study, the interpretation of the model (i.e., classifier) consisted of two parts: interpretation of (1) single-gene expression signatures and (2) combined gene expression rules. Single-gene interpretation explains the effect of single optimal genes on the classification, while combined-gene interpretation focuses on exploring how multiple genes contribute to the classification together. The whole framework of this study is shown in Figure 1.
Figure 1. Flow chart of the study design. First, the lightGBM method was applied to rank the features of single-cell gene expression profiles into a ranked list. Second, the IFS method with machine learning algorithms was used to detect the best number of features and build the optimal classifiers and decision rules. Finally, functional enrichment analysis was performed on the optimal gene feature set.
Data Collection
The raw dataset was downloaded from the publicly available Human Cell Atlas Data Coordination Platform, accession number ERP123138 (https://www.ebi.ac.uk/ena/browser/view/ERP123138, accessed on 29 January 2021) [17]. The processed 10X Genomics dataset included the expression profiles of 33,538 genes in 451,513 cells from 11 cell types. The 11 major cell types and the corresponding sample sizes are presented in Table 1.
Feature Ranking by LightGBM
LightGBM is a well-known boosting learning machine [18] that combines many weak classifiers into a single strong one. It can be regarded as an improved version of the gradient boosting decision tree (GBDT) [22], which recurrently fits a new DT by using the negative gradient of the loss function of the current model as the approximate value of the residual.
The main differences between lightGBM and GBDT lie in the two new strategies adopted by lightGBM: gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB), which together greatly improve efficiency while preserving classification accuracy. In GBDT, the gradient of a sample when calculating the residual error of a DT reflects the contribution of the sample to subsequent classification. GOSS therefore downsamples the training data by randomly screening out most of the samples with small gradients while keeping a small number of them to maintain the distribution of the data. EFB bundles mutually exclusive features together to reduce the dimension of the data. Mutually exclusive features are those that rarely take nonzero values simultaneously, so no or very little information is lost by bundling them into a new feature. EFB is realized by solving a graph coloring problem with a greedy algorithm having a constant approximation ratio. As described in lightGBM's documentation (https://lightgbm.readthedocs.io/en/latest/, accessed on 10 May 2020), the advantages of lightGBM include faster training, low memory usage, higher accuracy, support for parallel learning, and the ability to handle large-scale data. In addition to classification, lightGBM sorts features according to their importance, which is quantified by the number of times a feature is selected to build DTs: the more times a feature is used, the higher its ranking. In this research, the features were sorted using lightGBM for further analysis. The lightGBM program was implemented through a Python module. IFS Method Once the feature list F was generated by sorting the gene features with the lightGBM method, the number of significant features still could not be determined. Here, the optimal number of features was discovered using the IFS method [19]. IFS first generates a series of feature subsets from the feature list F on the basis of a specific step size. For example, when the step size is 5, the first feature subset f1 consists of the top five features in F, the second feature subset f2 of the top 10 features in F, and so on. Next, a classifier is trained on a training set whose samples are represented by the features in each feature subset. The performance of each classifier is evaluated using 10-fold cross-validation [23] with the synthetic minority oversampling technique (SMOTE) [24]; the classifier with the best performance is considered the optimal classifier, and its feature subset is regarded as the optimal subset. Classifier Building with DT and RF In this study, DT and RF were adopted to build classifiers. Their descriptions are as follows. RF [21,25-28] is a classification algorithm that integrates multiple DT classifiers. Through a bootstrap resampling technique, a new training set is composed, and each DT is constructed by randomly selecting samples and features from the original dataset. The prediction labels of RF are obtained by combining the prediction results of the multiple DT classifiers using majority voting. RF has few parameters to tune; users need only choose a proper number of DTs for it to yield good performance. Because RF contains several DTs, it has excellent noise tolerance and can avoid the overfitting problem. Importantly, although DT is a relatively weak classifier, RF is much stronger; it is thus widely used in omics research. Through RF, investigators can build an efficient classifier.
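As an illustration of the ranking step, a minimal sketch is given below. It assumes an expression matrix X (cells x genes), integer-encoded cell-type labels y, and a list gene_names are already loaded; these names and the parameter settings are assumptions for illustration, not the authors' exact script. The "split" importance matches the measure described in the text (the number of times a gene is used to build trees).

import lightgbm as lgb
import numpy as np

# X: (n_cells, n_genes) expression matrix; y: integer-encoded cell-type labels.
model = lgb.LGBMClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# "split" importance counts how many times each gene is used in the trees.
importance = model.booster_.feature_importance(importance_type="split")
ranked_idx = np.argsort(importance)[::-1]
ranked_genes = [gene_names[i] for i in ranked_idx]
print(ranked_genes[:20])  # top 20 genes in the ranked feature list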
However, RF is a black-box classifier: its classification principle is not easy to understand, so the essential differences between heart cells of different types cannot be extracted from it. In view of this, DT [20,29,30] was also employed in this study. DT is widely used in biomedical research because the decision rules it generates can effectively elucidate how decisions are made for classification or regression tasks. In other words, a DT is a white-box classifier that splits the data many times on the basis of certain thresholds in the features, using the IF-THEN format. These IF-THEN statements constitute decision rules, which can clearly exhibit special patterns for each class; in this study, these patterns indicated specific characteristics of heart cell types. Although DT provides relatively low performance, it can give novel insights for studying heart cells. In the output file of DT, the value of "passed counts" indicates the number of samples satisfying the condition of a rule. The above two algorithms were performed with the scikit-learn package with default parameters in Python [31]. SMOTE The different numbers of samples from different cell types lead to the problem of data imbalance. The synthetic minority oversampling technique (SMOTE) was applied to minimize the effect of sample imbalance on the construction of classifiers [24]. It generates synthetic samples for minority cell categories on the basis of the principle of k-nearest neighbors [32]. For each cell type except the one with the highest number of samples, new synthetic samples were added via SMOTE until the number of samples of each type was almost the same. The SMOTE program was accessed from https://github.com/scikit-learn-contrib/imbalanced-learn (accessed on 24 March 2021), and the parameters were set to default. Performance Measurement As the number of samples in each category of the dataset can be strongly unbalanced, the weighted F1 score [33-35] is an appropriate measurement of a classifier's performance. First, the F1 score of each category was calculated using the following formula: F1 = 2 x precision x recall / (precision + recall). Next, the F1 scores of the categories were averaged with weights equal to the proportion of each category among the true labels. This measurement is called the weighted F1 score. Functional Enrichment For functional enrichment analysis, the clusterProfiler package was applied for the annotation of Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, with p < 0.05 taken as the screening criterion [36]. The GO terms are divided into three subgroups, namely, biological process (BP), cellular component (CC), and molecular function (MF). Results of LightGBM Method on the Dataset In this study, single-cell expression profiles of 451,513 cell samples across 11 heart cell types were obtained, and each sample was represented by the expression of 33,537 genes. The lightGBM method was first applied to rank the genes into a feature list on the basis of feature importance, in order to filter out the important features from these genes; the results are provided in Table S1. The top 20 genes in the list are shown in Table 2. Results of IFS Method with RF A feature ranking list was obtained by the lightGBM method, but the optimal number of features was still undetermined. The IFS method was applied to optimize the selected gene features, and RF was adopted first to execute the IFS method.
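A compact sketch of the IFS loop just described is shown below, using scikit-learn and imbalanced-learn. The ranked gene indices (ranked_idx) come from the previous step; the step size of 5, the 1000-feature cap, SMOTE inside the cross-validation, and the weighted F1 metric follow the text, but this is an illustrative reimplementation, not the authors' code.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# ranked_idx: gene indices sorted by lightGBM importance; X, y as before.
results = []
for k in range(5, 1001, 5):          # feature subsets: top 5, 10, ..., 1000
    X_sub = X[:, ranked_idx[:k]]
    # The pipeline applies SMOTE only to the training folds, so the
    # cross-validated estimate is not inflated by synthetic test samples.
    clf = Pipeline([("smote", SMOTE(random_state=0)),
                    ("rf", RandomForestClassifier(random_state=0))])
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    y_pred = cross_val_predict(clf, X_sub, y, cv=cv)
    results.append((k, f1_score(y, y_pred, average="weighted")))

best_k, best_f1 = max(results, key=lambda r: r[1])
print(f"optimal number of features: {best_k} (weighted F1 = {best_f1:.3f})")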
The purpose was to construct an efficient classifier for classifying heart cells. Based on the feature list provided in Table S1, the IFS method produced a series of feature subsets with a step size of 5. An RF classifier was then built on each feature subset to predict the label of each sample. To save time, only the top 1000 gene features in the list were considered. The evaluation results of all RF classifiers are shown in Table S2. The IFS curve, with the number of features on the X-axis and the performance of each classifier, measured by weighted F1 score, on the Y-axis, is drawn in Figure 2. RF obtained the optimal weighted F1 score of 0.981 with the top 470 features. Accordingly, the optimal RF classifier can be built using these 470 features. Table 3 provides the detailed evaluation metrics of the optimal RF classifier, which also performed well on the other metrics. In addition, the F1 score for each cell type under the optimal RF classifier is presented in Figure 3. This optimal RF classifier showed excellent performance in the prediction of each category, indicating its effectiveness.
Figure 3. Performance of two optimal classifiers on each cell type.
As mentioned above, the optimal RF classifier provided quite good performance and can serve as an efficient tool to classify heart cells. To further examine its robustness, the following test was conducted. First, some noise was added to the dataset, in which cells were represented by the features used in the optimal RF classifier. In detail, 10% of cells were randomly selected, and each feature of these cells was randomly increased or decreased by a small amount. On this dataset, the optimal RF classifier was evaluated by 10-fold cross-validation. The above procedure was executed ten times, producing ten weighted F1 scores. A box plot was drawn (Figure 4) to show these values. The performance on the datasets with noise was almost the same as that on the original dataset, which proved the robustness of the optimal RF classifier.
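A minimal sketch of this perturbation test follows; the box-plot values in Figure 4 correspond to the ten scores it produces. X_best (cells x the 470 selected genes), y, and the fitted classifier best_rf are assumed to exist, and the 5% noise magnitude is an assumption, since the text only says "a small number".

import numpy as np
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
scores = []
for rep in range(10):
    X_noisy = X_best.copy()  # cells x 470 selected genes
    picked = rng.choice(len(X_noisy), size=int(0.1 * len(X_noisy)),
                        replace=False)
    # Perturb each feature of the picked cells up or down by up to 5%
    # (assumed magnitude; the paper does not state the exact noise level).
    noise = rng.uniform(0.95, 1.05, size=X_noisy[picked].shape)
    X_noisy[picked] = X_noisy[picked] * noise
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=rep)
    y_pred = cross_val_predict(best_rf, X_noisy, y, cv=cv)
    scores.append(f1_score(y, y_pred, average="weighted"))
print(scores)  # ten weighted F1 values for the box plot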
Figure 4. Box plot showing the performance of the optimal RF classifier on datasets with noise. The performance is almost the same as that on the original dataset, proving the robustness of the optimal RF classifier.
Results of IFS Method with DT
Although the optimal RF classifier exhibited quite good performance, it is a black-box classifier that fails to explain its decisions. To extract more insights for the study of heart cells, DT, a white-box algorithm, was employed, and the same procedures conducted for RF were applied to it. The performance of DT classifiers on different feature subsets is listed in Table S2. An IFS curve was also plotted, as shown in Figure 2. The highest weighted F1 score was 0.957 when the top 380 features were used; thus, an optimal DT classifier can be built with these features. Other metrics of this classifier are provided in Table 3, and its performance on all cell types is shown in Figure 3. Evidently, the performance of the optimal DT classifier was lower than that of the optimal RF classifier, which conforms to the general fact that RF is more powerful than DT. However, DT has a merit that RF lacks: its classification procedures are completely open, making it possible to understand the classification principle and thereby giving new insights into the differences between heart cell types.
Classification Rules Generated by the Optimal DT Classifier
The optimal DT classifier was built based on the top 380 features. Accordingly, these features were used to represent all heart cells, and this representation was learnt by DT. A large tree was obtained, from which 11,139 interpretable rules were constructed. The detailed rules are listed in Table S3, and the number of rules for each category is shown in Figure 5. Endothelial cells obtained the largest number of rules, 2588, followed by atrial cardiomyocytes and pericytes. A detailed description of these rules can be found in Section 4.2.
Functional Enrichment Analysis with the Optimal Gene Set
The best gene set, comprising the top 470 features, was obtained using the IFS method. These genes were analyzed by GO and KEGG pathway functional enrichment, and the results are presented in Table S4 and Figure 6. Many genes were enriched in the KEGG pathways of hypertrophic cardiomyopathy and dilated cardiomyopathy, and some genes were enriched in the BP of heart process, indicating that these genes may be associated with the development of heart disease and further demonstrating the effectiveness of the method.
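For readers who want to reproduce the rule extraction described above, a minimal sketch with scikit-learn is given below. It assumes the training data restricted to the top 380 genes (X380), labels y, and the gene names (gene380); export_text prints the root-to-leaf IF-THEN paths, and counting samples per leaf gives the "passed counts" defined earlier. This is an illustration, not the authors' exact pipeline.

import collections
from sklearn.tree import DecisionTreeClassifier, export_text

# X380: cells x top-380 genes; y: cell-type labels; gene380: the gene names.
dt = DecisionTreeClassifier(random_state=0)
dt.fit(X380, y)

# Each root-to-leaf path of the fitted tree is one IF-THEN rule.
rules_text = export_text(dt, feature_names=list(gene380))
print(rules_text[:2000])  # preview; the full tree yields thousands of rules

# "passed counts": number of training samples reaching each leaf.
leaf_ids = dt.apply(X380)
passed_counts = collections.Counter(leaf_ids)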
Discussion
In this study, several machine learning algorithms were applied to classify different cell types from single-cell and single-nucleus transcriptome profiles of six adult heart regions. First, the lightGBM method was performed to obtain a ranked feature list according to feature importance. Second, an RF algorithm was applied to construct a precise classifier with a high classification accuracy of 0.981. However, as a black-box classifier, it cannot reveal the different expression patterns of different heart cell types. Therefore, the DT algorithm was further used to obtain a group of decision rules. Using the top 380 features, different cell types could be distinguished with a high classification accuracy of 0.957. Furthermore, novel biomarkers or expression patterns may be identified by analyzing the expression patterns of the selected genes within these decision rules. Candidate Gene Expression Features Discriminating Different Heart Cells A total of 470 selected features (genes) were used to classify the heart cells into 11 cell types. Among them, the top-ranked genes are usually more decisive for distinguishing different cell types. Some relevant experimental evidence supporting the results is presented here. NPPA (ENSG00000175206) encodes natriuretic peptide A (ANP), which is highly expressed in the heart muscle and related to the control of extracellular fluid volume and electrolyte homeostasis. Studies have found that NPPA is expressed primarily in the heart, with higher expression in atria than in ventricles. NPPA can regulate vasodilation, natriuresis, diuresis, and aldosterone synthesis, and thereby influence blood pressure. Moreover, it is involved in inhibiting cardiac hypertrophy, cardiac fibrosis, and cardiac remodeling by inducing cardiomyocyte apoptosis and attenuating the growth of cardiomyocytes and fibroblasts [37]. In adipocytes, ANP can promote white adipocyte browning to increase energy expenditure via a PKG-p38 mitogen-activated protein kinase mediated pathway, making the heart a central regulator of adipose tissue biology [38]. These findings are consistent with the results of the present study, which showed that NPPA has a crucial role in different functional heart cells. Similarly, the gene LAMA2 (ENSG00000196569), which encodes a laminin subunit, plays an important role in normal heart function. Studies have demonstrated that homozygous mutation of LAMA2 can cause unstable myotube formation in cardiac muscle, and abnormal LAMA2 expression may lead to heart diseases, such as cardiomyopathy, heart failure, and dilated cardiomyopathy [39,40]. A previous adult human heart study also showed that LAMA2 has different expression levels in fibroblasts, cardiac adipocytes, and other cell types [17]. Other key features in our results are also important for cardiac function. For example, the gene DLC1 (ENSG00000164741) is highly expressed in endothelial cells and a small number of ventricular cardiomyocytes [17,41]; RYR2 (ENSG00000198626) encodes a calcium channel component associated with cardiomyocyte and smooth muscle cell contraction and with thermogenesis in beige adipocytes [42-44]; TTN (ENSG00000155657) encodes titin, a large, abundant protein of striated muscle, mainly found in human cardiac and skeletal muscle. Mutations in these genes may cause a variety of cardiac diseases [45-47].
More importantly, we found that some long non-coding RNAs (lncRNAs) are important for differentiating cardiac cell types. For example, LINC02019 (ENSG00000273356) is the top gene in our feature ranking. The product of this gene is a long intergenic non-protein-coding RNA (lincRNA). We used the starBase tool to study its related RNA-binding proteins [48]; the most related proteins include EIF4A3, ELAVL1, and LIN28A. Studies have shown that EIF4A3 is associated with acute myocardial infarction, and knockout of EIF4A3 can lead to failure of heart looping [49,50]. ELAVL1 plays an important role in inhibiting hyperglycemia-induced cardiomyocyte pyroptosis and in regulating ferroptosis in myocardial injury [51,52]. LIN28A is also implicated in various cardiac injuries and diseases [53,54]. NEAT1 (ENSG00000245532) produces a lncRNA that may act as a transcriptional regulator for numerous genes. NEAT1 was markedly downregulated in cardiomyocytes following ischemia-reperfusion-induced injury. Moreover, by interacting with microRNA-125a-5p, NEAT1 can modulate the concentration of BCL2L12, which in turn regulates cardiomyocyte apoptosis [55]. Other studies also found that NEAT1 may influence myocardial injury repair through the MAPK and TLR2/NF-κB signaling pathways [56,57]. In summary, these genes are all related to cardiac structure and function, and they show various expression levels in the ventricle, atrium, and other cell types. Therefore, these genes can be used as decisive features for distinguishing different cardiac cells, and some lncRNAs may have quite specific roles in the maintenance of normal cardiac function. Candidate Gene Expression Rules Discriminating Different Heart Cells Through the DT method, a classifier consisting of 11,139 decision rules involving 380 selected features was built. Each cell type was assigned some rules, as shown in Figure 5. According to the value of "passed counts", the top three rules for each cell type were extracted and listed in Table 4 (footnote a: "passed counts" indicates the number of samples satisfying the condition of the rule). The genes involved in these 33 rules were analyzed in combination with the existing literature to support the reliability of the results. Studying the other rules may also help find new characteristics of cardiac cell subtypes, which may provide new insights into cardiac development and function. Cardiomyocytes Cardiomyocytes generate contractile force; thus, they normally show high expression of sarcomere proteins and genes involved in calcium-mediated processes [17]. The present study showed that atrial and ventricular cardiomyocytes highly expressed TTN, which was the most important factor for distinguishing cardiomyocytes from non-cardiomyocytes. As mentioned above, atria show higher expression of NPPA than ventricles [58], and the decision rules in the present study showed the same expression pattern. In addition, atrial cardiomyocytes required higher expression of KCNJ3 (ENSG00000162989) and MYL7 (ENSG00000106631) than ventricular cardiomyocytes. KCNJ3 encodes an integral membrane protein and inward-rectifier potassium channel. Studies have found that KCNJ3 plays an important role in governing cardiac electrical activity, and atrial cardiomyocytes have higher KCNJ3 levels than ventricular cardiomyocytes [59]. Other studies also demonstrated that the protein encoded by MYL7, a component of myosin, is highly expressed in atrial cardiomyocytes [60].
Fibroblasts and Vascular, Stromal, and Mesothelial Cells In the decision rules, smooth muscle cells showed higher expression of MYH11 (ENSG00000133392) than other cell types. MYH11 encodes myosin heavy chain 11; a high MYH11 level is a marker of the mature contractile phenotype of smooth muscle cells, while downregulation or mutation of MYH11 is associated with vascular disease [61]. ACTA2 (ENSG00000107796) is also a marker of smooth muscle cells; known as smooth muscle α-actin, it is usually highly expressed in smooth muscle cells, pericytes, and myofibroblasts [62]. Studies found that mutation of ACTA2 can cause coronary artery disease and thoracic aortic disease, and an experimental study on smooth muscle cells and myofibroblasts harboring ACTA2 mutations indicated that occlusive disease is associated with increased proliferation of smooth muscle cells [63,64]. ABCC9 (ENSG00000069431) encodes a member of the ATP-binding cassette transporter superfamily. In this research, the decision rules of pericytes required high ABCC9 expression. This finding is in accordance with published studies showing that ABCC9 is highly expressed in pericytes and can serve as a biomarker for them [17,65,66]. VWF (ENSG00000110799) was required to be highly expressed in endothelial cells in the decision rules. As a protein-coding gene, VWF encodes a glycoprotein involved in hemostasis, and it is reported to be highly expressed in human endothelial cells [67], confirming the results of the present study. Unlike endothelial cells, smooth muscle cells, pericytes, and mesothelial cells, fibroblasts are not involved in the construction of basement membranes. As is known, collagen IV, laminin-entactin/nidogen complexes, and proteoglycans are the major molecular constituents of basement membranes [68]. This was reflected in the decision rules, because LAMA2 (ENSG00000196569) and CD36 (ENSG00000135218) showed relatively low expression in fibroblasts: CD36 encodes a receptor that binds collagen, LAMA2 encodes a subunit of laminin, and both were required to be highly expressed in the other four cell types. Although existing studies cannot explain why mesothelial cells were required to express PLA2G2A at relatively high levels in the rules, this may be a coincidence caused by the limited number of mesothelial cells, or it may reflect an as yet unknown effect of PLA2G2A on mesothelial cells. Similarly, other cell types have some potentially meaningful genes that have not been studied in depth; the results of this study suggest that they may be important. Adipocytes and Immune and Neuronal Cells Adipocytes, immune cells, and neuronal cells showed low expression of sarcomere proteins and basement membrane components. NRXN1 (ENSG00000179915) encodes neurexin 1, a cell surface receptor involved in the formation of synaptic contacts, and efficient neurotransmission depends on NRXN1 [69]. The same expression pattern can be observed in the decision rules of neuronal cells. Another highly expressed gene in the rules was neuronal growth regulator 1 (NEGR1, ENSG00000172260). NEGR1 mediates neural cell communication and synapse formation, and its downregulation is related to obesity, learning difficulties, intellectual disability, and psychiatric disorders [70]. Thus, appropriate NEGR1 expression is necessary to maintain neuronal cell function. Hematopoietic cells are commonly classified into myeloid and lymphoid cells.
The expression of CD163 (ENSG00000177575) is the main criterion used to distinguish between myeloid and lymphoid cells in the rules. CD163 encodes a member of the scavenger receptor cysteine-rich superfamily and is exclusively expressed in monocytes and macrophages [71]. Thus, CD163 can serve as a marker for myeloid cells, and this finding also confirmed the results of the present study. Adipocytes showed higher PLIN1 (ENSG00000166819) and ACACB (ENSG00000076555) levels than other cells. PLIN1 and ACACB participate in the inhibition of lipolysis and the regulation of fatty acid oxidation, respectively. According to recent publications, these two genes are essential for the maintenance of adipocyte functions [72,73]. In conclusion, the genes that are highly expressed in the decision rules are often markers of the corresponding cell types or essential for maintaining cell function, which reflects the reliability of the research results. Some genes that are meaningful for cell classification but have not been investigated in detail before may have important implications for the function of the corresponding cell type. Functional Analysis of the Optimal Gene Set We also performed GO and KEGG pathway enrichment on the 470 decisive features identified by the IFS method with RF, and here we present the enrichment results relevant to some cell types. The enrichment terms related to adipocytes include GO:0019216 (regulation of lipid metabolic process), GO:0036041 (long-chain fatty acid binding), and hsa04923 (regulation of lipolysis in adipocytes), which reflect the energy supply and regulatory functions of adipocytes. Similarly, there are some terms related to lymphoid cells and neurons in the enrichment results, such as GO:0140058 (neuron projection arborization) and GO:0035325 (Toll-like receptor binding). The primary function of the heart is to pump blood effectively to the body tissues through the contraction-relaxation cycle of myocytes, and the heart is mainly composed of cardiomyocytes and fibroblasts. In our GO and KEGG enrichment results, many terms are related to these two cell types. Fibroblast-related terms mainly concern fibers, such as GO:0043292 (contractile fiber), GO:0030016 (myofibril), and GO:0030017 (sarcomere). A sarcomere is the segment of a myofibril between two adjacent Z discs, and it is the contractile unit of myofibrils. Studies have found that various genetic mutations in the cardiac sarcomere can lead to defects in sarcomere production and, further, to ventricular dilatation and cardiac dysfunction [74]. The enrichment terms of cardiomyocytes mainly include GO:0048738 (cardiac muscle tissue development), GO:0003779 (actin binding), and GO:0008307 (structural constituent of muscle). Actin plays an essential role in the assembly of cardiac myofibrils, and it is strongly associated with muscle contraction. In the process of myofibrillogenesis, actin is assembled into a highly ordered mature state, and abnormal expression of actin dynamics-related genes usually leads to myofibril abnormalities and heart defects [75]. Meanwhile, the KEGG pathway annotation showed that the features are related to muscle contraction and various cardiomyopathies (KEGG: hsa05410, hypertrophic cardiomyopathy; KEGG: hsa04260, cardiac muscle contraction). The main cause of cardiomyopathy is muscle cell dysfunction caused by genetic variation, especially dysfunction of genes related to the cytoskeleton-sarcomere connection [76,77].
This further illustrates the important role of these genes in cardiac muscle cell function. In this part, we gave an extended description of the decisive features. These terms have been shown to be associated with multiple cardiac cell types, further demonstrating the importance of the decisive features for the function of cardiac cell populations and supporting their reliability and usefulness in constructing classifiers.

Limitations of This Study

There are several limitations of this study: (1) The data we analyzed came from the Human Cell Atlas, and it is not known how representative these data are; when more data from various heart locations become available, the model should be tested on them. (2) The tissue was only from adult human hearts. Can the model be applied to children or infants? Do they have different cell types? The adult heart data need to be compared with child and infant data. (3) If the cells were instead measured with Smart-seq, would other lowly expressed biomarkers and rare cell types be discovered? There may be more cell markers and cell types than the ones identified here.

Conclusions

Feature selection methods and machine learning algorithms were applied to build a workflow for determining the essential gene features and specific gene expression rules that classify different heart cell types. The results of this study indicate that the developed classification models achieve excellent classification performance. The selected key genes and decision rules were shown to be associated with cardiac structure and function in recently published literature and in the enrichment analysis. In the future, we will collect data from different heart locations, different age groups, and different sequencing platforms to obtain a robust heart cell type annotation model. With knowledge of gene expression patterns at single-cell resolution, we can decipher changes in cell composition for diagnosis and target the dysfunctional cells, without harming other cells, for precision treatment.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/life12020228/s1, Table S1: Ranking results of features, as obtained using the LightGBM method; Table S2: Evaluation of the performance of each classifier with different numbers of features by using the IFS method with 10-fold cross-validation; Table S3: Decision rules generated by the DT classifier when using the top 380 features; Table S4

Data Availability Statement: The data presented in this study are openly available in the Human Cell Atlas Data Coordination Platform at https://www.ebi.ac.uk/ena/browser/view/ERP123138 (accessed on 29 January 2021), reference number [15].

Conflicts of Interest: The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
8,803.2
2022-01-31T00:00:00.000
[ "Computer Science", "Medicine" ]
A Novel Distributed Quantum-Behaved Particle Swarm Optimization

1 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, Xidian University, Xi'an, Shaanxi Province 710071, China
2 School of Computer and Software, Nanjing University of Information Science and Technology (NUIST), Nanjing 210044, China

Introduction

With the development of information science, more and more data is stored, such as web content and bioinformatics data. For this reason, many basic problems have become more and more complex, which causes great trouble for current intelligent algorithms. As one of the most important issues in artificial intelligence, optimization in real-world applications is also becoming harder and harder to solve. Over the past 30 years, evolutionary algorithms have become one of the most effective intelligent optimization methods. To face this new challenge, distributed evolutionary algorithms (dEAs) have blossomed rapidly. The paper [1] provides a comprehensive survey of distributed EAs and summarizes several models: master-slave, island, cellular, hierarchical, pool, coevolution, and multiagent models are listed and introduced, and the different models are analyzed with respect to parallelism level, communication cost, scalability, and fault tolerance. Some hotspots of dEAs, such as cloud- and MapReduce-based implementations and GPU- and CUDA-based implementations, are also listed, but no results of dEAs on distributed computing devices are reported. The cloud can be applied in many areas, and [2-8] have realized various specific cloud applications. The paper [9] reviews parallel and distributed genetic algorithms on graphics processing units (GPUs); some works along this line are reported in [10-12]. Cloud computing with MapReduce is a new and effective technology for dealing with big data, proposed by Google in 2004 [13]. In response to the requirements of parallelization and distribution, this platform makes it very convenient to turn an existing algorithm into a parallel one: programmers only need to write the map function and the reduce function, and the other details are provided by the model itself. Many practical problems have been solved with the MapReduce model and clusters of servers, such as path problems in large-scale networks [14], seismic signal analysis [15], image segmentation [16], and location recommendation [17]. However, the study of MapReduce-based EAs is still at an initial stage. Although some genetic algorithms [18-23] and a particle swarm optimization realized with MapReduce [24] have been proposed, many kinds of EAs have not yet been implemented with a distributed model, and the parallel potential of these algorithms remains unreleased. Based on these considerations, in our previous work [25], MapReduce was combined with coevolutionary particle swarm optimization, showing that MapReduce-based CPSO obtains much better performance than CPSO. In another work [26], quantum-behaved particle swarm optimization was successfully transplanted onto MapReduce. The present paper builds on and extends that work: the background is introduced in more detail, and a practical application is added.
Quantum mechanics and trajectory analysis have recently gained extensive attention from scholars and have sparked developments in many areas, such as image segmentation [27], neural networks [28], and population-based algorithms [29,30]. In [31], Zhang presents a systematic review of quantum-inspired evolutionary algorithms. Quantum-behaved particle swarm optimization is a variant of PSO proposed by Sun et al. in 2004 [32]. Inspired by the movement of a particle in quantum space, a new reproduction operator for solutions is proposed in this algorithm. Because a particle can arrive at any location in quantum space with a certain probability, a new solution at any location in the feasible space can likewise be generated with a certain probability in QPSO. This mechanism helps particles avoid falling into local optima; more detailed analysis is reported in [33]. Unfortunately, when the algorithm faces large-scale and complex problems, the increasing computational cost becomes its bottleneck, and without enough computing resources the premature-convergence phenomenon cannot be avoided, which urges parallelization of the original algorithm.

To follow this trend and enhance the capabilities of a standard QPSO, the MapReduce quantum-behaved particle swarm optimization (MRQPSO) is developed. MRQPSO transplants QPSO onto the MapReduce model and makes QPSO parallel and distributed by partitioning the search space. Comparisons of MRQPSO and standard QPSO show that the proposed MRQPSO decreases the time needed for the same number of function evaluations and, on some test problems, improves solution quality and is more robust than QPSO.

The rest of this paper is organized as follows. Section 2 introduces PSO and QPSO. Section 3 gives a brief presentation of the MapReduce model. Section 4 describes the details of implementing QPSO on MapReduce. In Section 5, we show and analyze the experimental results, including the comparison with QPSO. Finally, Section 6 concludes the paper.

PSO and QPSO

2.1. The Particle Swarm Optimization Algorithm. Inspired by bird and fish flocks, Kennedy and Eberhart proposed the PSO algorithm in 1995 [34]. It is a population-based intelligent search algorithm. In order to find food as quickly as possible, the birds in a flock first trace the companions that are nearest to the food and then determine the precise area of the food. An individual in PSO searches for the optimum like a bird in a flock. Each particle has a velocity and a position, and these two quantities are updated according to the particle's personal best value and the global best value of the swarm. Denoting by $v_{ij}(t)$ and $x_{ij}(t)$ the velocity and position of particle $i$ in dimension $j$ at iteration $t$, the updating equations can be written as

$$v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 \big(p_{ij} - x_{ij}(t)\big) + c_2 r_2 \big(g_j - x_{ij}(t)\big),$$
$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1),$$

where $p_{ij}$ and $g_j$ are the personal best and global best positions, respectively, and $r_1$ and $r_2$ are random numbers uniformly distributed in [0, 1]. $w$, $c_1$, and $c_2$ are the three parameters of the algorithm: $w$ is the inertia weight proposed by Shi and Eberhart in 1998 [35] to control the balance between local and global search, and $c_1$ and $c_2$ are the acceleration coefficients, or learning factors. Usually, $c_1 = c_2 = 2$.
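As a concrete illustration of the update equations above, the following is a minimal, self-contained Python sketch of a single PSO iteration. The parameter values (a typical inertia weight w = 0.7, and c1 = c2 = 2 as stated in the text) and the sphere objective are illustrative choices, not the paper's experimental setup.

```python
import random

def sphere(x):
    # Placeholder objective: f(x) = sum of squares, minimum 0 at the origin.
    return sum(v * v for v in x)

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """Apply one PSO velocity/position update to every particle in place."""
    for x, v, p in zip(positions, velocities, pbest):
        for j in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[j] = w * v[j] + c1 * r1 * (p[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
            x[j] += v[j]

# Tiny demo: 5 particles in 2 dimensions.
pos = [[random.uniform(-100, 100) for _ in range(2)] for _ in range(5)]
vel = [[0.0, 0.0] for _ in range(5)]
pbest = [list(x) for x in pos]
gbest = list(min(pbest, key=sphere))
pso_step(pos, vel, pbest, gbest)
print("best objective so far:", sphere(gbest))
```

In a full run, the personal and global bests would be re-evaluated after every step; the sketch shows only the update rule itself.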
From the above equations, it can be seen that PSO uses few parameters, which makes it easy to control and use. Meanwhile, it has good convergence performance and fast convergence speed. These advantages have earned the PSO algorithm a lot of research attention. However, PSO is not a global optimization algorithm [36]: the limited velocity constrains the search to a limited area, so PSO cannot always find the global optimum. In other words, premature convergence is the most serious drawback of PSO.

2.2. The Quantum-Behaved Particle Swarm Optimization Algorithm. To overcome this shortcoming of the original PSO algorithm, Sun et al. proposed quantum-behaved particle swarm optimization (QPSO) in 2004 [32]. This algorithm has superior performance compared to PSO. QPSO transfers the search from classical space to quantum space; particles can appear at any position, which enables a full search of the solution space.

According to the uncertainty principle, the velocity and position of a particle cannot be determined simultaneously. In quantum space, a probability function for the position at which a particle appears can be obtained from the Schrödinger equation, and the actual position of a particle can be sampled by the Monte Carlo method. Based on these ideas, in QPSO a local attractor is constructed for each particle from the particle best solution and the global best solution as

$$p_{ij}(t) = \varphi_{ij}(t)\,P_{ij}(t) + \big(1 - \varphi_{ij}(t)\big)\,G_j(t), \quad (2)$$

where $p_{ij}(t)$ is the local attractor of particle $i$ in dimension $j$, $\varphi_{ij}(t)$ is a random number uniformly distributed in [0, 1], $P_{ij}(t)$ is the particle best solution, and $G_j(t)$ is the current global best solution. The position of the particle is updated by

$$x_{ij}(t+1) = p_{ij}(t) \pm \beta\,\big|m_j(t) - x_{ij}(t)\big|\,\ln(1/u), \quad (3)$$

where $\beta$, the only parameter of the algorithm, is a positive real number called the creativity coefficient, which adjusts the balance between local and global search. Its definition is given in (4), where $\mathrm{Iter}_{\max}$ is the maximum number of iterations and $t$ is the current one; a common choice decreases $\beta$ linearly over the run,

$$\beta(t) = 0.5 + 0.5\,\frac{\mathrm{Iter}_{\max} - t}{\mathrm{Iter}_{\max}}. \quad (4)$$

$u$ is a random number uniformly distributed in (0, 1), and $m_j(t)$ is the mean best position, defined as

$$m_j(t) = \frac{1}{M}\sum_{i=1}^{M} P_{ij}(t), \quad (5)$$

where $M$ is the size of the population and $P_i$ is the personal best (global extremum) of particle $i$.

In QPSO, the first step is to initialize the population randomly, which includes the position of each particle, the particle best values, and the global best value. Next, the mean best position of each dimension is calculated according to (5). Then each particle is evaluated again, and the particle best and global best solutions are updated according to the fitness values. After that, the particles are updated according to (2) and (3). When the iteration budget or the accuracy requirement is satisfied, the run stops and the optimum is output.

Although the QPSO algorithm is superior to PSO, it still has some disadvantages. Because the particles in QPSO fly discretely, a narrow area where the optimum lies may be missed, and when a great deal of computation is required, QPSO may spend too much time. (A compact sketch of the QPSO update is given below.)
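The following Python sketch implements the QPSO update of Eqs. (2)-(5) as reconstructed above. The linear schedule for β and the use of the personal bests in the mean best position follow common QPSO practice and should be read as assumptions rather than the paper's exact settings.

```python
import math
import random

def qpso_step(positions, pbest, gbest, t, iter_max):
    """One QPSO update: local attractor (2), position sampling (3),
    beta schedule (4), and mean best position (5)."""
    m, d = len(positions), len(positions[0])
    beta = 0.5 + 0.5 * (iter_max - t) / iter_max              # Eq. (4), assumed linear
    mbest = [sum(p[j] for p in pbest) / m for j in range(d)]  # Eq. (5)
    for i in range(m):
        for j in range(d):
            phi = random.random()
            attractor = phi * pbest[i][j] + (1 - phi) * gbest[j]  # Eq. (2)
            u = random.random() or 1e-12                      # guard against ln(1/0)
            step = beta * abs(mbest[j] - positions[i][j]) * math.log(1.0 / u)
            sign = 1.0 if random.random() < 0.5 else -1.0
            positions[i][j] = attractor + sign * step         # Eq. (3)

# Tiny demo: 4 particles in 3 dimensions, one update at iteration t = 1.
pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
qpso_step(pos, [list(p) for p in pos], pos[0][:], t=1, iter_max=100)
print(pos[0])
```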
MapReduce

MapReduce [13] is a programming model proposed by Dean and Ghemawat. Inspired by the map and reduce primitives present in Lisp and many other functional languages, the model was created for processing large-scale data in parallel. The MapReduce infrastructure provides detailed implementations of communication, load balancing, fault tolerance, resource allocation, file distribution, and so forth [1]. Programmers do not need much knowledge of or experience with parallel and distributed programming; they only need to write the map and reduce functions of which the model consists, and can then parallelize an algorithm easily.

In this model, the computation takes a set of key/value pairs. The map function processes the input key/value pairs and emits new lists of key/value pairs, called intermediate key/value pairs; the two lists may be drawn from different domains. The map function is called independently on each input pair, and the parallelization is achieved in this way. After all map invocations are complete, the reduce function is called: the intermediate key/value pairs are grouped by key and passed to it. The reduce phase merges and integrates these intermediate key/value pairs and finally emits the output key/value pairs; the types of the intermediate and output values must be the same. The types of the map and reduce functions can be written as

map: (k1, v1) → list(k2, v2),
reduce: (k2, list(v2)) → list(v2).

Because Google has not released its system to the public, Hadoop, developed within the Apache Lucene project, is generally used instead. This Java-based open-source platform is a clone of the MapReduce infrastructure, and we use it to design and implement our distributed computation.

The MRQPSO Algorithm

The particle swarm optimization algorithm [34] is one of the most popular evolutionary algorithms. It has attracted much attention because of its simple concept, rapid convergence, and good solution quality. However, the algorithm suffers from some weaknesses, such as the premature-convergence phenomenon. Addressing this shortcoming of the original PSO, Sun et al. proposed an uncertainty-based, globally random algorithm named quantum-behaved particle swarm optimization (QPSO) in 2004 [32]. It moves the search into quantum space and lets a particle move to any location with some probability; through this strategy, the premature-convergence phenomenon can be alleviated to a certain degree.

Although QPSO makes satisfying progress on premature convergence, it is not prepared for problems with complex landscapes or problems requiring huge computation. Because the particles of QPSO fly discretely, they may miss a narrow area where the global optimum lies, and as a problem grows more complex, the computational cost increases. We therefore make QPSO parallel and distributed by transplanting the algorithm onto the MapReduce model, and we name the resulting algorithm MRQPSO. The framework of MRQPSO is described in Algorithm 1, and the flowchart is shown in Figure 1.

The proposed MRQPSO partitions the search space into many subspaces. For a D-dimensional search space, the range of each dimension $i$ is cut into $m_i$ parts; the space partition is then complete, and $m_1 \times m_2 \times \cdots \times m_D$ subspaces are obtained [25]. Then, using several servers, several mappers perform QPSO on the different subspaces in parallel and independently. After all the mappers have finished their calculations, the reducer merges and integrates the intermediate values and outputs the best solution. The space partition helps distribute the particles uniformly, which ensures that every area receives particles at the initialization phase. This is effective in preventing the particles from overflying a narrow zone where the optimum may lie, and the parallel mappers help MRQPSO save time. (A small Python simulation of this map/reduce decomposition is given below.)
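To make the decomposition concrete, here is a minimal Python simulation of the MRQPSO workflow: the search space is partitioned into subspaces, a "mapper" runs an independent search in each subspace, and a "reducer" selects the best result. The random-search stand-in for the per-subspace QPSO and all parameter values are simplifying assumptions made purely for illustration.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def partition(bounds, parts):
    """Split each dimension's range into equal parts; yield one subspace per cell."""
    # bounds: [(lo, hi)] per dimension; parts: number of cuts per dimension.
    def cells(dim):
        if dim == len(bounds):
            yield []
            return
        lo, hi = bounds[dim]
        w = (hi - lo) / parts
        for k in range(parts):
            for rest in cells(dim + 1):
                yield [(lo + k * w, lo + (k + 1) * w)] + rest
    return list(cells(0))

def mapper(subspace, evals=200):
    """Stand-in for QPSO on one subspace: random search, returns (best_x, best_f)."""
    best_x, best_f = None, float("inf")
    for _ in range(evals):
        x = [random.uniform(lo, hi) for lo, hi in subspace]
        f = sphere(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

def reducer(intermediate):
    """Select the global minimum among all subspace optima."""
    return min(intermediate, key=lambda pair: pair[1])

subspaces = partition([(-100, 100)] * 2, parts=4)   # 4 x 4 = 16 blocks
best = reducer([mapper(s) for s in subspaces])
print("best value:", best[1])
```

On Hadoop, the mapper and reducer above correspond to the map and reduce functions, with each data block of subspaces processed by an independent mapper.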
Several subspaces are saved as records and form a data block; the mapper is called when a block starts a QPSO procedure. The input key/value pairs denote the messages of the data block: the key is the ID of one record, and the value is the string describing the search space. The mappers then run QPSO on every block independently; once a block has been explored, another block follows immediately. Under ideal conditions, the larger the number of mappers, the fewer blocks a single mapper processes and the fuller the parallelization. In practice, however, starting a mapper takes time. If the data is big enough, the starting time can be neglected, but in our experiments it influences the results to some extent.

After being processed by the mappers, the intermediate key/value pairs come to denote the global optimum position and value of the current data block, as shown in Algorithm 2. The intermediate key/value pairs are then ready for transport to the reduce phase.

MRQPSO Reduce Function. The reduce function is in charge of merging and integrating the information emitted by the mappers. As Algorithm 3 shows, the reducer of MRQPSO selects the minimum over all subspaces. The intermediate key/value pairs produced and transported by the mappers are received by the reducer after all mappers have completed their work. At the reduce phase, the best positions and corresponding fitness values of all blocks are compared with each other, and the minimum among them is selected and output.

Performance of MRQPSO on Test Problems. To validate the proposed MRQPSO algorithm, we first selected 8 functions to evaluate its ability to solve complex problems. These scalable optimization problems come from the CEC 2013 Special Session on Real-Parameter Optimization [37] and are listed in Table 1. All the test composition functions share the same search range, [−100, 100]^D, and all are minimization problems with a global optimum of zero; n is the number of kinds of basic benchmark functions. The parameter settings and environment are as follows. We compared the proposed MRQPSO with the original QPSO algorithm to test optimization performance. Each function was run 20 independent times, and all results are recorded in Tables 2-5. All experiments were run for 2^13 × 900 function evaluations, with D = 10.

(1) QPSO: this algorithm transfers the search from classical space to quantum space; in quantum space, particles can appear at any position, which avoids premature convergence to some degree. The population size of QPSO is 10. (2) MRQPSO: this algorithm is a QPSO implementation on the MapReduce model, which achieves a parallel and distributed QPSO. The population sizes of MRQPSO were 10, 20, and 30, denoted by s in Table 2. The search space is partitioned evenly into 2^11 blocks.

All experiments were run on VMware Workstation virtual machines, version 12.0.0, each with one processor and 1.0 GB RAM. Hadoop version 1.1.2 and Java 1.7 were used in the MapReduce experiments; we used three virtual machines, while the serial algorithm used one. The CPU is a Core i7, and the programming language is Java.
In Table 2, the best, worst, and mean values, the standard deviation, and the running time of MRQPSO with different population sizes are listed. According to the results, as the population size becomes larger, the solutions become worse. This may be because, when the number of particles increases, the number of iterations per particle decreases for the same number of function evaluations, which does not help to improve accuracy.

Comparison with QPSO on Test Problems. The results of MRQPSO are compared with the QPSO algorithm in Tables 3 and 4; the population size is set to 10, and two columns are shown for each item so that the two algorithms can be compared clearly. From Table 3, MRQPSO obtains a better solution on almost all items. For F2 and F3, although the best value of QPSO is lower, MRQPSO nearly meets this value; since the two values are so close, one can consider that both algorithms are trapped in the same local optimum, and because each particle performs fewer iterations, MRQPSO cannot converge to a point as low as QPSO does. In general, MRQPSO performs better on the mean value and standard deviation, which suggests that MRQPSO is more capable of finding the optimum, better at overcoming the premature-convergence phenomenon, and more robust and steady than the original QPSO.

The notable advantage in time is presented in Table 4. From this table, MRQPSO is more effective at saving time, and it appears that the more time QPSO spends, the greater the advantage of MRQPSO. Normally it takes some time to start a mapper; when a problem is so simple that the serial algorithm runs fast, the benefit of parallelization may weaken, as for F1-F3. But when the search time gets longer, the mapper starting time becomes negligible, as for F4-F7, where the running time of the MRQPSO programs is reduced to half that of QPSO.

To summarize, MRQPSO achieves better solution quality at a lower running time; the proposed MRQPSO is thus more suitable and effective for dealing with complex problems.

Comparisons on Nonlinear Equation Systems. Nonlinear equation systems arise widely in many areas, such as economics [38], engineering [39], chemistry [40], mechanics [41], medicine [42], and robotics [43]. Generally, a nonlinear equation system can be described as [44]

$$e_i(\vec{x}) = 0, \quad i = 1, \dots, m, \qquad \vec{x} \in \Omega \subseteq \mathbb{R}^D, \quad (7)$$

where $m$ is the number of equations, $D$ is the dimension of the variable, and $e_i$ is the $i$-th equation of the system. Usually, at least one equation is nonlinear. If a solution makes all the equations of the system true statements, it is an optimal solution of the equation system.

In order to obtain the optimal solutions of a nonlinear equation system, an optimization problem like (8) or (9) can be constructed, whose optimal solutions coincide with those of the nonlinear equation system (7). A standard choice of this kind is the sum of squared residuals,

$$\min_{\vec{x} \in \Omega} f(\vec{x}) = \sum_{i=1}^{m} e_i(\vec{x})^2, \quad (8)$$

with (9) denoting an alternative aggregation of the residuals. In this article, an optimization problem of the form (8) is used to deal with the nonlinear equation systems, and MRQPSO is compared with QPSO on Fun 1-Fun 3. The details of these three problems are listed as follows.
Fun 1: $D = 20$; the feasible region is $[-1, 1]^{20}$. Both equations are nonlinear, and 2 theoretical optimal solutions exist.

Fun 2: $D = 6$; the feasible region is $[-1, 1]^{6}$. All six equations are nonlinear, and infinitely many theoretical optimal solutions exist.

Fun 3: $D = 20$; the feasible region is $[-1, 1]^{20}$. In this system, one equation is linear and the other nineteen are nonlinear; infinitely many theoretical optimal solutions exist.

The parameters and environment used for solving the nonlinear equation systems are as follows. Each algorithm was performed 20 times on each problem independently. All experiments were run for 2^10 × 1000 function evaluations, the population size is 10, and the search space of MRQPSO is partitioned into 2^10 blocks. The results from MRQPSO and QPSO are compared and reported in Table 5. Two aspects are considered in the comparisons: the running time of the two algorithms, and the minimized objective function value obtained. The better results are marked in boldface.

From Table 5, it can be seen that both algorithms perform well on the objective function value. On Fun 1 and Fun 2, MRQPSO has a slight advantage over QPSO on the mean and max values; on Fun 3, and on the min values, QPSO has the advantage. QPSO appears to obtain a much better min value on Fun 1, but in fact the solutions obtained by both MRQPSO and QPSO are very close to the theoretical optimal solutions. In MRQPSO, the computing resources are assigned to different areas, so during the later stages of the search MRQPSO cannot devote as many resources as QPSO to improving accuracy; this may be the reason for the worse performance of MRQPSO on the min value.

From the viewpoint of time cost, however, it is clear that MRQPSO outperforms QPSO in all cases, and the advantage is significant: because three virtual devices evaluate solutions in the feasible space at the same time, the computing task is completed in less time.

Conclusion

This paper developed the MRQPSO algorithm, implementing the serial QPSO in the MapReduce model and thereby parallelizing and distributing QPSO. The proposed method was applied to composition benchmark functions and nonlinear equation systems and obtained satisfactory solutions. Moreover, the comparisons between MRQPSO and QPSO showed that the parallel algorithm outperformed the serial one in both solution quality and time cost. MRQPSO can be considered a suitable algorithm for solving large-scale and complex problems. In order to solve more complex practical problems, a cluster with more servers needs to be constructed and used to test the performance of MRQPSO, which is left as future work.

Algorithm 1 (framework of MRQPSO): Step 1. Divide the feasible space into several subspaces. Step 2. Construct the mapper, which performs QPSO on one subspace and outputs the optimal solution obtained on that subspace. Step 3. Construct the reducer, which selects the best optimal solution over the different subspaces from the mappers. Step 4. Output the best optimal solution and its function value.

Table 1: Benchmark functions used in this paper. Table 3: Comparison between MRQPSO and QPSO on the optimum. Table 4: Comparison between MRQPSO and QPSO on the running time. Table 5: Comparison between MRQPSO and QPSO on nonlinear equation systems.
4,933.6
2017-05-03T00:00:00.000
[ "Computer Science" ]
Hidden Resources of Coordinated XPS and DFT Studies

The electronic configuration of chemically bound atoms, at the surface or in the bulk of a solid, contains traps for energy absorption provided by valence band electron transitions; core-level excitation of any origin is coupled with these traps, forming a multichannel route for energy dissipation. This chapter shows how these channels can be traced by means of X-ray photoelectron spectroscopy (XPS) and density functional theory (DFT). The conformity between energy losses in XPS spectra and electron transitions in the relevant unit cells is verified by the examples of pristine and half-fluorinated graphite C2F and Br2-embedded C2F. A well-coordinated XPS-DFT combination can be useful for materials science, providing exhaustive data on the state and geometry of the atoms in a sample, regardless of its field of application. The valence band is insensitive to the energy source of its excitation. This makes the behavior of the energy losses in the XPS spectra of atoms a descriptor of bonding between these atoms in multicomponent materials. Moreover, the state of any component can be tracked through the change or invariance of the satellites in the relevant XPS spectra obtained in the course of an external influence, thus revealing the wear performance of the material.

Introduction

Fundamental studies in the field of surface science form the grounds for the sustained development of advanced technologies and of new composite materials and catalysts with desired properties [1,2]. The methods of electron spectroscopy are unique tools of basic research, enabling characterization of the structure, composition, and properties of solids and interfaces at the atomic-molecular level. Key properties of a sample are often exposed in its response to core-level excitation. Such specific responses were discovered using tunable synchrotron irradiation and became the basis of advanced techniques [3]. The X-ray absorption fine structure (XAFS) near the photoionization threshold (XANES) reveals the vacant state structure, while the XAFS beyond the absorption edge (EXAFS) exhibits the local geometry [4]. Resonant inelastic X-ray scattering (RIXS) is enabled by the energy and momentum transferred by a photon near the absorption edge and exhibits the intrinsic excitations [5]. Resonant photoemission (RPES) and resonant Auger electron spectroscopy (RAES) disclose the local electronic structure and the correlations in a system, respectively [6].

The use of electron impact instead of X-rays as a source of core-level excitation has revealed similar effects, termed conjugate electron excitation (CEE) [7-11]. CEE shows itself as a set of satellites in disappearance potential spectra (DAPS), which correspond to the valence band (VB) structures of near-surface atoms, including adsorbed species, and to plasmon excitations. Experimental evidence for the CEE phenomenon is based on DAPS spectra obtained from various adsorbed layers, and its mechanism is represented by a combination of ordinary electron transitions. For example, plasmon oscillations are often observed by means of X-ray photoelectron (XPS) and Auger electron (AES) spectroscopies [12,13], and ionization of the VB of the substrate and adsorbed species is the basis of ultraviolet photoelectron spectroscopy (UPS) [14].
Similar satellite structures above different thresholds in DAPS spectra confirm that the core electrons are identical with regard to CEE transitions [7,8]. CEE phenomena represent, as a whole, the multichannel route for energy dissipation within the DAPS probing depth of 2-3 monolayers (ML), which does not undermine the general concepts of electron scattering. Novel and advanced technologies strongly require next-generation materials in the fields of tribology [15-17], hardness [18,19], corrosion and wear performance [20], and many others [21-23]. The progress of materials science in these fields results, in large measure, from fundamental studies by means of modern techniques, including XPS [16,17,21-23]. X-ray photoelectron spectroscopy is a powerful analytical tool; however, the method is limited to the content and charge state of the atoms and cannot disclose the chemical behavior and structural features of the atoms in a sample, which are urgently needed in the case of multicomponent substrates. These properties, and many others, are direct products of DFT. In turn, DFT runs give greatly different results depending on the starting conditions and operational parameters, which have to be found indirectly. A reliable intersection between the XPS and DFT outputs would help to employ the hidden resources of both techniques.

The CEE phenomenon is a true multichannel route of energy dissipation through the VB. The identity of the electronic nature of surface and bulk atoms suggests a similarity of the inelastic electron scattering mechanisms on the surface and in the bulk of a solid. CEE should then also occur under X-ray core-level excitation and manifest itself in the XPS spectra as energy losses. The plasmons in AES and XPS spectra indeed correspond to collective CEE phenomena. Highest occupied molecular orbital-lowest unoccupied molecular orbital (HOMO-LUMO) transitions, which are often used for the assignment of XPS, RPES, and XANES spectra [24,25], are also clear CEE manifestations. Electron energy dissipation accompanying core-level excitation is a general trend of any electronic configuration. Auger transitions are particular cases of the event, resulting from the filling of the core hole. CEE is another route of relaxation, consisting of the redistribution of the photoelectron energy through the accessible valence bands.

Close attention is paid to graphite materials as essential parts of advanced technical devices [26]. Fluorinated graphite serves as an intermediate of graphene synthesis; when embedded with an alkali metal or halogen, it exhibits high chemical stability, improves the anode activity in a fuel cell, becomes a nano-reactor or a storage nano-container for volatile compounds, and so on [27-29]. We have considered highly oriented pyrolytic graphite (HOPG), half-fluorinated HOPG (C2F), and the latter with embedded Br2 (C2FBr0.15) as touchstones of the following concept [30,31]. First, inelastic electron scattering in the near-surface layer and in the bulk of a solid follows the same regularities. Second, X-ray core-level excitation is accompanied by VB transitions revealing themselves as photoelectron energy losses.
Third, linking the XPS and DFT methods via the conformity between XPS spectral structures and theoretical CEE transitions justifies the other DFT products related to the local geometry, chemical bonding, and states of the atoms in a sample.

Experimental

Inelastic electron scattering has been monitored through elastic scattering using DAPS. Disappearance potential spectroscopy is based on threshold core-level excitation by an electron beam of ramped energy E_p [32]. Whenever the incident potential exceeds the core-level energy, a part of the electrons disappears from the elastic current I and produces a dip in the dependence of dI(E_p)/dE_p on E_p. The event occurs if E_p equals the difference between the core and vacant state energies. The spectrum shape is determined by the self-convolution of the vacant density of states (DOS), as the destination of the interacting electrons. The adsorption of test gases over the Auger-clean Pt(100)-(1 × 1) single crystal was performed at 300 K [33]; exposures are given in Langmuirs (1 L = 10^-6 Torr·s). The Fermi level (E_F) in the DAPS spectra corresponds to E_p = 314.8 eV, which is close to the reference Pt 4d5/2 core-level energy [34]. Other experimental details and the spectrum processing can be found elsewhere [7-11].

The XPS studies were performed on a Phoibos 150 SPECS spectrometer (Germany) using monochromatized Al Kα radiation (1486.7 eV). The backgrounds of external and surface energy losses in the XPS spectra were subtracted [35]. Other experimental details, the low-temperature synthesis technique, and the characterization of HOPG and of the pristine and fluorinated C2F can be found elsewhere [36,37]. The Br2 embedding into C2F was performed as in Ref. [38] and resulted in the stoichiometry ~C2FBr0.15.

Theoretical

Geometric parameters and the DOS of the unit cells were calculated by density functional theory using the Quantum ESPRESSO software [39] and the nonlocal exchange-correlation functional in the Perdew-Burke-Ernzerhof parameterization [40]. The interactions between ionic cores and electrons were described by the projector augmented wave (PAW) method [41] with a kinetic energy cutoff E_cut = 40 Ry (320 Ry for the charge-density cutoff) for the plane-wave basis set. The Gaussian spreading for the Brillouin-zone integration was 0.02 Ry; Marzari-Vanderbilt cold smearing was used [42], and van der Waals (vdW) corrections were included [43]. The Pt DOS was calculated using the Perdew-Burke-Ernzerhof functional [44] and the PAW method with the optimized lattice constant 3.99 Å; the kinetic energy cutoff E_cut = 40 Ry and a 12 × 12 × 12 grid of Monkhorst-Pack k-points were applied.

Half-fluorinated graphite, pristine and embedded with the Br2 molecule, was modeled with bilayer C24F12 and C24F12Br2 unit cells, respectively, with the optimized lattice parameters a = 2.49 Å × 3, b = 2.48 Å × 2 for the hex structure and a = 2.50 Å × 3, b = 2.48 Å × 2 for the Bernal structure. The F atoms were attached to C atoms either all outside, or half inside and half outside, a cell (Figure 1), with 40 Bohr of space between the slabs to prevent interactions. In the latter case, the entry interlayer distance d_layer in the C24F12Br2 unit cell was taken 2 Å larger than the optimized d_layer in the C24F12 unit cell (in order to avoid unrealistic Br2F formation or F ↔ Br2 replacement under relaxation of the system), in line with experimental measurements [46]. All atoms were allowed to move freely under the optimization of the unit cells.
The Brillouin-zone integration was performed on a 20 × 20 × 1 grid of Monkhorst-Pack k-points [47]. The accuracy was verified by testing the energy convergence. The default numbers of bands were used for the free bromine particles.

Inelastic Electron Scattering in the Near-Surface Layer

DAPS theory directs core and primary electrons to the nearest vacant states above E_F [32]. The larger the vacant DOS, the larger the spectral dip; a lack of free DOS gives no signal. The DAPS technique uncovers all channels of elastic electron consumption, which are specifically related to CEE phenomena and consist of shake-up and shake-off VB transitions coupled with the threshold core-level excitation of an atom. These channels are electron transitions from the ground-state VB, σ_VB, to the vacant DOS, σ_Vac, and to the vacuum level, whose probabilities W(E) are proportional to the corresponding convolution and to σ_VB, respectively, on the absolute energy scale relative to E_F, with a matrix element f(E, σ):

$$W_{\rm up}(E) \propto f(E,\sigma)\,(\sigma_{\rm VB} * \sigma_{\rm Vac})(E), \qquad W_{\rm off}(E) \propto f(E,\sigma)\,\sigma_{\rm VB}(E)\,\rho_{\rm 1D}(E). \quad (1)$$

The shake-off CEE moves σ_VB to the free DOS at the vacuum level. According to the Rutherford relation $d\sigma/d\Omega \sim \sin^{-4}(\Theta/2)$, the cross-section for nonrelativistic scattering is largest at small angles Θ [48,49]; the probability W_off(E) in Eq. (1) therefore includes the one-dimensional (1D) free DOS, ρ_1D. According to the Van Hove singularities, the 1D DOS is zero, infinite, and finite at energies below, at, and above the vacuum level, respectively [50], as shown in Figure 2(a) (where φ_Pt = 5.6 eV is the work function of Pt(100) [51]). This provides the resonant CEE behavior and multiple tracing of the adsorbed species (including hydrogen atoms and reaction intermediates) around different thresholds [8-11]. The shake-off satellite of an adsorbed particle is an intense peak of 1-2 eV base width and coverage-proportional intensity, located at the particle's ionization potential above the Pt threshold. The DAPS spectra in Figure 2, in particular, exhibit the σ state of the H_ad atom and the 1π, 5σ, and 4σ states of the CO_ad molecule, which fit the published UPS data in Table 1 [52-55]. Similar accordance between DAPS and UPS measurements has been found for the adsorbed O, N, NO, and NH species [7].

The Pt shake-off spectrum in Figure 2(b, c) was constructed from the DFT data as follows: the VB was inverted (because the larger σ_VB, the larger W_off(E) in Eq. (1), and the larger the spectral dip), differentiated, and shifted to higher energy by φ_Pt. The adsorbed layer makes a significant contribution to the DAPS spectrum due to the superior surface sensitivity of this technique, whose probing depth of 2-3 ML is determined by half the electron mean free path in a solid. The Pt shake-up spectrum in Figure 2(b) corresponds to the convolution of the occupied and vacant d states by Eq. (1). The calculated Pt shake-up and shake-off spectra in Figure 2 are close to each other because of the strongly localized vacant states at E_F and the 1D DOS at the vacuum level, respectively, and because the equal d_zx, d_zy, and d_xy states dominate the total DOS [30]. The agreement between the experimental and simulated spectral fragments in Figure 2 implies the regular involvement of the Pt DOS in CEE events, similar matrix elements f(E, σ) for the different partial densities of states (pDOS) in Eq. (1), and no symmetry ban on CEE transitions. (A numerical sketch of the shake-up convolution is given below.)
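As an illustration of how the shake-up channel of Eq. (1) can be evaluated numerically, the following Python sketch computes the convolution of an occupied and a vacant DOS on a common energy grid. The Gaussian model densities and the unit matrix element are assumptions made purely for illustration, not the Pt pDOS used in this study.

```python
import numpy as np

# Energy grid (eV relative to the Fermi level) and model densities of states:
# a Gaussian occupied VB below E_F and a Gaussian vacant DOS above E_F.
E = np.linspace(-15.0, 15.0, 601)
dE = E[1] - E[0]
dos_vb = np.exp(-((E + 4.0) / 2.0) ** 2) * (E < 0)    # occupied states
dos_vac = np.exp(-((E - 2.0) / 1.0) ** 2) * (E > 0)   # vacant states

# Shake-up probability ~ convolution of occupied and vacant DOS (f(E, sigma) = 1).
w_up = np.convolve(dos_vb[::-1], dos_vac, mode="full") * dE

# Energy-loss axis: the transition energy is the separation between the
# occupied initial state and the vacant final state.
loss = np.arange(w_up.size) * dE + (E[0] - E[-1])
peak = loss[np.argmax(w_up)]
print(f"most probable shake-up energy loss: {peak:.2f} eV")
```

With the model densities above, the peak lands near 6 eV, i.e., at the separation between the centers of the occupied and vacant distributions, as the convolution picture predicts.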
Furthermore, the surface plasmon disappears, while the bulk plasmon decreases with coverage in Figure 2(b), because of screening by the adsorbed layer, in contrast to the behavior of the VB CEE satellites [9]; multiple plasmons were also detected at the relevant energy points [7]. The conformity between the calculated and experimental data in Figure 2 indicates that the vacancies, at E_F as well as at the vacuum level, are appropriate destinations for the electrons in their CEE transitions. The present consideration confirms the generality of CEE phenomena in inelastic electron scattering in adsorbed systems. CEE control is an additional tool of the electron analyzer for fingerprinting the adsorbed layer and an alternative to RPES, which requires tunable synchrotron irradiation and special instrumentation [6]. Besides, DAPS provides the vacant state structure and geometric parameters similarly to XANES and EXAFS, respectively [56]. CEE satellites of an adsorbed species accompany the threshold core-level excitation of the neighboring atom bound to that species, while the core-level energies are easily distinguishable. Therefore, CEE control enables localization of the adsorbed species over a multicomponent surface. The CEE regularities in the near-surface layer can be summarized as follows:

• Shake-up transitions correspond to the convolution of the occupied and unoccupied pDOS of the same atom and likely have no symmetry ban.

• Shake-off transitions are available for the substrate atom as well as for the adsorbed species, where the former is the energy source; the 1D DOS at the vacuum level is the common VB destination. There is no symmetry prohibition, and the satellite structure is a VB mirror image with respect to E_F, shifted to higher energy by the work function.

• Plasmon oscillations give evidence for the collective CEE phenomenon.

Conjugate Electron Excitations in the Bulk of a Solid

The CEE phenomena occur with high probability in the near-surface layer of 2-3 ML [7-10]. The electronic affinity between surface and bulk atoms suggests similar channels of inelastic electron scattering under core-level excitation, whether by primary electrons or by X-ray irradiation. The photoelectron can partially lose its energy to a CEE transition and thereby exhibit the VB peculiarities in the fine XPS spectral structure. By analogy with the above findings, the CEE phenomena in the bulk of a multicomponent material, under nonresonant X-ray core-level excitation, can be characterized as follows:

• Shake-off transitions are available, for which the pDOS must be considered because of probable differences in their matrix elements. The same ground state (the VB), the common destination (the vacuum level), and the sufficient energy excess of any of the photoelectrons should result in analogous energy losses in the XPS spectra of the different components of a sample.

• Shake-up transitions are available, for which the convolution should include the pDOS of the same atom. The VB of chemically bound atoms has no preference as to which photoelectron detaches its energy for the CEE transition, so similar energy losses are also expected in the XPS spectra of the different components of a sample.

This chapter focuses on DAPS, omitting the allied threshold-excitation techniques of Auger electron and soft X-ray appearance potential spectroscopies, which follow the core-hole decay and are complicated, at least, by the electron-core hole interaction [32].
In contrast, DAPS registers the origination of the core hole, when the electron-hole interaction has not yet occurred or is minimal. The same is true for the energy losses in XPS spectra, because the photon absorption and the CEE energy dissipation can proceed at once or shortly after one another, thus eliminating or minimizing the electron-core hole interaction, respectively. It is worth noting that CEE events are enabled by the belonging of the former core and valence band electrons to the same configuration; they can hardly be detected by electron energy loss spectroscopy, while AES spectra are usually complicated by the background.

HOPG and Half-Fluorinated Graphite C2F

DFT runs have revealed that the densities of states in Figure 3(a), obtained for the C24 unit cells with the Bernal and hex structures, are similar and close to the DOS of graphite/graphene [58]. The optimized C-C bond length d_C-C = 1.42 Å and the interlayer distance d_layer = 3.34 Å in hex C24 also fit those of graphite. The Bernal C24 structure shows a larger formation energy and a smaller d_layer, by ~0.3 Å, due to the stronger interaction between the layers as compared to the hex C24 structure.

Conventional satellites on the higher-energy sides of XPS spectra indeed correspond to photoelectron energy consumption. The CEE approach enables a complete description of the HOPG XPS C1s spectrum in Figure 3(b) by a combination of shake-up and shake-off CEE transitions. The satellite at ~5.5 eV in Figure 3(b) is assigned to a π plasmon responsible for the π → π* transition [59], although the standard plasmon is related to collective oscillations of the free electrons that are missing in HOPG [60]. This satellite originates from the shake-off rather than the shake-up p_z transition (Figure 3(b)).

There is an expected similarity between the higher-energy tails of the (F)C1s (C bound to F) and F1s XPS spectra in Figure 4, which emphasizes the indifference of the VB with respect to the energy source of the CEE transition. The fine XPS spectral structures above 10 eV conform well to the shake-up (Figure 4(a)) and shake-off (Figure 4(b)) CEE transitions calculated by Eq. (1). The matrix elements f(E, σ) in Eq. (1) were set to unity for the W(E) basis set, while the Y-scale factors in Figure 4(a, b) evaluate f(E, σ) and the contribution of a particular CEE transition to the total theoretical energy consumption, so as to fit the experimental photoelectron energy losses.

The π plasmon at ~5.4 eV in Figure 4(a, b) is assigned to the conjugated π bonds in a chain of C atoms [37]. This feature fits the shake-up C p_z transition and accompanies the C1s spectrum, since the π bond is localized exclusively at the C sp² atom not bound to F. A similar feature is exhibited in the C K-edge, but not in the F K-edge, XANES spectra of C2F [29], and the π plasmon is certainly not observed in the XPS C1s spectrum in the absence of π bonds [61]. The discovery of a similar satellite in the F1s spectrum is rather confusing at first (the F atoms have nothing to do with the π bond between C atoms), but it is quite in line with the CEE model. The baseline shift relative to C1s = 285.1 eV in Figure 4 (C not bound to F) preserves the accordance with the DFT data and allows a contribution to the ~5.4 eV energy loss from the other shake-off transitions ((F)C p_y, F p_y, F p_z). The formation energy of the Bernal unit cell C24F12 is 0.008 eV higher than that of the hex C24F12 cell, while the pDOS of the two structures are very similar.
A sizable DOS at E_F supports plasmon oscillations that can give an energy loss at 9.1 or 12.9 eV for 1 or 2 free electrons per C2F fragment, respectively [30].

Br2-Embedded C2F

The XPS spectra obtained after Br2 intercalation into the C2F matrix reveal the stoichiometry C2FBr0.15 and a lack of new features compared with the XPS spectra of pristine C2F (Figure 4); the C1s and F1s spectra are in agreement with the same CEE transitions as before the Br2 embedding. DFT studies were performed for the Bernal and hex C24F12Br2 unit cells #1-9 in Table 2, at entry angles α0 = 0° and 90° between the Br-Br axis and the C planes and with different arrangements of the F atoms (Figure 1). The DFT calculations revealed that the Br2 embedding enlarges the interlayer distance but insignificantly affects the pDOS of the C, (F)C, and F atoms [30]. The latter conforms to the chemical inertness of the pristine C2F cell and to the low Br content in the product C2FBr0.15 [62]. The invariant pDOS of the C and F atoms and the slight changes in the C1s and F1s XPS spectra after the Br2 embedding into the C24F12 framework restrain the correlation between the XPS and DFT outputs. The novelties in the XPS and DFT data resulting from the Br2 embedding concern the bromine only and are considered in more detail.

[Figure 4 caption: XPS C1s and F1s spectra of C2F (relative to E_Core = 287.6 eV and 687.4 eV, respectively; the backgrounds of external and surface energy losses are subtracted [35]) and the shake-up (a) and shake-off (b) transitions of the (F)C and F atoms forming the C-F bond in the hex C24F12 unit cell (all F outside); (c) shake-up and (d) shake-off CEE transitions for different arrangements of the F atoms (Figure 1). Adapted from [31].]

The difference F1s spectrum in Figure 5 exhibits a distinct structure, which conforms to the shake-up transitions of the pDOS responsible for C-F bonding and which is interpreted as a strengthening of the C-F bond [30]. This can indeed be the case, since the Br2 embedding weakens the interactions between the carbon layers, which should be accompanied by an enrichment of the occupied DOS of C and F. The arrangement of the F atoms in a cell does not matter for this conclusion.

The DFT studies found a set of C24F12Br2 cells with optimized parameters and with different local geometries and states (atomic, molecular, and chain type) of the embedded Br2 (Table 2). Each of the nine local structures is plausible, and no preference can be given to a particular cell from the conventional DFT study alone. Each of these unit cells is characterized by a specific Br pDOS structure [30]. DFT examinations of separate bromine species revealed a strong difference Δ_s-p = 0.2-1.3 eV between the weighted average energies <E_s,p> of the Br s- and p-DOS, far beyond the accuracy of the DFT runs (~0.01 eV), for the Br atom with single, several localized, and diffuse s- and p-DOS, respectively. In the same way as the binding energy determines the oxidation state in extensive XPS practice, the parameter Δ_s-p was taken as a descriptor of the Br state [30]; a sketch of its evaluation is given below. For the unit cells with all F atoms outside, the minimal deviations of Δ_s-p, of 0.02, 0, and 0.04 eV, make preferable the cells #1, #2, and #3, respectively (Table 2). Moreover, in the case of a large d_BrBr (the Br-Br bond is lost), the difference Δ_s-p in a cell should be close to that of free Br (Table 1). Besides, the reaction C24F12 + Br2 → C24F12Br2 is endothermic for cell #1, in contrast to the other cells [30].
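The Δ_s-p descriptor reduces to first moments of the projected densities of states. The following Python sketch computes the weighted average energies <E_s> and <E_p> and their difference from tabulated pDOS arrays; the Gaussian model data are placeholders, not the calculated Br pDOS of this study.

```python
import numpy as np

def weighted_average_energy(energies, pdos):
    """First moment <E> of a pDOS: sum(E * pDOS) / sum(pDOS)."""
    pdos = np.asarray(pdos, dtype=float)
    return float(np.sum(energies * pdos) / np.sum(pdos))

# Placeholder energy grid (eV) and model Br s- and p-pDOS.
E = np.linspace(-25.0, 0.0, 251)
pdos_s = np.exp(-((E + 6.0) / 0.8) ** 2)   # narrow s states
pdos_p = np.exp(-((E + 5.0) / 1.5) ** 2)   # broader, shallower p states

delta_sp = abs(weighted_average_energy(E, pdos_s)
               - weighted_average_energy(E, pdos_p))
print(f"Delta_s-p = {delta_sp:.2f} eV")
```

With these placeholder densities the descriptor comes out near 1 eV, i.e., within the 0.2-1.3 eV span quoted in the text; comparing such values between a cell and the free species is what selects the preferable unit cells.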
The bromine p_z and s shake-off transitions conform to the unresolved 5-12 eV fragment and to the ~20 eV shoulder of the Br3d spectrum in Figure 6(a), respectively. The higher-energy parts of the XPS Br3d, F1s, and C1s spectra are similar [30], but they do not correlate with any of the Br CEE transitions. This indicates a bonding between the Br2 molecule and the C2F frame such that the Br3d photoelectron loses energy via CEE transitions of the pDOS of the C and F atoms.

DFT calculations for the C24F12Br2 unit cells with the F atoms half inside and half outside the cell revealed two stable Br2 states. The first state, in cells #7 and #9, corresponds to Br2 pairs (Table 2), which are separated from each other in the adjacent cells and exhibit the same Br-Br distance, d_BrBr ≈ 2.29 Å, as in a free Br2 molecule. The second state (cells #6 and #8) corresponds to an arrangement of Br as a chain, in which d_BrBr ≈ 2.44 Å within a unit cell is larger than in free Br2, while the nearest distance between the Br atoms of adjacent cells, 2.73 Å, is smaller than the nearest intermolecular distance in solid Br2, ~3.37 Å [63], and still sufficient for the vdW interaction [45]. There is no visible difference between the Br pDOS of cells #6 and #8 (chain-type Br), while there is a small difference between the Br p_x and p_y states of cells #7 and #9 (molecular Br2). Finally, the weighted-average Δ_s-p values are in line with the molecular Br state for cells #7 and #9, and with a state close to that of free Br for the chain-type cells (Table 2).

[Table 2: C24F12Br2 unit cells with different layouts of the F atoms (Figure 1).]

The ~20 eV shoulder in the Br3d spectrum in Figure 6(b) conforms well to the shake-off transition of the s state in unit cells #8 and #9 with different arrangements of the Br atoms, and there is no solid reason to prefer a particular cell. The shape and location of the 5-12 eV spectral fragment in Figure 6(b), with due regard to the baseline of this energy region, are consistent with a comparable mixture of the CEE transitions calculated for unit cells #8 and #9. According to the DFT data, there is little difference between cells #6 and #8 (chain-type Br) with respect to the shake-off p_y transition, while cell #6 beats #8 by ~5% in formation energy. On the contrary, using cell #7 (hex) instead of #9 (Bernal) (molecular Br2) results in larger discordance with the XPS data because of their specific p_x structure. Finally, the unit cell C24F12Br2 #6, with the chain-type Br layout, is preferable among the others, whereas the energy losses in the Br3d XPS spectrum suggest a ~1:1 mixture of unit cells #6 and #9. Experimental data have reported an angle α ≈ 30° between the Br-Br axis and the C planes and the molecular Br2 state for systems similar to C2FBr0.15 [38,46]. The current combination of XPS and DFT outputs suggests the possibility of a chain-like bromine arrangement, which can coexist with the molecular one; Table 2 lists the parameters found for the most probable unit cells #2, #6, and #9.

Outlook

Any novel approach, including the CEE model, can be truly evaluated only by the benefit of its practical application. Using the example of graphite-based materials, this chapter has shown how a confluence of the XPS and DFT data can provide additional information on the chemical behavior, local geometry, and state of the embedded Br2 molecule and of the other atoms in a sample.
A similar treatment can be useful in the field of surface engineering as well, because only a deep knowledge of the chemical behavior of a sample can disclose the mechanism and dynamics of its wear performance, thus facilitating the development of advanced materials. The valence band of chemically bound atoms is insensitive to the photoelectron whose energy it uses for the CEE transition. Then, in the case of multicomponent materials, agreement or disagreement between the energy losses in the XPS spectra of some atoms can be a descriptor of the presence or absence of chemical bonding between these atoms in a sample [17,19,22]. Moreover, the state of any component can be traced through the change or invariance of the satellites in its XPS spectra, obtained in the course of an external influence, thus revealing the wear performance of the material. In the case of a "simple" material, the reliable structural data can be used as starting conditions for an appropriate DFT run, which gives comprehensive information on the sample at the atomic level [16,18]. The extensive use and practice of the XPS and DFT techniques make the CEE analysis easily accessible. CEE control in a coordinated XPS and DFT study is characterized by the following obvious, verified, and hidden resources:

• Comparison of fine XPS spectra with the calculated CEE transitions can provide the local geometry and bond types in a sample from conventional DFT facilities. The occurrence and consequences of bonding between atoms can also be determined, because the core-level excitation of an atom is accompanied by CEE satellites of the next one only within an integrated valence band.

• The multiple CEE controls (around different XPS peaks) facilitate the data interpretation, while the individual sets of core-level energies improve the study of multicomponent materials.

• Hydrogen tracing by XPS, as a specific CEE satellite above the core-level energy of another sample component, is possible without contradicting XPS principles.

• CEE control is available for samples of any conductivity, because the photoelectron energy losses are linked to the XPS peak regardless of its apparent core-level energy.

• The valence band structure in the XPS spectra differs from that obtained by nondestructive and theoretical methods. The CEE event is nondestructive as well, because the valence band absorbs only a part of the photoelectron energy for a CEE transition, avoiding the destructive force of the incident X-ray impact.

Conclusion

The primary collection of extra data by a routine technique is always desirable. This chapter highlights a rational model that offers a chance to realize this desire using conventional XPS and DFT outputs. The model is based on the following statements:

• The electronic configuration of the atoms in a solid holds the traps for energy absorption, such as valence band electron transitions, and the core-level excitation of any origin fills those traps, forming multiple channels for energy dissipation.

• These channels can be traced by XPS, as photoelectron energy losses, and by DFT, as valence band electron transitions. This pattern does not conflict with the general concepts of electron-solid interaction and has been well verified in model studies of Pt and graphite-based materials.

• The intersection of the XPS and DFT outputs carries out two duties. First, it rejects those DFT results that do not conform to the fine XPS spectral structures.
Second, it justifies the assignment of the refined DFT data, related to an appropriate unit cell, to a given XPS sample. As a result, a correlated XPS and DFT study discloses hidden potentialities of both techniques and provides extra data on the chemical behavior and local geometry of the atoms in a sample.

• The procedure of a coordinated XPS and DFT study highlighted here can provide a deeper insight into the mechanism of the wear performance of a material, thus facilitating the development of advanced composites.
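As a minimal illustration of the bonding-descriptor idea formulated above (agreement of the energy losses across the core-level spectra of different elements indicating a shared valence band), here is a hedged Python sketch. The alignment window, the use of a Pearson correlation, and the synthetic spectra are all assumptions of this example, not a prescription from the chapter.

```python
import numpy as np

def loss_region(energy, intensity, window=(3.0, 25.0), npts=200):
    """Re-sample a spectrum on a loss axis relative to its own main peak.
    Assumes the energy axis is monotonically increasing."""
    peak = energy[np.argmax(intensity)]
    loss = np.linspace(*window, npts)
    return np.interp(peak + loss, energy, intensity)

def bonding_descriptor(spec_a, spec_b):
    """Pearson correlation of two loss regions (values near 1 would hint at
    a shared, integrated valence band; low values at its absence)."""
    a, b = loss_region(*spec_a), loss_region(*spec_b)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Synthetic demo: two core levels with identically shaped satellites.
e = np.linspace(280, 320, 400)
a = np.exp(-((e - 285) / 0.8) ** 2) + 0.2 * np.exp(-((e - 293) / 2) ** 2)
b = np.exp(-((e - 300) / 0.8) ** 2) + 0.2 * np.exp(-((e - 308) / 2) ** 2)
print(bonding_descriptor((e, a), (e, b)))   # close to 1 for matching losses
```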
SUSY enhancement from T-branes

We use the F-theoretic engineering of four-dimensional rank-one superconformal field theories to provide a geometric understanding of the phenomenon of supersymmetry enhancement along the RG flow, recently observed by Maruyoshi and Song. In this context, the superpotential deformations responsible for such flows are interpreted as T-brane backgrounds and encoded in the geometry of elliptically-fibered fourfolds. We formulate a simple algebraic criterion to select all supersymmetry-enhancing flows and, without any maximization process, derive the main features of the corresponding N=2 theories in the infrared.

Although the vacuum expectation value of M is nilpotent, the fluctuations around it make M non-nilpotent. This allows us to read the effects of the deformations at the level of geometry, in the M/F-theory lift of the probed T-brane configurations. This is in contrast to T-branes of the nilpotent type, which are completely invisible to the M/F-theory geometry. It is precisely by looking at the behavior of this geometry along the RG flow that we will be able to conclude whether supersymmetry will enhance or not in the IR.

Our logic can be summarized as follows for the 4d case (the 3d one is analogous): We start with the local geometry of a Calabi-Yau twofold in F-theory. The deformation is then encoded in the hypersurface equation for a Calabi-Yau fourfold, which characterizes the flow of the N=1 theory between the fixed points. If supersymmetry is to enhance at the IR fixed point, the fourfold must factorize into a Calabi-Yau twofold and a trivial factor. By treating the various hypersurfaces homogeneously, we will be able to determine in which cases this factorization (and thus the enhancement) occurs, and in which cases instead the probed IR geometry is higher-dimensional, giving rise to 4d theories with only four supercharges. Finally, for the cases that display supersymmetry enhancement, we will find the candidate IR N=2 theory together with the correct conformal dimension of its Coulomb-branch operator, by reading off the corresponding probed geometry. Our results are in perfect agreement with those known from the field-theory analysis. Remarkably, we do not make use of any maximization procedure in our study, but only perform algebraic manipulations to derive the relevant IR quantities.

The paper is organized as follows: In Section 2 we discuss the case of Abelian theories in three dimensions and their realization in M-theory. This part also serves to illustrate our approach and how the infrared effective geometry can be analyzed. In Section 3 we turn our attention to four-dimensional theories living on D3-branes probing F-theory singularities. We derive the conditions one has to impose in order to have supersymmetry enhancement and describe explicitly in four non-trivial cases how to determine the low-energy theory from the underlying geometry. We discuss three cases of infrared enhancement and one case in which supersymmetry does not enhance. In Appendix A we collect some basic properties of the RG flows of interest to us, and provide an alternative derivation of the criterion for enhancement based on the Seiberg-Witten geometry alone. This is then applied to perform a detailed scan of all the nilpotent orbits for rank-1 theories engineered in F-theory. Only the orbits for which enhancement occurs pass our criterion.

Warm-up: SQED in three dimensions

In this section we would like to introduce the main ideas behind our investigation.
We do it in the context of 3d Abelian field theories because, on the one hand, the analysis is technically easier, and on the other hand, this allows us to lay down the general string-theory set-up which, with a few important variations, will also be relevant for the study of 4d theories.

Geometric set-up

The theory we start from is engineered by type IIA string theory with a single D2-brane probing a stack of N D6-branes in flat space-time. The branes are extended as in the following table (the D2 spans the directions 0,1,2, and the D6-stack the directions 0,1,2 together with 4,...,7):

  Type IIA | 0 1 2 | 3 | 4 5 6 7 | 8 9
  D2       | x x x |   |         |
  D6       | x x x |   | x x x x |

It is well known that the low-energy theory living on the probe is a 3d N=4 field theory with U(1) gauge group and N hypermultiplets (originating from 2-6 strings) transforming in the fundamental of the U(N) flavor symmetry. We call {Q_i, Q~^i}_{i=1,...,N} the chiral components of such hypermultiplets, with gauge charge +1, -1 respectively. Strings stretching from the D2 to itself, instead, give rise to a vector multiplet and a neutral hypermultiplet. The former comprises a vector field A_mu, a complex scalar field phi describing the motion of the probe transverse to the D6-stack, i.e. in the (8,9)-plane, and a real scalar field sigma for the motion along direction 3. The latter, whose chiral halves we call s_1, s_2, is associated to the motion of the probe in the directions longitudinal to the D6-stack, namely along 4,5,6,7, and in the N=4 theory is a free field. Interactions are described by a superpotential of the form

  W = phi Q_i Q~^i ,   (2.1)

up to normalization. This theory has non-trivial IR physics, and here we are mostly interested in the Coulomb branch of its moduli space, which can be described as follows [39,40]. We first Hodge-dualize the photon to a real scalar gamma. The latter is periodic and lives on a circle of radius equal to the square of the gauge coupling g. Then we cast the fields gamma, sigma into the so-called "monopole operators":

  V_± ~ e^{±(sigma + i gamma/g^2)} .   (2.2)

The above are to be interpreted as classical relations, valid far out along the Coulomb branch, where they satisfy the obvious constraint V_+ V_- = 1. However, at distances of order g^2 from the origin, the Coulomb branch drastically deviates from a cylindrical shape, due to strong quantum corrections. For a definition of the monopole operators V_± valid in the full quantum theory, see [41]. Quantum corrections turn the Coulomb branch into an ALF space of N-center Taub-NUT type, (2.3). This phenomenon is elegantly described by the M-theory lift of the above type IIA set-up, where the D6-branes precisely become the N-center Taub-NUT space described by (2.3) and probed by an M2-brane, while the gauge coupling gets "geometrized" into the radius of the 11th dimension. The IR fixed point of the theory on the probe corresponds to sending the gauge coupling to infinity, and thus to turning the Taub-NUT into C^2/Z_N, an ALE space with an A_{N-1}-type singularity at the origin. Given C^3 with coordinates u, v, z, this space is conveniently modeled by the following holomorphic surface

  u v = z^N ,   (2.4)

where u, v are to be understood as "fiber" coordinates, whereas z parametrizes the base, being identified with the field phi.
The M-theory configuration we are considering is summarized in the following table (the M2 probe spans 0,1,2, while the multi-center Taub-NUT geometry fills the direction 3, the (8,9)-plane, and the M-theory circle 10):

  M-theory | 0 1 2 | 3 | 4 5 6 7 | 8 9 | 10
  M2       | x x x |   |         |     |
  TN_N     |       | x |         | x x | x

We are now interested in adding a specific class of field-dependent relevant deformations to the superpotential (2.1), which can be described as

  delta W = Tr(M mu) ,   (2.5)

where mu^j_i := Q_i Q~^j is the meson matrix (or the so-called "moment map" associated to the U(N) flavor symmetry), and M is a gauge-invariant chiral superfield that we are adding to the theory, transforming in the adjoint of the flavor group. The string-theory set-up we are using to engineer the field theory leads us to interpret the extra field M as the vacuum expectation value of the "Higgs field" Phi of the D6-branes, namely of the background field whose spectral data describe the motion of the D6-stack in the (8,9)-plane:

  u v = P_M(z) ,   (2.7)

where P_M := det(z 1 - M) denotes the characteristic polynomial of the N x N matrix M. In this paper, we will mostly be interested in the kind of deformations considered in 4d by Maruyoshi and Song [1], and thus will let M acquire a nilpotent vacuum expectation value. As is well known, this breaks the adjoint representation of SU(N) into a sum of SU(2) representations, labeled by the spin. As a result, only the components delta M_{(j,-j)} will remain coupled, where the subscript denotes the lowest state of the representation with spin j. We will associate s_1, s_2 with the fluctuations delta M_{(j,-j)} of the two highest spins. It is immediate to see that these deformations originate from a Higgs-field background whose characteristic polynomial is trivial, P_<Phi>(z) = z^N, which corresponds to having formed a T-brane of the D6-stack [15]. However, as we will see momentarily, these are the types of T-branes which do deform the geometry of the Coulomb branch: Indeed, due to the fluctuations, the spectral data of M will not be empty.

Before starting to analyze the consequences of the RG flows generated by (2.5), it is worth remarking here that, clearly, this realization of the field theory forces M to contain at most two singlet fields. As we will see, for the purpose of discussing supersymmetry enhancement, this will be enough and will not constitute a limitation at all. Actually, in the 3d Abelian theory treated in this section, even retaining just one of the two singlets will mostly be enough to show when supersymmetry enhances, and in which cases, instead, the IR theory will inevitably possess only N=2 supersymmetry.

SUSY enhancement

Let us start from the simplest case, i.e. N=2. As we increase the number of flavors later, we are going to see an easy pattern allowing us to make general statements about the enhancement. The results we find are in agreement with the field-theory analysis of [42,43]. For SU(2) there is only one non-trivial nilpotent orbit, and a single spin-1 field to be considered. The corresponding deformation matrix is

  M = ( 0  m ; s  0 ) ,   (2.9)

where we made the mass scale explicit through the parameter m, and s denotes the lowest-weight fluctuation. Now, adding the deformation (2.5) to the superpotential (2.1), and integrating out the massive fields, we end up with an effective superpotential of the form (up to inessential numerical factors)

  W_eff = ( s - phi^2/m ) Q_1 Q~_2 .   (2.10)

By flowing to the IR, we look at the system at decreasing energy scales, which effectively means sending the scale m to infinity. At the IR fixed point, therefore, the superpotential simply becomes

  W_IR = s Q_1 Q~_2 ,   (2.11)

which gives us back N=4 supersymmetry.
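The statement that the fluctuation makes the spectral data non-trivial can be checked mechanically. The following SymPy sketch computes P_M = det(z 1 - M) for the SU(2) background; the explicit matrix is our reading of (2.9), so treat it as an assumption.

```python
import sympy as sp

z, m, s = sp.symbols("z m s")

# Higgs-field background for the SU(2) case: nilpotent vev m*sigma^+ plus
# the single spin-1 fluctuation s in the lowest-weight slot.
M = sp.Matrix([[0, m],
               [s, 0]])

P = sp.expand((z * sp.eye(2) - M).det())
print(P)   # z**2 - m*s  ->  deformed Coulomb branch  u v = z**2 - m*s
```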
Alternatively, one can also invoke the chiral ring stability criterion of [42] to remove the second term in W_eff: The F-term equation for s sets Q_1 Q~_2 to zero in the chiral ring, and therefore the phi-dependent term can be dropped without affecting the infrared dynamics. The final theory is however different from the starting one: Neglecting the decoupled chiral multiplet phi, we have obtained an N=4 theory with a single fundamental hypermultiplet. This theory has trivial Higgs branch, and in the IR it is actually equivalent to the theory of a free hypermultiplet. Indeed, using (2.7), its IR Coulomb branch is

  u v = z^2 - s ,   (2.12)

where we have performed a trivial redefinition of the coordinate s. The latter can now be eliminated in favor of u, v, which correspond to the free hypermultiplet. Geometrically, this phenomenon of enhancement amounts to a rather obvious statement: The deformed Coulomb branch is a smooth threefold, and hence it can always be written locally as a twofold times a line. This has a clear physical meaning: The deformed theory is realized on a D2-brane probing a single D6-brane wrapping the curved space z^2 - s = 0, which has a parabolic shape in the complex plane with coordinates z, s (see Figure 1). In the IR, however, the D2 does not dispose of enough energy to "feel" the curvature of the D6, being only able to probe a tiny neighborhood of it around s = 0. At the fixed point, the probe just sees a flat D6-brane at s = 0, which is the reason for the supersymmetry enhancement.

The same pattern holds for the principal nilpotent orbit at higher N, where the adjoint of SU(N) decomposes as Adj = V_1 + V_2 + ... + V_{N-1}, and V_j denotes the SU(2) representation of spin j. Now, in principle, we should retain all of the N-1 spin-(j,-j) fields, which will give rise to several interaction terms in the deformed superpotential. It is immediate to see, however, that the dominant term in the m -> infinity limit is always the one involving the highest spin, which, as before, substitutes for phi in the role of the complex scalar in the N=4 vector multiplet. The physical meaning and the geometric realization of the enhancement are identical to the case discussed above. (The differences are just in higher-order monomials near the origin, neglected at the IR fixed point; for instance, retaining the two highest spins, the Coulomb branch of the deformed theory is a fourfold of the same form up to such higher-order terms.)

For any nilpotent orbit other than the maximal one, the RG flows triggered by deformations analogous to those considered by Maruyoshi and Song in 4d do not lead to supersymmetry enhancement. Let us give two slightly different examples of non-principal embeddings, from which a general pattern can be easily inferred. First consider N=3 in the starting theory, and deform it according to the subregular (or minimal) nilpotent orbit of SU(3), (2.14), where again we only keep the field of highest spin, (1,-1). The Coulomb branch of the deformed geometry now has a singularity of conifold type, (2.15), and consequently the M2 probing this geometry only preserves N=2 supersymmetry. Analogously, in type IIA, the D2 sits near the intersection point of two D6-branes.

As a second example, take SU(4) with the deformation induced by the nilpotent element descending from the [2,2] partition of 4, (2.16). Here, in the decomposition of the adjoint, one finds four spin-1 fields, and hence we need to retain at least a couple of them, s_1 and s_2. Again, at low energies, the probe sees a conifold geometry, (2.17), and the IR fixed point only has four conserved supercharges.
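The conifold-type geometries referred to in (2.15) and (2.17) can be reproduced from hypothetical matrix forms for the non-principal backgrounds. The matrices below are guesses consistent with the stated orbits (the paper's explicit matrices (2.14) and (2.16) are not reproduced here), so the sketch illustrates the mechanism rather than the paper's exact conventions.

```python
import sympy as sp

z, m, s, s1, s2 = sp.symbols("z m s s1 s2")

# Guess for the subregular/minimal orbit of su(3): one Jordan block with
# vev m, dressed by a single spin-1 fluctuation s.
M3 = sp.Matrix([[0, m, 0],
                [s, 0, 0],
                [0, 0, 0]])
print(sp.factor((z * sp.eye(3) - M3).det()))
# z*(z**2 - m*s): the surface u v = P_M(z) is singular at the origin,
# of conifold type after an obvious coordinate change.

# Guess for the [2,2] partition of su(4): two Jordan blocks with vev m,
# each dressed by its own lowest-weight fluctuation s1, s2.
M4 = sp.Matrix([[0, m, 0, 0],
                [s1, 0, 0, 0],
                [0, 0, 0, m],
                [0, 0, s2, 0]])
print(sp.factor((z * sp.eye(4) - M4).det()))
# (z**2 - m*s1)*(z**2 - m*s2): near the origin, u v = P_M(z) reduces to
# the conifold u v ~ s1*s2, so the probe keeps only four supercharges.
# Identifying s1 = s2 = s collapses this to u v = (z**2 - m*s)**2,
# matching the enhancement case mentioned just below.
```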
Interestingly, notice that if we had considered a slightly different deformation of this theory, namely with M as in (2.16) but with s_1 and s_2 identified, we would have found supersymmetry enhancement in the IR, with a non-trivial fixed point: an M2-brane probing the singular space (2.18). It is clear that, in general, for all non-principal embeddings, we end up having at least two chiral fields which remain coupled in the IR theory. Consequently there is no supersymmetry enhancement along the RG flow.

Rank-1 four-dimensional theories

In this section we turn our attention to 4d N=2 supersymmetric field theories of rank 1. The reason why we focus on them is that they have an easy geometric realization in the context of F-theory, which will help us explore the question of supersymmetry enhancement in purely algebraic terms. Our results agree with, and provide an explanation of, the phenomena observed in [2,5,8] using field-theoretic methods. Moreover, for the RG flows leading to supersymmetry enhancement, we are able to compute algebraically the correct IR conformal dimensions of all fields, i.e. without using any maximization procedure.

Geometric set-up

The geometric engineering of 4d N=2 field theories and of their N=1 deformations in F-theory is closely related to the M-theory setting we used to engineer 3d theories in Section 2.1. There are, however, a few important differences which we would like to highlight here. Rank-1 field theories can be realized as theories on a single D3-brane probing a stack of 7-branes in type IIB string theory [46]. Contrary to the previous case, the 7-branes can be mutually non-perturbative, thus realizing exceptional flavor symmetries. The corresponding probe theories are the so-called Minahan-Nemeschansky theories [47]. F-theory provides the set-up to analyze these systems with geometric methods. It uses an auxiliary 2-torus fibered over the physical space, which from the field-theory viewpoint plays the role of the Seiberg-Witten curve. As in M-theory, one probes isolated singularities of elliptically-fibered ALE spaces, which will now be of different Kodaira types, according to the flavor structure. The probe and the singular space for the starting UV theory extend in twelve dimensions as in the following table (the D3 spans 0,...,3; the elliptically-fibered ALE space fills the (8,9)-plane together with the torus directions 10, 11):

  F-theory | 0 1 2 3 | 4 5 6 7 | 8 9 | 10 11
  D3       | x x x x |         |     |
  ALE_fib  |         |         | x x | x  x

In the above table, 10 and 11 represent the torus directions and, unlike in M-theory, do not correspond to any physical operator of the probe theory. Instead, the (8,9)-plane, which the torus is fibered over, is parametrized by the UV Coulomb-branch operator (this is the motion of the D3 probe transverse to the 7-branes). As in the previous section, the coordinates along 4,5,6,7, i.e. s_1, s_2, correspond to a free hypermultiplet parametrizing the motion of the probe along the 7-brane stack, which will become coupled when deforming the theory. Deformations are again formulated as in (2.5), where mu is the moment map associated to the flavor symmetry of the starting theory, which in most of the cases is a non-Lagrangian theory. M is the extra chiral field added to the theory, which, following [44], we will split as

  M = m rho(sigma^+) + Sum_j delta M_{(j,-j)} ,   (3.1)

where the first term is its vacuum expectation value, taken along the nilpotent element rho(sigma^+). Again, for the purpose of discussing the possible appearance of supersymmetry enhancement in the IR, it will be sufficient to restrict the above sum of fluctuations to the two highest spins, s_1, s_2, in the decomposition of the adjoint representation of the original flavor symmetry. We name them such that spin(s_1) >= spin(s_2).
The UV conformal dimension D_UV(.) of such fields is related to their spin by [44] (see also [2])

  D_UV(delta M_{(j,-j)}) = 1 + j ,   (3.2)

as we will review in Appendix A. Hence fields of vanishing spin are free fields. As in the M-theory construction, M induces a deformation of the ALE space in F-theory, because it corresponds to the vacuum expectation value of the 7-brane Higgs field. The probed configuration is again of T-brane type. In this case, however, there is a technical complication: The characteristic polynomial of M does not directly appear in the polynomial defining the geometry, as it did in (2.7). Nevertheless, there exists a precise one-to-one correspondence between the Casimir invariants of M and the versal deformations of the original singular geometry (see e.g. [48]). See Table 1, taken from [49], for a summary of the complete unfolding of the singularities relevant to us in this section; for each Kodaira type, the table lists the corresponding surface equation and flavor symmetry. By using this correspondence, we are able to write down the deformed F-theory geometry for any given nilpotent orbit.

Regardless of which RG flow and which energy scale we look at, the probed F-theory geometry is always going to look like a hypersurface in Weierstrass form,

  y^2 = x^3 + f(z, s_1, s_2) x + g(z, s_1, s_2) ,   (3.3)

where x, y are the auxiliary fiber directions and z parametrizes the UV Coulomb branch. The precise form of the holomorphic functions f, g depends on the choice of UV theory we start with, and on the way we deform it (i.e. on the choice of nilpotent orbit in (3.1)). The ALE space corresponding to the UV theory is retrieved from (3.3) by setting s_1 = s_2 = 0.

SUSY enhancement

Let us now perform an algebro-geometric analysis of the RG flows triggered by the T-brane deformations described above. In particular, our method will tell us which nilpotent orbits are expected to lead to supersymmetry enhancement in the IR, and what the N=2 geometries are that we are supposed to land on in each case. The results are in agreement with [2,5,8].

Approach

The logic is the same as in the previous section: If supersymmetry enhances in the IR, the deformed geometry, given by the (local) Calabi-Yau fourfold in equation (3.3), must factorize into a Calabi-Yau twofold and a trivial factor:

  CY_4  ->  CY_2^IR x C^2 .   (3.4)

Here we made it manifest that the twofold geometry we find in the IR is generally different from the one we start with in the UV. The two are obviously the same if we choose the trivial nilpotent orbit, whereby s_1, s_2 are both free fields (being of spin 0), and thus delta W = 0. In fact, as we will see momentarily, this is the only instance in which the twofold geometries coincide.

To proceed, we employ the following strategy: First, since conformal dimensions of operators may be viewed as C*-assignments for the corresponding algebraic variables, we promote the affine coordinates in (3.3) to projective ones, and require the fourfold polynomial to be homogeneous. Homogeneity is indeed the geometric counterpart of the fact that the relative scalings of the fields in question are invariant under RG flow (see Appendix A for a field-theory proof of this fact).

Next, we assume that supersymmetry enhances at the end of the flow and we make an "educated guess" of what the IR twofold geometry should be. Our Ansatz (3.5) is to take the hypersurface obtained from (3.3) by setting z = s_2 = 0, which means promoting the highest-spin field s_1 to the role of the IR Coulomb-branch operator (or, equivalently, making it the base coordinate for the elliptic CY_2^IR). Note that, on the one hand, choosing to retain s_2 instead of s_1 would not have been a viable option because, due to homogeneity, a decoupling of s_1 would force s_2 to decouple too.
On the other hand, suppressing s_1, s_2, namely imposing CY_2^IR = CY_2^UV, would force z to maintain its UV conformal dimension, again due to homogeneity. This in turn implies a trivial RG flow, and thus a trivial nilpotent orbit. Finally, we must make sure that our Ansatz (3.5) is indeed consistent. In order to do so, we compute the IR conformal dimensions of the various fields and verify that both z and s_2 will hit the unitarity bound and decouple [50], which means:

  D_IR(z) <= 1 ,   D_IR(s_2) <= 1 .   (3.6)

We argue that the deformations for which this happens lead to IR supersymmetry enhancement. In contrast, when at least one of these fields does not satisfy (3.6), the IR fixed point certainly preserves only four supercharges. Based on the above considerations, we can immediately conclude that a non-trivial orbit characterized by an adjoint decomposition whose highest-spin state is multiply populated can never lead to supersymmetry enhancement. This is because D(s_1) = D(s_2) both in the UV and in the IR, and thus the second relation in (3.6) cannot be satisfied. A similar conclusion holds for those orbits where the highest-spin field has UV dimension smaller than or equal to that of the Coulomb-branch operator, because the first relation in (3.6) would be violated.

Conformal dimensions in the IR are computed in a purely algebraic manner, by exploiting the assumption of extended supersymmetry: As is usually done to construct local models in F-theory [51], we imagine the twofold CY_2^IR fibered over the rest of the space, and thus let the homogeneous coordinates x, y, s_1 be sections of suitable powers of the canonical bundle of the base. These powers are the would-be conformal dimensions in the IR. We now impose that the total space of such a fibration be Calabi-Yau, which, using adjunction, amounts to the condition

  D_IR(x) - D_IR(y) + D_IR(s_1) = 1 .   (3.7)

Together with homogeneity, this equation allows us to determine D_IR for all fields. They will coincide with the correct IR conformal dimensions in case (3.6) is satisfied and the assumption of enhancement is thus found to be consistent. In the cases where we find no enhancement, instead, this algebraic method is unreliable, and one cannot bypass the a-maximization process to derive the correct IR scaling of fields. We will now exemplify the strategy outlined above in three qualitatively distinct cases of enhancing RG flows and in one instance without enhancement, deferring to Appendix A a complete scan of rank-1 theories and of their Maruyoshi-Song deformations.

SU(2) gauge theory with 4 flavors and maximal orbit

Among the UV theories we consider, this is the only case with a weakly-coupled Lagrangian description, and the analysis can be performed entirely in a type IIB setting of a D3-brane probing a stack of 4 D7-branes attached to an O7^- plane. We will however proceed using the more powerful geometric setting of F-theory, which can be exported to the strongly coupled cases of the next subsections. The SU(2) gauge theory with SO(8) flavor symmetry arises as the theory on a D3-brane probing an elliptically-fibered ALE space with a D_4 singularity at the origin (Kodaira type I*_0). The corresponding hypersurface equation in C^3 reads (modulo rescalings of the fields)

  y^2 = x^3 + tau x z^2 + z^3 ,   (3.8)

where tau denotes the exactly marginal coupling of this theory. As discussed, we can derive the UV conformal dimension of the Coulomb-branch operator z and the scalings of the auxiliary variables x, y by simply fibering this singular space over the 7-brane worldvolume.
This gives us the following three conditions:

  D_UV(x) - D_UV(y) + D_UV(z) = 1 ,   2 D_UV(y) = 3 D_UV(x) ,   3 D_UV(x) = D_UV(x) + 2 D_UV(z) ,   (3.9)-(3.11)

where the first comes from the Calabi-Yau condition on the total space of the fibration (the UV analog of (3.7)), whereas the others are simply consequences of the homogeneity of (3.8). Solving the above system gives D_UV(z) = D_UV(x) = 2 and D_UV(y) = 3. We are now interested in deforming this theory with M in the principal nilpotent orbit of so(8). The decomposition of the adjoint corresponding to this orbit reads (see Table 2)

  28 -> V_5 + V_3 + V_3 + V_1 ,   (3.12)

where V_j indicates the representation of spin j under the embedded SU(2). We now retain only two of the four extra singlets which remain coupled to the theory, namely the spin-5 field and one of the spin-3 fields, and identify them with s_1, s_2 respectively. The deformation we consider is then of the form (2.5), with M the 8 x 8 matrix given in (3.13). (For a discussion of how to build explicit standard triples for nilpotent orbits of complex simple Lie algebras, see for example [52].) The characteristic polynomial of this matrix reads

  P_M(t) = t^8 + 240 sqrt(6) s_1 t^2 + (120 s_2)^2 ,   (3.14)

from which we see that two of the four independent Casimir invariants of so(8) are activated, i.e. the sixth-order Casimir and the Pfaffian (one of the two fourth-order Casimirs). After a suitable rescaling, they can be identified with s_1, s_2 respectively. We can now derive the versal deformations of the D_4 singularity induced by (3.13), finding the (smooth) fourfold (3.15) (see Table 1). Following our procedure, we make at this point the assumption that this RG flow leads to supersymmetry enhancement and make the Ansatz (3.5) for the twofold in the IR, i.e.

  CY_2^IR :  y^2 = x^3 + s_1 .   (3.16)

This is a smooth local elliptic K3 manifold, with a singular fiber at the origin of cusp form (Kodaira type II). We now compute the new scaling dimensions of x, y, s_1 by using (3.7) and the homogeneity of (3.16), and we find D_IR(x) = 2/5, D_IR(y) = 3/5, D_IR(s_1) = 6/5. Homogeneity of (3.15) then fixes the conformal dimensions of z, s_2 to be D_IR(z) = 2/5, D_IR(s_2) = 3/5, which violate unitarity unless the fields z, s_2 decouple. This makes our Ansatz consistent, and we have found that the N=2 IR fixed point is the theory of a D3-brane probing the space (3.16), i.e. the Argyres-Douglas theory of type H_0. Also, the dimension of its Coulomb-branch operator s_1 (6/5) turns out to be the correct one.

E7 Minahan-Nemeschansky and E6-type orbit

The starting UV theory here is the one living on a D3-brane probing the space

  y^2 = x^3 + x z^3 ,   (3.17)

i.e. a local K3 surface with an E_7 singularity at the origin (Kodaira type III*). Following the usual trick, we compute the scaling dimensions of x, y, z, finding D_UV(x) = 6, D_UV(y) = 9 and D_UV(z) = 4. The deformation we would like to consider corresponds to the nilpotent orbit of e_7 with Bala-Carter label E_6, which has an adjoint decomposition whose two highest spins are 11 and 8 (see Table 4). We therefore identify the spin-11 and the spin-8 fields with s_1, s_2 respectively. Among the 7 independent Casimir invariants of the e_7 algebra, there is one of degree 12 and one of degree 18. By a quick scaling argument which uses equation (3.2), we conclude that both of them are activated and are proportional to s_1 and s_2^2 respectively. There are no other combinations of the two singlets with a degree matching that of any other Casimir invariant.
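The two linear systems just solved, the UV conditions (3.9)-(3.11) and the IR conditions from (3.7) plus homogeneity of (3.16), are small enough to check by hand, but a SymPy sketch makes the procedure explicit:

```python
import sympy as sp

Dx, Dy, Dz, Ds1 = sp.symbols("Dx Dy Dz Ds1", positive=True)

# UV dimensions for the D4 model  y^2 = x^3 + tau*x*z^2 + z^3:
# Calabi-Yau condition plus homogeneity of every monomial.
uv = sp.solve([sp.Eq(Dx - Dy + Dz, 1),
               sp.Eq(2 * Dy, 3 * Dx),
               sp.Eq(3 * Dx, Dx + 2 * Dz)], [Dx, Dy, Dz], dict=True)
print(uv)   # [{Dx: 2, Dy: 3, Dz: 2}]

# IR dimensions for the type-II ansatz  y^2 = x^3 + s1:
ir = sp.solve([sp.Eq(Dx - Dy + Ds1, 1),
               sp.Eq(2 * Dy, 3 * Dx),
               sp.Eq(2 * Dy, Ds1)], [Dx, Dy, Ds1], dict=True)
print(ir)   # [{Dx: 2/5, Dy: 3/5, Ds1: 6/5}]
```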
As can be seen from Table 1, the deformed hypersurface is then (modulo rescalings of the fields)

  y^2 = x^3 + x (z^3 + s_1) + s_2^2 .   (3.19)

We now conjecture that supersymmetry enhances, with an IR twofold of the form

  CY_2^IR :  y^2 = x^3 + s_1 x .   (3.20)

This theory arises on a D3-brane probing the elliptic surface (3.20) with a singularity of Kodaira type III.

E6 Minahan-Nemeschansky and D4-type orbit

Here we start from the theory of a D3-brane probing the following ALE space,

  y^2 = x^3 + z^4 ,   (3.21)

which has an E_6 singularity at the origin (Kodaira type IV*). The scaling dimensions in the UV are: D_UV(x) = 4, D_UV(y) = 6 and D_UV(z) = 3. We consider the deformation associated to the nilpotent orbit of e_6 with Bala-Carter label D_4, which has an adjoint decomposition whose two highest spins are 5 and 3 (see Table 3). We are led to identify the spin-5 field with s_1 and the spin-3 field with s_2. Among the 6 independent Casimir invariants of the e_6 algebra, there is one of degree 8 and one of degree 12, which we identify with s_2^2 and s_1^2 respectively. By a degree argument we can conclude that no other Casimir is activated. Using Table 1, we can write the deformed geometry as

  y^2 = x^3 + x s_2^2 + z^4 + s_1^2 .   (3.23)

Our educated guess for the IR twofold corresponding to the susy-enhanced theory is

  CY_2^IR :  y^2 = x^3 + s_1^2 ,   (3.24)

i.e. a surface with a Kodaira type-IV singular fiber at the origin.

E8 Minahan-Nemeschansky and E8(a2)-type orbit

We would like to conclude this section by giving an example for which our procedure guarantees the absence of IR supersymmetry enhancement. We start in the UV with the E_8 Minahan-Nemeschansky theory, i.e. with a D3-brane probing the ALE space

  y^2 = x^3 + z^5 ,   (3.25)

and consider the deformation along the E_8(a_2) orbit, whose adjoint decomposition has the two highest spins 19 and 17 (see Table 5). We identify the spin-19 and the spin-17 fields with s_1, s_2 respectively. Among the 8 independent Casimir invariants of e_8, the only ones turned on by this deformation are the one of degree 18 and the one of degree 20, which, after suitable field redefinitions, can be taken to coincide with s_2 and s_1 respectively. The geometry (3.25) will then be deformed as follows (see Table 1):

  y^2 = x^3 + s_1 x + z^5 + s_2 z^2 .   (3.27)

We now assume that this RG flow leads to supersymmetry enhancement and make the following Ansatz for the IR twofold:

  CY_2^IR :  y^2 = x^3 + s_1 x ,   (3.28)

which is an ALE space with a type-III Kodaira singularity, just like (3.20). Hence, the IR scaling dimensions are: D_IR(x) = 2/3, D_IR(y) = 1, D_IR(s_1) = 4/3. Using the homogeneity of (3.27), however, we find that the new scaling dimensions of z, s_2 are: D_IR(z) = 2/5 and D_IR(s_2) = 6/5. Therefore, while the former, the UV Coulomb-branch operator, decouples, the other extra singlet remains coupled and thus invalidates our assumption of IR supersymmetry enhancement. We can then definitely conclude that the IR theory has only N=1 supersymmetry, but the conformal dimensions we have computed are incorrect. In this case, we cannot bypass the a-maximization process to derive them.

Conclusions

In this note we have explained how the geometric realization of theories with 8 supercharges in M/F-theory can be used to understand the phenomenon of supersymmetry enhancement upon a superpotential deformation of the Maruyoshi-Song type. The key fact is that, in the stringy setup we consider, the superpotential deformation is encoded in the Weierstrass polynomial (3.3), and therefore we gain control over the entire RG flow using purely geometric techniques. In our setup the phenomenon of supersymmetry enhancement is translated into the simple geometric constraint that the background should preserve half of the supersymmetry, and thus reduce at low energy to a twofold times a flat euclidean space. This requirement precisely provides the consistency conditions which select the class of T-branes inducing supersymmetry enhancement.
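The whole criterion of this section can be condensed into a few lines of code. The sketch below assumes the linear relation D_UV = 1 + j of (3.2) and the RG invariance of relative scalings; the function and its arguments are our own packaging of the criterion, not notation from the paper.

```python
from fractions import Fraction as F

def enhances(j1, j2, D_z_uv, D_s1_ir, highest_spin_unique=True):
    """Necessary conditions (3.6) for SUSY enhancement, assuming
    D_UV(spin-j singlet) = 1 + j and RG-invariant relative scalings
    (so every D_IR is rescaled by D_IR(s1)/D_UV(s1))."""
    if not highest_spin_unique:
        return False                      # s1 and s2 could never split
    D_s1_uv, D_s2_uv = 1 + j1, 1 + j2
    scale = F(D_s1_ir) / D_s1_uv
    return D_z_uv * scale <= 1 and D_s2_uv * scale <= 1

# E8 Minahan-Nemeschansky, E8(a2) orbit: spins 19 and 17, D_UV(z) = 6,
# conjectured type-III IR twofold with D_IR(s1) = 4/3.
print(enhances(19, 17, 6, F(4, 3)))       # False: s2 stays coupled
# SU(2) w/ 4 flavors, principal so(8) orbit: spins 5 and 3, D_UV(z) = 2,
# type-II ansatz with D_IR(s1) = 6/5.
print(enhances(5, 3, 2, F(6, 5)))         # True
```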
Looking beyond the scope of this paper, our analysis potentially leads to some interesting observations regarding nilpotent T-branes. Let us give an example to explain this point: Take the H_1 theory, which flows to H_0 upon a nilpotent vev for the SU(2) adjoint. If we flip the singlet s_1, thus making it massive (by "flipping an operator O" we mean adding by hand a chiral multiplet s and turning on the superpotential term W = s O [53]), we are left at low energy with the same theory we would get by activating a constant nilpotent T-brane. The relation with the H_0 theory clearly tells us that the IR fixed point can also be reached starting from a different background (the Kodaira type-II singularity realizing the H_0 theory) and deforming the theory by flipping the Coulomb-branch operator. A relation of this type extends to several other cases of nilpotent T-branes: The low-energy theory living on the probe can be realized starting from a different geometry via a superpotential deformation. It would be interesting to investigate this aspect further and understand how general this phenomenon is. We hope to come back to this issue in the future. Also, part of our construction extends to higher-rank theories which can be engineered in M-theory by wrapping M5-branes on a Riemann surface: The models discussed in this paper are just special cases. It would be important to extend our construction to that class of theories as well, and to formulate a precise geometric criterion for supersymmetry enhancement. A similar question can be asked in the more general context of class S theories. We are currently investigating these topics.

Acknowledgments

We would like to thank A. Collinucci.

A Scan of rank-1 theories

In this Appendix we formulate a criterion for supersymmetry enhancement using arguments based on the Seiberg-Witten (SW) curve, and then show explicitly that our criterion selects precisely, for each theory, the correct nilpotent orbits. We do not discuss in detail the RG flows starting from the Argyres-Douglas theories H_1 and H_2, which have already been discussed several times in the literature. It can easily be checked that in these two cases all choices of nilpotent vev pass our test and lead to supersymmetry enhancement in the IR.

A.1 RG flow invariance of relative scalings and enhancement criterion

As we have explained in Section 3, the SW curve for the models we consider in the present paper is an elliptic curve that can be uniformly written as in (3.3),

  y^2 = x^3 + f(z, s_1, s_2) x + g(z, s_1, s_2) ,   (A.1)

where the precise forms of f and g depend on the theory and on the choice of nilpotent orbit. The curve associated with the UV theory, before turning on the mass deformation, is obtained by simply setting s_1 = s_2 = 0 in (A.1), and in this case z describes the expectation value of the Coulomb-branch (CB) operator. The explicit form of the SW differential lambda_SW is also model dependent. However, its derivative with respect to the CB operator is always equal to the unique (up to exact terms) holomorphic differential dx/y of the torus [54]. This condition follows from extended supersymmetry. We therefore have (in the UV) the relation

  d(lambda_SW)/dz = dx/y .   (A.2)

As is well known, from the curve and the differential one can extract the scaling dimensions of CB operators [55]. The key observation for us is that the relative scaling dimensions do not vary along the flow and are therefore the same in the UV and in the IR.
This can be seen, for example, by noticing that the curve (A.1) describes both the UV and the IR theory, and the curve is the only information we need to extract relative scaling dimensions. We can also provide a more direct field-theoretic argument based on symmetries, as we will now explain. The deformation can be parametrized as in (A.4) (see [2] for conventions), where the value of the mixing parameter of the trial R-symmetry can be determined via a-maximization; all the operators appearing in the deformed superpotential have R-charge two. From the RG independence of relative scaling dimensions we know that the ratio (which we denote by alpha) between the dimension of s_1 and the dimension of z is the same in the UV and in the IR, and indeed we know how to compute it in the UV:

  alpha = D_UV(s_1)/D_UV(z) .   (A.6)

As explained in Section 3, whenever supersymmetry enhancement occurs, s_1 is identified with the CB operator in the IR, and therefore (A.2) should be replaced by the analogous relation with z traded for s_1, (A.7), leading to the equation (A.9). Using now (A.6) and (A.7), we can rewrite (A.9) as (A.10), and unitarity of the IR fixed point then requires the inequalities (A.11). Using again the RG invariance of relative scaling dimensions (A.6), these inequalities can be written more explicitly as (A.12). If the choice of nilpotent orbit is not consistent with these inequalities, we conclude that supersymmetry does not enhance. In the rest of the Appendix we will check that (A.12) singles out precisely the nilpotent vevs which induce enhancement of supersymmetry in the infrared for the theories D_4, E_6, E_7 and E_8.

A.2 Flows starting from D4

In Table 2 we list all the nilpotent orbits of D_4 (the columns record the orbit O, dim_C of its closure, the decomposition of the adjoint, and whether enhancement occurs). The orbits colored in red are the ones in which the highest-dimensional irrep appears more than once, as well as those for which the singlet belonging to the highest-spin irrep has a UV conformal dimension smaller than or equal to 2 (i.e. the dimension of the CB operator). As we have seen in Section 3, for these orbits there can never be enhancement. This leaves us with a total of 5 cases to be checked by hand. Remarkably, all five cases give enhancement. Therefore, the enhancing orbits are selected simply by imposing the two criteria of uniqueness of the highest-spin singlet and of its UV dimension being greater than that of the CB operator. The undeformed Weierstrass model reads (see Table 1)

  y^2 = x^3 + tau x z^2 + z^3 .

From the homogeneity of the curve and the CY condition we can determine the dimensions of the coordinates and of the CB operator. We get D(x) = D(z) = 2 and D(y) = 3. With these values the inequalities (A.12) can be evaluated directly, and simply by looking at Table 2 we can see that they are indeed satisfied in the enhancing cases.

A.3 Flows starting from E6

The undeformed Weierstrass model in this case reads (see Table 1)

  y^2 = x^3 + z^4 .

From the homogeneity of the curve and the CY condition we can determine the dimensions of the coordinates and of the CB operator. We get D(x) = 4, D(y) = 6 and D(z) = 3. (A.17) In Table 3 we list all the nilpotent orbits of e_6. Again we color in red the orbits with the highest-spin UV dimension less than or equal to 3 (i.e. the UV dimension of the CB operator), or in which the highest-spin irrep appears more than once. As we have seen in Section 3, for these orbits there can never be enhancement. This leaves us with a total of 10 cases still to be checked. The inequalities (A.12) in this case read as (A.18). In Table 3 we color in blue all the orbits which violate (A.18), leaving just the orbits which give supersymmetry enhancement.

A.4 Flows starting from E7

The undeformed Weierstrass model in this case reads (see Table 1)

  y^2 = x^3 + x z^3 .

From the homogeneity of the curve and the CY condition we get D(x) = 6, D(y) = 9 and D(z) = 4. In Table 4 we list all the nilpotent orbits of e_7. The orbits colored in red are the ones in which the highest-dimensional irrep appears more than once.
As we have seen, for these orbits there can never be enhancement. We also color in red the orbits for which the singlet s_1 belonging to the highest-spin irrep has a UV conformal dimension smaller than or equal to that of the CB operator, which is 4. They also can never lead to enhancement, as discussed in Section 3. This leaves us with a total of 20 cases to be checked by hand, recorded in Table 4; for each enhancing orbit, the "Enhancement?" column of the tables indicates the resulting IR fixed point (e.g. "Yes, H_0 theory" or "Yes, H_2 theory").

A.5 Flows starting from E8

From the UV surface (see Table 1) we find the following assignment of scaling dimensions: D(x) = 10, D(y) = 15 and D(z) = 6. In Table 5 we list all the nilpotent orbits of e_8. Again we have colored in red all the orbits which cannot induce enhancement of supersymmetry, either because the highest-dimensional irrep appears more than once, or because the singlet belonging to the highest-spin irrep has a UV conformal dimension smaller than or equal to that of the CB operator. Since in this case the CB operator has dimension 6, the latter condition rules out all the orbits with highest spin smaller than or equal to 5. This leaves us with a total of 25 cases to be checked by hand.
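A sketch of the first pass of such a scan, which removes the "red" orbits before any case-by-case check, could look as follows; the sample orbit labels and spin lists are made up for illustration and do not reproduce Tables 2-5.

```python
def first_pass(orbits, D_z):
    """orbits: list of (label, spins sorted descending). An orbit is
    excluded if the highest spin appears more than once, or if the
    assumed relation D_UV(s1) = 1 + j1 does not exceed the CB dimension."""
    keep = []
    for label, spins in orbits:
        if spins.count(spins[0]) > 1:      # highest spin multiply populated
            continue
        if 1 + spins[0] <= D_z:            # s1 not above the CB operator
            continue
        keep.append((label, spins))
    return keep

sample = [("principal-like", [5, 3, 3, 1]),
          ("too-degenerate", [3, 3, 1]),
          ("too-low", [1, 1, 1])]
print(first_pass(sample, D_z=2))           # only "principal-like" survives
```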
Vector-Like Pairs and Brill--Noether Theory

How likely is it that there are particles in a vector-like pair of representations in the low-energy spectrum, when neither symmetry nor anomaly considerations motivate their presence? We address this question in the context of supersymmetric and geometric phase compactifications of F-theory and their Heterotic duals. Quantisation of the number of generations (or net chiralities, in more general terms) is also discussed along the way. The self-dual nature of the fourth cohomology of Calabi--Yau fourfolds is essential for the latter issue, while we employ Brill--Noether theory to set upper bounds on the number $\ell$ of vector-like pairs of chiral multiplets in the SU(5) $5+\bar{5}$ representations. For typical topological choices of geometry for F-theory compactification for SU(5) unification, the range $0 \leq \ell \lesssim 4$ for perturbative unification is not in immediate conflict with what is already understood about F-theory compactification at this moment.

Introduction

"Who ordered that?" The Standard Model of particle physics contains three generations of quarks and leptons. Particle theorists have long been wondering what can be read off from the number of generations, N_gen = 3. If the Standard Model as a low-energy effective theory is obtained as a consequence of compactification of a high-energy theory in higher-dimensional space-time, N_gen is often determined by an index theorem (or an equivalent topological formula) on some internal geometry. Historically, it was first considered to be $\chi(Z; T^*Z) = \chi_{\rm top}(Z)/2$, the Euler characteristic of the cotangent bundle of a Calabi-Yau threefold Z, in a (2,2) compactification of Heterotic string theory [1]. Its generalisation in Heterotic string (0,2) compactifications is $\chi(Z; V)$, where V is a vector bundle on Z. In Type IIB / F-theory language, N_gen is given by $\chi(\Sigma; K_\Sigma^{1/2} \otimes L) = c_1(L)$, where L is a line bundle on a holomorphic curve $\Sigma$ in a complex threefold M_int. In any one of those implementations, the fact that N_gen = 3 only means that one number characterising the topology of the compactification data happens to be 3.

The study of string phenomenology in the last three decades provides a dictionary of translation between the data of effective theory models and those of compactifications. An important question, then, is whether such a dictionary is useful. The former group of data has a direct connection with experiments, while we need to be lucky to have experimental access to the latter in the near future; this means that the dictionary may not be testable. Compactification data may still provide correlations/constraints through the dictionary among various pieces of information in the effective theory model data; that is the remaining hope. From this perspective, it is crucial which observable parameter constrains compactification data more. This letter shows, in section 2, that the value of N_gen brings virtually no constraint on the topology of the curve $\Sigma$, the threefold M_int or Z; this is due to the self-dual nature of the middle-dimensional cohomology group of Calabi-Yau fourfolds, in F-theory language. This is good news for those who seek an existence proof of appropriate compactifications, and bad news for those who seek a profound meaning in N_gen = 3. In section 3, we focus on the number of matter fields in a vector-like pair of representations, as in the title of this article.
It has often been adopted as a rule of the game in bottom-up model building that vector-like pairs of matter fields are absent unless their mass terms are forbidden by some symmetry. Papers from the string phenomenology community, on the other hand, often end up with such vector-like pairs in the low-energy spectrum; the difficulty of removing them from the spectrum is best reflected in the heroic effort the U. Penn group had to undertake to find a Heterotic compactification with just one pair of Higgs doublets. We will see, in section 3, that there is no reason to trust the bottom-up principle based on the current understanding of F-theory/Heterotic string compactification; in the meanwhile, there is a good reason to believe (cf. [5]) that generic vacua of F-theory compactification (and the Heterotic dual) will predict a smaller number of vector-like pairs than in papers (such as [2,3]) that have been written. (Footnote 2: In this article, we are concerned about vector-like pairs in string compactification that are not associated in any way with a symmetry or an anomaly (and its flow). In compactifications that have an extra U(1) symmetry (which may be broken spontaneously or at the non-perturbative level), the low-energy spectrum tends to be richer, partially due to the 6D box anomaly cancellation of U(1) (cf. [4]). This article is concerned with more conservative set-ups, where there may or may not be an extra U(1) symmetry; matter parity is enough for SUSY phenomenology.)

Brill-Noether theory sets upper bounds on the number of vector-like pairs $\ell$ for a given genus g of the relevant curve $\Sigma$; given the typical range O(10)-O(100) of g($\Sigma$) for the matter fields in the SU(5)_GUT $(5+\bar{5})$ representations, the range $0 \leq \ell \lesssim 4$ for perturbative unification is not in immediate conflict with most of the internal geometries for F-theory / Heterotic string compactifications. The discussions in section 2 and section 3 are mutually almost independent. Despite many math jargons, the logic of section 3 will be simple enough for non-experts to follow. The observations in both sections will have been known to stringpheno experts already to some extent (e.g. section 7 of [5]), but have not been written down as clearly and in as simple terms as in this article, to the knowledge of the author. So, there will be a non-zero value in writing up an article like this.

The language of supersymmetric and geometric phase F-theory compactification is used in most of the discussions in this article. Heterotic string compactification on elliptically fibred Calabi-Yau threefolds is also covered by the same discussion, due to the Heterotic-F-theory duality. It is worth noting that a large fraction of Calabi-Yau threefolds admit an elliptic fibration [6]. (Footnote 3: M-theory compactification on G_2-holonomy manifolds is not discussed here, because the author is not a big fan of it. It is difficult to obtain a realistic flavour pattern in SU(5)_GUT in that framework [7], and a solution to this problem has not been known so far. If SU(5) unification is not used as a motivation, however, almost all kinds of string vacua (including IIA, IIB, Type I and those in non-geometric phase) will be just as interesting.)

Self-dual Lattice

Let X be a compact real 2n-dimensional oriented manifold. The combination of Poincaré duality and the universal coefficient theorem implies that the middle-dimensional homology group $[H_n(X; \mathbb{Z})]_{\rm free}$ forms a self-dual lattice: the intersection pairing on it is symmetric and integer-valued, and its determinant is ±1.
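The two lattice facts used repeatedly below, unimodularity of the pairing and surjectivity of the induced map L -> M^dual, can be illustrated with a toy example; the hyperbolic-plane Gram matrix here stands in for the (much larger) middle cohomology lattice and is purely illustrative.

```python
import sympy as sp

# Toy self-dual lattice: the hyperbolic plane U, with |det| = 1.
U = sp.Matrix([[0, 1],
               [1, 0]])
assert abs(U.det()) == 1                       # self-dual (unimodular)

# Pairing against a primitive vector e spans all of Z as G ranges over U:
e = sp.Matrix([1, 0])                          # generator of M = Z*e
pairings = {(sp.Matrix([a, b]).T * U * e)[0]   # (G, e) for G = (a, b)
            for a in range(-2, 3) for b in range(-2, 3)}
print(sorted(pairings))                        # every integer in range appears
```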
F-theory Applications

Warming-up: We begin with the simplest example imaginable. Consider using the sextic fourfold $X = (6) \subset \mathbb{P}^5$ for M-theory compactification. We then have an effective theory in 2+1 dimensions. For a generic complex structure of X, algebraic two-cycles (real four-cycles) generate a rank-1 sublattice $M := \mathbb{Z}\, H^2|_X$ of $L := H^4(X; \mathbb{Z})$; the generator is $H^2|_X$, where H is the hyperplane divisor of $\mathbb{P}^5$, and $(H^2, H^2) = 6$. Let $M' := [M^\perp \subset L]$ be the orthogonal complement of M. Since the dimensions of the primary horizontal and primary vertical components of $H^{2,2}(X; \mathbb{R})$, i.e. $h^{2,2}_H(X)$ and $h^{2,2}_V(X)$, add up to $h^{2,2}(X)$ in this example, $M' \otimes \mathbb{R} \subset L \otimes \mathbb{R}$ corresponds to the primary horizontal component of $X = (6) \subset \mathbb{P}^5$. $M'$ must be a lattice of rank $(b_4(X) - 1)$ whose intersection form is given by a matrix with determinant ±6; this follows from the property of self-dual lattices stated earlier.

When a fourform is restricted to a class in $c_2(TX)/2 + M$, it is guaranteed to be purely of (2,2) Hodge component for any complex structure of the sextic fourfold. Its integral over the algebraic cycle $H^2|_X$ can then take a value only in $(H^2|_X, c_2(TX)/2) + 6\mathbb{Z}$: the value is quantised in units of 6, and cannot be 0, 1, 2, 4 or 5 modulo 6. When we allow the flux G to lie anywhere in $c_2(TX)/2 + L$ [9], however, the self-dual nature of the lattice $L = H^4(X; \mathbb{Z})$ indicates that the integral $\int_{H^2|_X} G = (H^2|_X, G)$ can take any integer value. Such a flux G is not purely of (2,2) Hodge component for an arbitrary complex structure of X, but the Gukov-Vafa-Witten superpotential drives the complex structure of X to an F-term minimum, where the (1,3) and (3,1) Hodge components of the flux G vanish (see also a comment later).

SU(5) GUT models: Let us consider F-theory compactification on a fourfold $X_4$ so that there is a stack of 7-branes along a divisor S in $B_3$. This means that there is an elliptic fibration $\pi : X_4 \to B_3$, there is a section $\sigma : B_3 \to X_4$, and $X_4$ has a locus of codimension-2 $A_4$ singularity in $\pi^{-1}(S)$. Let $\hat{X}_4$ be a non-singular Calabi-Yau fourfold obtained by resolving the singularities of $X_4$ (see [10,16] for the conditions to impose on $\hat{X}_4$). For concreteness of presentation, we choose the base threefold to be a $\mathbb{P}^1$-fibration over $\mathbb{P}^2$; nine algebraic four-cycles then generate a rank-9 sublattice $M_{\rm vert}$ of the self-dual lattice $L = H^4(\hat{X}_4; \mathbb{Z})$. In the basis of those 9 cycles, the intersection form is a 9x9 matrix whose determinant is discr($M_{\rm vert}$) = (3 + n)(18 + n), which does not vanish in the range -3 < n < 3 of our interest. It is not obvious whether the lattice $M_{\rm vert}$ generated by the nine elements above is a primitive sublattice of L; since L is not necessarily an even lattice, we have a limited set of tools to address this question. When it is not, however, we just have to replace the nine generators appropriately, so that $M_{\rm vert}$ becomes a primitive sublattice of L. The arguments in the following need to be modified accordingly, but not in an essential way; discr($M_{\rm vert}$) may not be the same as (3 + n)(18 + n) after the replacement, but the sublattice $M_{\rm vert}$ still remains non-degenerate.

Let $M'$ be the orthogonal complement, $[M^\perp \subset L]$, in the lattice L. In the examples considered here, $M'$ corresponds to the horizontal component, $M_{\rm horz}$, because $M \otimes \mathbb{Q} = M_{\rm vert} \otimes \mathbb{Q}$ and the non-vertical non-horizontal component is empty [5]. The quotient $L/(M \oplus M')$ is a finite group isomorphic to $M^\vee/M = M^\vee_{\rm vert}/M_{\rm vert}$.
For a flux G to preserve the SO(3,1) and SU(5) symmetry, it has to satisfy $(G, x) = 0$ for each of the eight generators listed in (10) [11]. When we choose a fourform flux G from $c_2(T\hat{X}_4)/2 + M$, the conditions above leave a single one-parameter family, (11), as the only possible choice. This flux is always of pure (2,2) Hodge component for any complex structure of $\hat{X}_4$, and hence defines a supersymmetric vacuum. This is the flux constructed in [12]; see [13,14,15,16]. Within this class of choices of the fourform flux, the number of generations is quantised as follows [17]:

  $N_{\rm gen} = \lambda_{\rm FMW} (3 + n)(18 + n)$ ;

although $\lambda_{\rm FMW}$ can change its value by ±1, $N_{\rm gen}$ cannot change by ±1. This would serve as a tight constraint in the search for a geometry with the "right topology" for the real world; the value of $|\lambda_{\rm FMW} (3 + n)(18 + n)|$ would never be as small as 3 for the choice of $(B_3, S)$ we made here.

In fact, we do not have to choose the flux from $c_2(T\hat{X}_4)/2 + M$. The condition of [9] does not rule out a choice of flux from the broader class $c_2(T\hat{X}_4)/2 + L$. Because of the self-dual nature of L, the homomorphism $L \to M^\vee$ is surjective. This means that we can change the flux by any $\Delta G \in L$ whose image in $M^\vee$ is anything one likes. In particular, there exists a change $\Delta G \in L$ so that $(\Delta G, x) = 0$ for all the eight generators in (10), while $N_{\rm gen}$ is changed by $(\Delta G, E_2 \cdot E_4) = \pm 1$. Therefore, the flux G can be chosen within $c_2(T\hat{X}_4)/2 + L$ so that $N_{\rm gen} = 3$, and the SO(3,1) and SU(5) symmetry is preserved. Certainly such a flux is not purely of (2,2) Hodge component for a generic complex structure of $\hat{X}_4$, but the complex structure of $\hat{X}_4$ is driven to an F-term minimum of the Gukov-Vafa-Witten superpotential, where the (1,3) + (3,1) Hodge component of the flux is absent automatically, and the moduli are stabilised (a cautionary remark follows shortly, however).

To put it from a slightly different perspective, the surjectivity of the homomorphism $L \to M^\vee$ means that we can choose the $M^\vee \subset M \otimes \mathbb{Q}$ component of the flux in $L \otimes \mathbb{Q}$ arbitrarily, to suit the needs of phenomenology (such as symmetry preservation and choosing $N_{\rm gen}$); this is, in effect, to relax the condition $\lambda_{\rm FMW} \in (1/2) + \mathbb{Z}$ and allow the overall coefficient (denoted $\lambda$ instead of $\lambda_{\rm FMW}$) to take any value in $[1/(3 + n)(18 + n)] \times \mathbb{Z}$. Once the $M^\vee$ component is chosen, one can always find some element in $(M')^\vee$ so that their sum fits within $L \subset M^\vee \oplus (M')^\vee$. Depending on the phenomenological input, such as $N_{\rm gen} = 3$, we may not be able to choose the flux so that the $(M')^\vee$ component vanishes, but that is an advantage rather than a problem, since the complex structure moduli of $\hat{X}_4$ tend to be stabilised then. One can see that the $M^\vee$-component of the flux, (11) with a relaxed quantisation in $\lambda$, satisfies the primitiveness condition $J \wedge G = (t_S S + t_{\mathbb{P}^2} H_{\mathbb{P}^2}) \cdot G = 0$, where J is the Kähler form on $B_3$. This is enough to conclude that the primitiveness condition is satisfied, because the non-vertical component does not contribute to $J \wedge G$.

A cautionary remark is in order here. First, the $(M')^\vee = M^\vee_{\rm horz}$ component of the flux, $G_{\rm horz}$, needs to be chosen so that $(G_{\rm horz})^2 > 0$, or otherwise there is no chance of finding a supersymmetric vacuum. This condition is not hard to satisfy, because we can change $G_{\rm horz}$ freely by elements of $M_{\rm horz}$ without changing the value of $N_{\rm gen}$ or breaking the SO(3,1) and SU(5) symmetry, and the lattice $M_{\rm horz}$ is not negative definite.
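A minimal numerical restatement of the argument, assuming the quantisation $N_{\rm gen} = \lambda (3+n)(18+n)$ discussed above: with $\lambda$ restricted to half-integers, $N_{\rm gen}$ is locked to multiples of (3+n)(18+n), while the relaxed quantisation allowed by the self-dual lattice makes any integer available. The function name and the sampled ranges are of course just for this sketch.

```python
from fractions import Fraction as F

def fmw_generations(n, lambdas):
    """N_gen = lambda * (3+n) * (18+n) for each allowed lambda."""
    unit = (3 + n) * (18 + n)
    return sorted({int(lam * unit) for lam in lambdas})

n = 0
half_int = [F(2 * k + 1, 2) for k in range(-3, 3)]          # lambda in 1/2 + Z
print(fmw_generations(n, half_int))    # multiples of 27: |N_gen| = 3 impossible

relaxed = [F(k, (3 + n) * (18 + n)) for k in range(-3, 4)]  # lambda in Z/54
print(fmw_generations(n, relaxed))     # [-3, -2, -1, 0, 1, 2, 3]
```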
An open question is, for a given $[G_{\rm horz}] \in M^\vee_{\rm horz}/M_{\rm horz}$, how one can find out whether there is a choice of Hodge structure of $\hat{X}_4$ such that there exists $G_{\rm horz} \in M^\vee_{\rm horz}$ with vanishing negative-definite component; note that a choice of Hodge structure introduces a decomposition of $M_{\rm horz} \otimes \mathbb{R}$ into $(2h^{4,0} + h^{2,2}_H)$-dimensional positive-definite directions and $2h^{3,1}$-dimensional negative-definite directions. (Footnote 7: Even when such a $G_{\rm horz} \in M^\vee_{\rm horz}$ and an appropriate Hodge structure are present, too large a positive value of $(G_{\rm horz})^2$ would violate the D3-tadpole condition. So, this is another physics condition to be imposed.) Due to the absence of a convenient Torelli theorem for general Calabi-Yau fourfolds, the author does not have a good idea how to address this problem.

Generalisation: The argument above can be used in set-ups where more phenomenological requirements are implemented. One can impose an extra U(1) symmetry (for a spontaneous R-parity violation scenario instead of $\mathbb{Z}_2$ parity), and a flux for SU(5) -> SU(3)_C x SU(2)_L x U(1)_Y symmetry breaking can be introduced in the non-vertical non-horizontal component of $H^4(X_4)$ [19]. One just has to take the lattice $M \subset L = H^4(X_4; \mathbb{Z})$ so that it contains all the cycles relevant to the symmetry (symmetry breaking) and the net chiralities of the various matter representations in the low-energy spectrum. The self-dual nature of $H^4(X_4; \mathbb{Z})$ is the only essential ingredient in the argument above, and hence the same argument applies to more general cases. (Footnote 8: The algebraic cycles S to be used in $\chi = \int_S G$ to determine net chiralities need to be primitive elements of the primitive sublattice $M \subset L$ for the argument to apply. If some cycle S were an integer multiple $mS'$ of another topological cycle, then the net chirality on S would always be divisible by m, no matter how we choose a flux. The Madrid quiver [18], fractional D3-branes at a $\mathbb{C}^3/\mathbb{Z}_3$ singularity, is the best known example of that kind; the matter curve there is effectively the canonical divisor of the vanishing cycle.)

Heterotic Dual

The same story should hold true when the argument above in F-theory language is translated into the language of Heterotic string. $N_{\rm gen}$ can be chosen as we want it to be, by choosing the value of $\lambda_{\rm FMW}$ characterising the vector bundle for Heterotic compactification not necessarily in $(1/2) + \mathbb{Z}$. Supersymmetry can still be preserved, presumably by choosing the complex structure of the Calabi-Yau threefold Z and the vector bundle moduli appropriately and by introducing a threeform flux and non-Kählerity of the metric on Z. It is hard to verify this statement directly in the Heterotic string language, but it must be true if we believe that there is a one-to-one dual correspondence (even at the level of flux compactification) between elliptically fibred Calabi-Yau threefold compactifications of Heterotic string and elliptically fibred K3-fibred Calabi-Yau fourfold compactifications of F-theory.

Number of Vector-Like Pair Multiplets

We often encounter, in supersymmetric string compactifications with SU(5) GUT unification, that there are multiple pairs of chiral multiplets in the SU(5)_GUT $5 + \bar{5}$ representations left in the low-energy spectrum, and no perturbation of the moduli can provide large masses to those vector-like multiplets. A good example is the one in [2], where the low-energy spectrum has $34 + N'$ chiral multiplets in the 5 representation and $34 + N' + N_{\rm gen}$ of those in the $\bar{5}$ representation.
(Footnote 10: The $N' > 0$ copies of chiral multiplets in the $5 + \bar{5}$ representations have a $\Delta W = \phi \cdot \bar{5} \cdot 5$ coupling with moduli fields $\phi$, but the 34 other vector-like pairs remain in the low-energy spectrum (at least without supersymmetry breaking) in the example studied in [2].) It is likely that those 34 vector-like pairs have nothing to do with any symmetry in the 4D effective theory. Symmetry has been one of the most important guiding principles in bottom-up effective theory model building for more than three decades. It has often been assumed in model-building papers that matter fields in a vector-like pair of representations are absent in the low-energy spectrum, unless their mass terms are forbidden by some symmetry principle. Does the bottom-up guiding principle overlook something in string theory, or is there something yet to be understood in string phenomenology?

This guiding principle in bottom-up model building corresponds to the following statement in mathematics. Let us first note that the numbers of SU(5)_GUT 5 and $\bar{5}$ chiral multiplets are given by

  $h^0(\Sigma; \mathcal{O}(D))$  and  $h^1(\Sigma; \mathcal{O}(D))$ ,

respectively, for some holomorphic curve $\Sigma$ and a line bundle $\mathcal{O}(D)$ on $\Sigma$, quite often in supersymmetric and geometric phase compactifications of F-theory for SU(5) unification models [12,17,2,20,14,15]. We assume that the flux (i.e., $\mathcal{O}(D)$) is chosen to realise the appropriate net chirality (cf. the discussion in section 2). For a generic pair of a complex structure $\tau \in \mathcal{M}_g$ of $\Sigma$ and a line bundle $\mathcal{O}(D) \in {\rm Pic}^{\chi+g-1}(\Sigma_g)$, the number $\ell$ of vector-like pairs vanishes. Thus, this general statement in math is in line with the bottom-up principle. The gap between the bottom-up guiding principle and the predictions of multiple vector-like pairs as in [2,3] must be due to the non-genericity of the complex structure of the holomorphic curve, of the flux configuration, or of both, in the math moduli spaces $\mathcal{M}_g$ and ${\rm Pic}^{\chi+g-1}(\Sigma_g)$.

Most papers on spectrum computation in F-theory or Heterotic string compactification so far employed the flux (11) or something similar. With a more general type of flux configuration (as discussed in section 2), however, more general elements $\mathcal{O}(D) \in {\rm Pic}^{\chi+g-1}(\Sigma_g)$ can be realised than, for example, in [2,3]. A smaller number of vector-like pairs may then be predicted in F-theory and elliptically fibred Heterotic string compactifications ([5]). The question is how general $\tau \in \mathcal{M}_g$ and $\mathcal{O}(D) \in {\rm Pic}^{\chi+g-1}(\Sigma)$ can be in such string compactifications.

It is easy to see that the complex structure of the holomorphic curve $\Sigma$ for the $5 + \bar{5}$ matter cannot be fully generic. Let us take the example (5) for illustration purposes. The genus g of $\Sigma$ is given by [21,15]

  $2g - 2 = (3n + 24)(3n + 21) - 2(3 + n)(9 + n) = 7n^2 + 111n + 450$ ,   (16)

and the dimension of $\mathcal{M}_g$ is $3g - 3$. On the other hand, the defining equation of the curve $\Sigma$ involves

  $\binom{5+n}{2} + \binom{8+n}{2} + \binom{11+n}{2} + \binom{14+n}{2} + \binom{21+n}{2} - 9 = \frac{5n^2 + 113n + 770}{2}$   (17)

complex parameters; the first five terms correspond to $h^0(\mathbb{P}^2; L)$ for the line bundles $L = \mathcal{O}(3+n)$, $\mathcal{O}(6+n)$, $\mathcal{O}(9+n)$, $\mathcal{O}(12+n)$ and $\mathcal{O}(18+n)$, and the last term accounts for the isometry of $\mathbb{P}^2$ and the overall scaling of the defining equation. The freedom (17) available for the complex structure of $\Sigma$ in F-theory compactification remains smaller than the $3g - 3$ dimensions of the math moduli space $\mathcal{M}_g$ as long as $-3 \leq n$, the range which allows SU(5)_GUT models. The condition (a) necessary for the general math statement $\ell = 0$ (and the absence of vector-like pairs) is therefore not satisfied in string compactifications. (Footnote 12: An intuitive (but not rigorous) alternative explanation is this. In Heterotic string, with a gauge field background in SU(5)_str (which breaks the $E_8$ symmetry down to SU(5)_GUT = [SU(5)$^\perp_{\rm str}$ $\subset E_8$]), the $\bar{5}$_GUT matter fields are determined by the Dirac equation in the $10 = \wedge^2 5_{\rm str}$ representation of SU(5)_str. Despite the 10 components participating in this Dirac equation, the structure group remains SU(5)_str, not SU(10).) We will also find more direct evidence for this in footnote 14.
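The dimension count of (16) and (17) is easy to automate; the sketch below tabulates the gap between dim $\mathcal{M}_g = 3g - 3$ and the number of available defining-equation parameters over the range of n, confirming that the physically realised subspace is a proper one.

```python
import sympy as sp

n = sp.symbols("n")

g = sp.Rational(1, 2) * (7 * n**2 + 111 * n + 450) + 1      # genus from (16)
moduli_dim = 3 * g - 3                                      # dim M_g
params = sp.Rational(1, 2) * (5 * n**2 + 113 * n + 770)     # parameters, (17)

for nv in range(-3, 4):
    gap = (moduli_dim - params).subs(n, nv)
    print(nv, gap)   # positive throughout -3 <= n <= 3: a proper subspace
```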
To summarise, predictions of multiple vector-like pairs in string compactifications, such as those in [2,3], do not have to be taken at face value, because only purely vertical flux was considered in those works; a more generic choice (one that involves horizontal components) would predict a smaller number of vector-like pairs. But the bottom-up guiding principle does not have to be trusted too seriously either, because the holomorphic curve Σ for the SU(5)_GUT 5 + 5̄ matter fields is not expected to have a generic complex structure.

Brill-Noether theory [22] (footnote 13) tells us a little more than the general math statement quoted above. Let Σ be a genus-g curve and O(D) a line bundle on Σ whose degree is d = χ + g − 1. First of all, ℓ = 0 whenever d < 0. When 0 ≤ d ≤ g − 1, there are a soft upper bound and a hard upper bound. Clifford's theorem provides the hard upper bound,

  ℓ ≤ d/2 + 1,   (19)

which holds for any complex structure of a smooth curve Σ. When the complex structure of Σ is generic, there is a stronger (soft) upper bound (footnote 14),

  ℓ(ℓ − χ) ≤ g,   (20)

because the Brill-Noether number ρ := g − ℓ(ℓ − χ) becomes negative for ℓ beyond this upper bound.

Footnote 12: An intuitive (but not rigorous) alternative explanation is this. In the Heterotic string, with a gauge field background in SU(5)_str (which breaks the E_8 symmetry down to SU(5)_GUT = [SU(5)^⊥_str ⊂ E_8]), the 5̄ GUT matter fields are determined by the Dirac equation in the 10 = ∧^2 5_str representation of SU(5)_str. Despite the 10 components participating in this Dirac equation, the structure group remains SU(5)_str, not SU(10).

Footnote 13: The phenomenon that the values of ℓ and (ℓ − χ) jump up and down over the math moduli spaces M_g and Pic^{χ+g−1} is a math translation of the coupling ∆W = z · 5 · 5̄. The remaining question, partially discussed above with (17) vs. (3g − 3), is how much of the math moduli space is covered by the physical moduli space (fields) of a compactification. In other words, it is to study z(φ, G), where φ denotes the physical moduli and G the flux.

Due to Serre duality, it is enough to focus on the cases with d ≤ g − 1. In the case of SU(5)_GUT 10 + 10̄ matter fields, string compactification often ends up with g ≤ −χ = N_gen = 3 (though not always), and hence the d < 0 case applies; vector-like pairs of 10 + 10̄ are then absent. In the case of SU(5)_GUT 5 + 5̄ matter fields, however, g often takes a much larger value (as in the example (16)), and hence the ℓ = 0 result does not apply. Typical values of g are listed in Table 1. For such large values of g, d = χ + g − 1 is close to g − 1 for χ = −N_gen = −3 or χ = 0. For those g and d, the upper bounds (19, 20) pose no conflict with vector-like pairs in the range 0 ≤ ℓ ≤ 4 relevant for perturbative gauge coupling unification (footnote 15). It requires a much more dedicated study to go beyond this. One could try to characterise the physically realised subspace of M_g (the one with the dimension given in (17)), or to work out the image of not-necessarily-purely-vertical fluxes mapped into Pic^{χ+g−1}(Σ); the cautionary remark on page 6 also needs to be taken care of along the way. These tasks are well beyond the scope of this article, however. It is also worth studying how the discussion in this article needs to be modified when a spontaneous R-parity violation scenario is at work (where an off-diagonal 4D scalar field breaks a U(1) symmetry to absorb a non-zero Fayet-Iliopoulos parameter; cf. section 5 of [12] and [25,26,27,28,29,30]).

Footnote 14: This upper bound is not always satisfied (hence it is a soft upper bound) when the complex structure of Σ is somewhat special.
A good example is found in [3]. There, a flux is chosen as in (11), including the quantisation condition on λ_FMW, so that χ = −N_gen = −17. In addition to this net chirality in the SU(5)_GUT 5 + 5̄ sector, non-removable ℓ = 11 vector-like pairs are predicted in that example. In this case g = 174, and hence d = 156. The hard upper bound ℓ ≤ d/2 + 1 = 79 is satisfied, but the stronger upper bound for a Σ with generic complex structure, ℓ ≤ 7.15, is not. So this computation is direct evidence that the curve Σ for the 5 + 5̄ matter in F-theory does have a special complex structure (even when the complex structure of X_4 is chosen completely generically). The dimension-counting argument using (17) is thus not the only evidence for the non-genericity of τ ∈ M_g. It should be possible to carry out a similar study for the examples in [23].

Footnote 15: The H^{2,1} moduli of F-theory compactifications (and presumably also those of their Heterotic duals) do not receive large supersymmetric mass terms from the Gukov-Vafa-Witten superpotential, and are likely to change O(D) = K_Σ^{1/2} ⊗ L within Pic^{χ+g−1}(Σ_g). They are therefore good candidates for a singlet field S that has a coupling ∆W = S · 5 · 5̄; some of the H^{3,1} moduli may also remain unstabilised supersymmetrically (i.e., remain in the low-energy spectrum) and play the same role. There is nothing new in this observation, but there is some value in leaving such a footnote in this article as a reminder.
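Both bounds are elementary to evaluate. The sketch below is a minimal illustration assuming only the formulas quoted above: it computes the hard Clifford bound (19) and the largest integer ℓ compatible with the generic-curve bound (20) for the values g = 174 and χ = −17 of footnote 14.

```python
# Check of the two upper bounds on ell = h^0(Sigma, O(D)), d = chi + g - 1,
# for the example of footnote 14 (g = 174, chi = -17, ell = 11 in [3]).

def clifford_bound(d: int) -> float:
    """Hard upper bound ell <= d/2 + 1 (Clifford), any smooth curve."""
    return d / 2 + 1

def brill_noether_bound(g: int, chi: int) -> int:
    """Largest integer ell with non-negative Brill-Noether number
    rho = g - ell * (ell - chi); applies to a generic curve."""
    ell = 0
    while (ell + 1) * (ell + 1 - chi) <= g:
        ell += 1
    return ell

g, chi = 174, -17
d = chi + g - 1                                      # degree of O(D); 156
print("hard bound :", clifford_bound(d))             # 79.0
print("soft bound :", brill_noether_bound(g, chi))   # 7
# ell = 11 in [3] respects the hard bound but exceeds the soft one, so the
# matter curve there cannot have a generic complex structure.
```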
6,581.4
2016-07-31T00:00:00.000
[ "Physics" ]
MIRIAM Resources: tools to generate and resolve robust cross-references in Systems Biology

Background
The Minimal Information Requested In the Annotation of biochemical Models (MIRIAM) is a set of guidelines for the annotation and curation processes of computational models, intended to facilitate their exchange and reuse. An important part of the standard consists in the controlled annotation of model components, based on Uniform Resource Identifiers. In order to enable interoperability of this annotation, the community has to agree on a set of standard URIs, corresponding to recognised data types. MIRIAM Resources are being developed to support the use of those URIs.

Results
MIRIAM Resources are a set of on-line services created to catalogue data types, their URIs and the corresponding physical URLs (or resources), whether the data types are controlled vocabularies or primary data resources. MIRIAM Resources are composed of several components: MIRIAM Database stores the information, MIRIAM Web Services allows programmatic access to the database, MIRIAM Library provides access to the Web Services, and MIRIAM Web Application is a way to access the data (human browsing) and also to edit or add entries.

Conclusions
The MIRIAM Resources project allows easy access to MIRIAM URIs and the associated information and is therefore crucial to foster a general use of MIRIAM annotations in computational models of biological processes.

Background
Computational Systems Biology relies on developing large quantitative models of biological processes. Because of their size and complexity, those models need to be exchanged and reused, rather than rewritten. Standard formats have been created by the community to encode Systems Biology models, such as SBML [1], CellML [2] or BioPAX [3]. However, the fact that a model is syntactically correct does not ensure its semantic accuracy. Moreover, because of thematic or personal preferences, the terminology used to name model components varies widely. The community therefore had to define a set of guidelines to improve the quality of models intended to be exchanged. The Minimal Information Requested In the Annotation of biochemical Models (MIRIAM) [4] fulfils this need by providing a standard for the annotation and curation of biochemical models. MIRIAM is a project of the international initiative BioModels.net [5], whose aims are multiple: to define agreed-upon standards for model curation, to define agreed-upon vocabularies for annotating models with connections to biological data resources, and to provide free access to published, peer-reviewed, annotated, computational models. Other projects of this initiative include BioModels Database [6], a free, centralised database of curated, published, quantitative kinetic models of biochemical and cellular systems, and the Systems Biology Ontology (SBO) [7]. All these projects together support the exchange and reuse of quantitative models. MIRIAM originates from the specific requirement to facilitate the exchange of kinetic models between databases, standards and software, as witnessed by the original authors, involved in BioModels Database, CellML, COPASI, DOCQS, JWS Online, MathSBML, RegulonDB, SBML, SBMLmerge, SBW and SigPath. The support of MIRIAM in the community has been growing steadily since its release, as witnessed by the growing number of citations, the recognition in community surveys [8] and the incorporation of MIRIAM annotations in widely used standard formats such as SBML [9].
Because quantitative modelling is only one facet of modern integrative biology, MIRIAM has now joined the Minimum Information for Biological and Biomedical Investigations (MIBBI), a broader effort to enhance cooperation between guidelines in the life sciences [10]. An important part of the MIRIAM requirements consists in the controlled annotation of model components, based on Uniform Resource Identifiers (URI) [11]. To summarise, all the components of a model need to be unambiguously identified in a perennial and standard way. This annotation should be consistent across all the data types used to annotate a model. MIRIAM URIs have been developed for this purpose. In this article we present the URI scheme used by MIRIAM annotations and the resources we have developed to support their usage by modellers and model users. Although these resources have been developed with the annotation of quantitative models in mind, they can be used as a generic resolving system for resources in biology.

MIRIAM URIs
An identifier is a single unambiguous string, label or name that references or identifies an entity or object (which can be a publication, a database, a protein, a gene, etc.). The scientific community needs unique and perennial identifiers [12] to reliably describe, define or exchange objects, and therefore to construct an integrated and fundamentally interoperable "bioinformatics world" [13]. An object identifier must be:
• Unique: an identifier must never be assigned to two different objects;
• Perennial: the identifier is constant and its lifetime is permanent;
• Standards compliant: it must conform to existing standards, such as URI;
• Resolvable: it must be possible to transform the identifier into the locations of on-line resources storing the object or information about the object;
• Free to use: everybody should be able to use and create identifiers, freely and at no cost.

In addition, an ideal identifier should be semantic-free, in the sense that it should not contain the information it is pointing to. A possible exception often mentioned is the InChI [14], although this is debated. In particular, InChIs are not unique: several objects can have the same InChI, for example cis- and trans-platin [15]. The precise form of an InChI beyond the basic connectivity and stereochemistry layers depends on some parameters, and different InChIs can be generated for the same compound. Finally, InChIs cannot be generated for some classes of compounds, for instance polymers. Because of the perenniality requirement, one cannot use physical addresses, such as URLs [16] corresponding to physical documents, to reference pieces of knowledge. The use of numerical identifiers by themselves cannot be sufficient either: "9606" represents Homo sapiens in the taxonomy databases, but a German article on social services in PubMed. Such dataset identifiers acquire a meaning only within the context of a data type (generally, but not always, a given data resource). Some catalogues of data types in the life sciences have been developed, recording the usual acronyms, such as the Gene Ontology database abbreviations [17]. However, the non-uniqueness of these acronyms makes them hardly usable. For instance, CGD is the acronym of the Candida Genome Database, but also of the Cattle Genome Database. One approach to overcome this problem is to use unambiguous URIs [11] instead.
This approach has been successfully used, for instance, by the publishing industry with the Digital Object Identifier (DOI) [18,19] or by the astronomical community with the International Virtual Observatory Alliance (IVOA) Identifiers [20]. DOIs have not been used widely in the scientific community because of the mandatory registration and their cost. Other generic systems of URI construction have been proposed, such as the BioPAX URIs [21] or the PURL-based Object Identifier [22] (based on the Persistent Uniform Resource Locator (PURL) [23] and Open Archive Initiative Identifiers [24]), but their structure does not avoid the problems enumerated above. The closest effort to what is needed to annotate quantitative models are the Life Sciences Identifiers (LSID) [25,26]; as a matter of fact, LSIDs are valid MIRIAM URIs.

MIRIAM URIs are identifiers based on URIs that uniquely refer to data entities. For more flexibility, they can follow two syntaxes: Uniform Resource Locator (URL) [16], like a common physical address on the Web, or Uniform Resource Name (URN) [27], like LSID. MIRIAM URIs are identifiers as described previously, so they are unique, persistent, resolvable and freely usable. Moreover, they are case-sensitive, since URIs are. It is important to notice that, even when they comply with the URL scheme, they do not describe a physical resource, and several physical documents can present the information identified by one MIRIAM URI. Nevertheless, these physical locations can be retrieved by a resolution service described below. This feature is not unique: DOIs, PURLs and LSIDs, for example, can be resolved through dedicated services.

MIRIAM URIs are composed of two parts. First comes the URI of the data type, which is a unique, controlled description of the type of the data. For example, if the entity to annotate is a protein sequence, the data type could be UniProt. If the entity is an enzymatic activity, the data type could be the Enzyme Nomenclature of the International Union of Biochemistry and Molecular Biology, etc. The second part of the URI is the element identifier, which identifies a specific piece of knowledge within the context of the data type. As a result, a MIRIAM URI looks like: <URI of the data type> # <identifier of the element>, summarised as <Authority> # <ID>. For example, in order to identify the publication describing MIRIAM, we can use: http://www.pubmed.gov/#16381840. Note that the "hash" is only necessary in the URL scheme, not in the URN one.

In order to enable interoperability of this annotation, the community has to agree on a set of recognised data types. MIRIAM Resources are an online service created to catalogue the data types, their URIs and the corresponding physical URLs or resources, whether these are controlled vocabularies or databases. Anybody can propose new data types, which are included if they fulfil the necessary requirements of stability and openness and provide suitable identifiers and programmatic access. It is important to understand that MIRIAM data types do not represent kinds of biological information. They represent a standardised identification scheme for a type of biological information associated with a set of resources using the same set of identifiers. In some cases, different players in a domain have agreed to unify access to the data, as PIR, SwissProt and TrEMBL did with UniProt for protein sequences. In such a case, the data type corresponds mostly to the type of biological information.
In other cases, several MIRIAM data types represent the same type of biological information presented independently by different resources. This is the case for chemical compounds, for instance, for which MIRIAM uses ChEBI, KEGG Compound, and PubChem Substance and Compound. See Figure 1 for a subset of the data types listed in MIRIAM Database. MIRIAM Resources is therefore not designed to handle multiple aliases used to refer to the same biological information stored in different data resources under different identifiers. Other resources and tools already exist for that kind of purpose, such as AliasServer [28], Sequence Globally Unique Identifiers (SEGUID) [29] or the International Protein Index (IPI) [30] for protein sequences. Moreover, MIRIAM data types do not belong to anybody, and in particular not to the corresponding data providers.

MIRIAM Resources is an open project, whether regarding its source code, the data stored or its access. It is divided into four components (Figure 2):
• MIRIAM Database: core element of the resource, storing all the information about the data types and their associated information;
• MIRIAM Web Services: SOAP-based application programming interface (API) for querying MIRIAM Database;
• MIRIAM Library: library to use MIRIAM Web Services;
• MIRIAM Web Application: interactive Web interface for browsing and querying MIRIAM Database, and also for submitting or editing data types.

All these components have been developed using the UTF-8 character encoding in order to allow the storage and display of international data. The usage of existing standards, where appropriate, has been preferred to enhance interoperability. Moreover, the project has been designed to allow its evolution and improvement, by including new data types in the database or by adding new methods to the Web Services.

MIRIAM Database
The core element of the resource is a relational database, using a MySQL database management system. The central elements are the data types. For each data type, the following information is stored:
• identifier: internal stable and perennial identifier.
• name: expression commonly (and in general "officially") used to identify the data type.
• synonyms: synonym(s) of the name (used for instance to store the expanded version of an acronym).
• definition: short description of the data type and the associated resources.
• identifier pattern: regular expression of the identifiers used by this data type.
• official URL: URI used to identify the data type, following the Uniform Resource Locator syntax.
• official URN: URI used to identify the data type, following the Uniform Resource Name syntax.
• deprecated URIs: deprecated versions of the URIs (which can be URLs or URNs).
• resources: online data resources which provide datasets corresponding to the data type. For each resource:
  - identifier: internal stable and perennial identifier.
  - data entry: physical address used to access a particular element stored by the data type.
  - data resource: physical link to the main page of the resource.
  - information: information about the resource.
  - institution: name of the institution managing the resource.
  - country: location of the institution managing the resource.
• documentation: link towards pieces of documentation about the data type.

The first items represent general information. They are all mandatory, except the synonyms. The identifier is automatically generated during the submission process. It is perennial and stable, and varies from MIR:00000001 to MIR:00099999.
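As a concrete illustration of the identifier machinery just described, the short sketch below composes and splits MIRIAM URIs of the URL form and classifies the internal MIR identifiers by their numerical range. The helper names are hypothetical and not part of any MIRIAM distribution; only the URI layout and the MIR number ranges come from the text above.

```python
# Hypothetical helpers illustrating the MIRIAM identifier scheme:
# a MIRIAM URI of the URL form is <data type URI>#<element identifier>;
# internal identifiers use MIR:00000001-MIR:00099999 for data types
# and MIR:00100001-MIR:00199999 for resources.
import re

def build_miriam_uri(data_type_uri: str, element_id: str) -> str:
    """Join the data-type URI (the 'Authority') and the element ID."""
    return f"{data_type_uri}#{element_id}"

def split_miriam_uri(uri: str) -> tuple:
    """Recover (data-type URI, element ID); the hash separates the two."""
    authority, _, element_id = uri.rpartition("#")
    return authority, element_id

_MIR_PATTERN = re.compile(r"MIR:(\d{8})$")

def classify_mir_id(identifier: str) -> str:
    """Tell whether an internal MIR identifier names a data type or a resource."""
    match = _MIR_PATTERN.match(identifier)
    if match is None:
        return "invalid"
    number = int(match.group(1))
    if 1 <= number <= 99999:
        return "data type"
    if 100001 <= number <= 199999:
        return "resource"
    return "out of range"

uri = build_miriam_uri("http://www.pubmed.gov/", "16381840")
print(uri)                              # http://www.pubmed.gov/#16381840
print(split_miriam_uri(uri))            # ('http://www.pubmed.gov/', '16381840')
print(classify_mir_id("MIR:00000008"))  # data type
print(classify_mir_id("MIR:00100005"))  # resource
```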
An example of a MIRIAM Database entry is shown in Figure 3. At least one official URI (whether a URL or a URN) needs to be provided for each data type. No more than one official URL and one official URN can be provided for a given data type. It may happen, although it should be rare, that data types merge or that a URI needs to be changed for various reasons. MIRIAM URIs are unique and persistent. Accordingly, the root URI defining the data type must also be unique and persistent: it cannot be deleted, only deprecated. Deprecated URIs are stored to allow backward compatibility with models annotated using old identifiers, so that their annotation does not need to be rewritten. It is important to notice that the URI used to describe a data type is not a valid physical address. It is only an identifier, and it should not be used to try to access a dataset on the Internet, whether using a Web browser or Web Services. If it happens to also be a valid physical address, that physical resource should be disregarded for MIRIAM purposes.

Figure 1: MIRIAM Database browser.

A resource is a service providing datasets corresponding to a data type. It could be a database accessible online through a Web-based interface, a series of datasets available through FTP, etc. Several resources may exist for a given data type. Some are pure mirrors, but others may provide different datasets, or datasets with slightly different metadata. A data type is always linked to at least one resource. Each resource is described by a stable and perennial identifier, which varies from MIR:00100001 to MIR:00199999. Documentation about a data type can be added as a full physical address (URL) or just as a MIRIAM URI (example: pubmed.gov/#16333295). The second choice is favoured, to avoid any problem of resources becoming unreachable in the future (it relies only on MIRIAM Resources).

MIRIAM Web Services
MIRIAM Resources provide several resolution and conversion services, such as retrieving the information stored about a data type, generating a MIRIAM URI from a data type name and the identifier of a dataset, resolving all the physical locations corresponding to a MIRIAM URI, etc. MIRIAM Resources are not designed to be end-user software, but rather tools used by other programs via application-to-application communication. We provide a Web interface to perform queries on the database only as a demonstration of what MIRIAM Web Services can offer. On the contrary, the programmatic access to MIRIAM URIs is the "raison d'être" of MIRIAM Resources. MIRIAM requires quantitative models to be annotated with standard URIs that are perennial and shield the user from the resources distributing the datasets. A software developer, working for instance on a modelling environment or a simulation package, cannot develop support for all the possible Web Services offered by data providers in the life sciences. This developer would not even know which data types would be used by the end users to annotate their models, or would be present in the annotation of imported models. A resolving system necessarily had to be unique. Furthermore, there is not a single source of information for a given data type: for instance, UniProt is accessible through the EBI (UK), the SIB (Switzerland) and the PIR (USA), and Gene Ontology is available through dozens of resources around the world. We offer programmatic access to MIRIAM Database through the Internet via Web Services [31], based on the Simple Object Access Protocol (SOAP) [32], which is itself based on XML [33].
A public definition, which fully describes the methods provided, is available using the Web Services Description Language (WSDL) [34], also an XML-based language. The choice of access based on SOAP, instead of other solutions like the Common Object Request Broker Architecture (CORBA) [35] or the Distributed Component Object Model (DCOM) [36], reflects the fact that we wanted a standard, interoperable, reliable and easy-to-develop solution.

Figure 2: Structure of MIRIAM Resources.

The interoperability is brought by the protocols used: they are standards, mainly created by the World Wide Web Consortium (W3C). Moreover, all the messages are sent using the HTTP protocol [37]; therefore, access through firewalls is possible without any special configuration. Finally, the success of SOAP-based Web Services [38] over the last half-decade means that software exists to make the development of MIRIAM clients very easy.

MIRIAM Library
In order to encourage rapid and widespread usage of MIRIAM Web Services, it was important to decrease the amount of work necessary to implement clients. The creation of a library, written in Java, was undertaken for that purpose. The distributed package comprises a precompiled library (jar), running on all operating systems with a Java Virtual Machine available, and the source code. It is available from the MIRIAM project on SourceForge.net [39], the world's largest open-source software repository and project hosting service, as well as from the MIRIAM Resources pages on the EMBL-EBI Web site.

Figure 3: Detail of an entry of MIRIAM Database. The example represents the entry of Enzyme Nomenclature. Note the three alternative resources giving access to the same data type.

Two versions of the library are available: a standalone version, which does not need any extra software to function properly, and a lighter version without the dependencies (such as Apache Axis [40], the Web Services Description Language for Java Toolkit (WSDL4J) [41], ...).

MIRIAM Web Application
MIRIAM Web Application is the most visible part of MIRIAM Resources. It is a traditional Web application, based on the 1.4 Java 2 Platform Enterprise Edition (J2EE) technologies [42] (such as JavaServer Pages and Servlets). No special framework (like Struts, Spring or Shale) was used in the development, but the internal structure of the application follows the Model-View-Controller (MVC) design pattern [43]. Moreover, a Servlet Controller has been created to handle all the requests. The application runs inside an Apache Tomcat Web container [44], version 5.0. Several other tools from the Apache Software Foundation are used, like Log4j [45] or Database Connection Pooling (DBCP) [46]. The application allows users to browse and query MIRIAM Database, submit new data types for inclusion in the database, export the whole content of the database, and access all the information about the project (see the left menu in Figure 1). The inclusion of new data types submitted through the interface depends on validation by members of the MIRIAM team, after verification that the submission fulfils the MIRIAM requirements. In order to allow a dynamic display of the query interface, Asynchronous JavaScript and XML (AJAX) [47] has been used, via the AjaxTags library [48].

MIRIAM Resources
A vast number of biological data resources and services have arisen over the last decades. However, whether they are located in bioinformatics "hubs" (NCBI, EBI, ...)
or distributed, their structure and mode of access is always specific. Past the institution front page, there is little or no unification or standardisation of access to the data. MIRIAM Resources enable computational systems biologists to access them using a unified scheme. Basically, it is both an identifier scheme registry and a resolution service. It provides several services to the user, mainly dealing with the generation (and storage) of URIs and the retrieval of physical data from those URIs. One of the core features is to provide a unified interface to particular pieces of knowledge, regardless of the specifics of the sources. MIRIAM Resources can be considered as an interoperability framework for scientific collaboration on computational modelling [49].

Curator's point of view
A model curator is a person who encodes, in a standard description format, a model created and described by somebody else, or who corrects a model already encoded. For those curators (or even for the model creators, the people who initially designed the model), there is a need to put additional information on top of the model structure and mathematics. Whatever the format used to encode the model (SBML, CellML, BioPAX, MML, VCML, ...), all the components of the model must be unambiguously identified. Accordingly, the MIRIAM Standard requires that each model constituent is linked to relevant entries in existing freely accessible resources ("external data resources annotation" in the main publication of MIRIAM). One way for a model to be declared MIRIAM compliant is to be accompanied by MIRIAM URIs linked to all the components. The annotation of a model is a tedious but enlightening process. It is nevertheless much easier when coupled with the encoding or curation of the model: a curator already has to acquire a deep understanding of all the components of a model in order to correct its syntax and semantics. Therefore, the only thing needed is to use model editing software, such as SBMLeditor [50], to generate the appropriate URIs based on the knowledge of a relevant accession for a given data type. Of course, this is possible only if the tool uses the getURI() method of MIRIAM Web Services (see below) or has a local version of MIRIAM Database.

Developer's point of view
The developer of software to be used in computational systems biology will have to import models already encoded. If an interface to display them is to be created (Web-based or rich client), one needs to convert all the MIRIAM URIs into, for instance, physical addresses, which can be used to recover the knowledge stored in the entities pointed to by the annotations. The conversion from MIRIAM URIs to physical addresses can be done using the getDataEntries() method of MIRIAM Web Services.

Current status and future developments
A fully functional version of MIRIAM Resources is already available online, providing all the services described in this article. Around forty different data types are currently recorded. Several projects already use MIRIAM Resources to resolve their annotations, such as BioModels Database [6] or the E-MeP project [51]. As the adoption of MIRIAM Resources spreads in the community, the number of data types should grow accordingly. As usage of the resources increases, new needs will necessarily appear. New methods will be developed and added to the application via new releases. Users are encouraged to propose new data types as well as ideas to improve the resources.
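To make the developer's workflow concrete, here is a schematic sketch of the resolution step: one MIRIAM URI maps to several physical locations. A local dictionary stands in for MIRIAM Database; a real client would instead call the getDataEntries() method of MIRIAM Web Services over SOAP. The table contents and helper name below are illustrative assumptions, not actual database entries.

```python
# Schematic resolution of a MIRIAM URI into physical URLs. In production
# this lookup would be a SOAP call to getDataEntries(); here a small
# local table plays the role of MIRIAM Database.

RESOURCE_TABLE = {
    # data-type URI -> templates for the physical data entries,
    # with '$id' marking where the element identifier is inserted
    "http://www.pubmed.gov/": [
        "https://www.ncbi.nlm.nih.gov/pubmed/$id",
    ],
}

def get_data_entries(miriam_uri: str) -> list:
    """Resolve a MIRIAM URI into the physical URLs serving that entry."""
    authority, _, element_id = miriam_uri.rpartition("#")
    templates = RESOURCE_TABLE.get(authority, [])
    return [t.replace("$id", element_id) for t in templates]

print(get_data_entries("http://www.pubmed.gov/#16381840"))
```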
Another way of improving the data already stored would be to provide, in addition to the addresses of Web pages presenting information about a relevant dataset, programmatic access to the dataset itself (for instance via Web Services). A wider range of applications would then be able to retrieve the information. MIRIAM and MIRIAM Resources were born in the field of Computational Systems Biology, in order to fulfil the need for a better annotation of biochemical models. Nevertheless, the current tools can be used in many other fields where similar issues exist: to identify datasets and be able to retrieve them consistently via a network. This is why the source code of the whole project (including the Web application, the Web Services and the library) is released under the terms of the GNU General Public License. Therefore, everybody is able to set up their own local resource to manage the data types they use and need.

Conclusions
The project has now reached a fully functional and stable state; MIRIAM Resources can therefore be safely adopted by model databases and software projects. As an example, it is currently used by BioModels Database to process the annotations of the models into relevant hyperlinks. It is also used by SBMLeditor for the creation of MIRIAM-compliant models in SBML. We hope that this work will help the adoption of MIRIAM as a standard rather than a mere set of guidelines, by providing tools that allow the community to easily create and annotate MIRIAM-compliant models.

Availability and requirements
MIRIAM Resources are accessible on the EMBL-EBI Web site, at the following address: http://www.ebi.ac.uk/miriam/. The source code of the whole project (MIRIAM Web Services, MIRIAM Library and MIRIAM Web Application) is available under the GNU General Public License (GPL) and can be downloaded at: http://sourceforge.net/projects/miriam/.
5,749.8
2007-12-13T00:00:00.000
[ "Computer Science", "Biology" ]
Subsurface structure identification using derivative analyses of the magnetic data in the Candi Umbul-Telomoyo geothermal prospect area

The Telomoyo geothermal prospect area is located in Central Java, Indonesia. One of the manifestations around Telomoyo is a warm spring, called Candi Umbul. The hydrothermal fluids from the manifestation could flow up from the subsurface through geological structures. Previous research on 2D magnetic modeling in Candi Umbul showed that there was a normal fault with a strike/dip of N60°E/45°. This research aims to determine the boundary location and the kind of geological structure in the study area. We also compared the geological structure direction based on the geologic map and the derivative maps. We used derivative analyses of the magnetic data, i.e. the First Horizontal Derivative (FHD), which is the rate of change of the horizontal gradient in the horizontal direction. FHD indicates the boundaries of the geological structure. We also used the Second Vertical Derivative (SVD), which is the rate of change of the vertical gradient in the vertical direction. SVD can reveal a normal fault or a thrust fault. The FHD and SVD maps show that the geological structure boundary has the same direction as the northwest-southeast geological structure. The geological structure boundary lies at 486 m of the local distance. Our result confirms that there is a normal fault in the study area.

Introduction
Indonesia is a country located in the Ring of Fire zone and consequently has about 127 active volcanoes [1]. These volcanoes are the sources of some geothermal prospect areas in Indonesia, especially on Java Island, which is dominated by volcanic activity. One of the geothermal areas in Java is Telomoyo. One of the manifestations around Telomoyo is a warm spring, called Candi Umbul. This warm spring has chloride-type water with a temperature of about 36°C and a pH of 7.6 [2]. Based on research on structural lineaments from satellite imagery data, the geothermal prospect area in Candi Umbul is correlated with Telomoyo activity [3]. Besides that, there has been VLF-EM and 2D magnetic modeling research. The VLF-EM result showed that there was a high-conductivity zone, associated with the subsurface structure. The high-conductivity zone was located at 5600 m in local coordinates [4]. The 2D magnetic modeling in Candi Umbul showed that there was a normal fault with a strike/dip of N60°E/45° [5]. This research aims to determine the boundary location and the kind of geological structure in the Candi Umbul-Telomoyo geothermal prospect area. We used derivative analyses of the magnetic data, i.e. the first horizontal derivative and the second vertical derivative. We also compared the geological structure direction based on the geologic map and the derivative maps.

Figure 1: Research location [6]. The survey design points overlay the geologic map [7]. The yellow rectangle is the map of Indonesia, while the green rectangle is the map of Java Island. The black rectangle on the Java Island map shows the research location.

Materials and Methods
This research is a continuation of previous research. The method and the data are the same, but we used different analyses. In this research, we used the magnetic method. The magnetic method is a potential-field method which measures the earth's magnetic field intensities. The magnetic field intensities are obtained from the magnetic properties of the underlying rocks and the environment the rocks are in [8].
To delineate and analyze the structures beneath the surface in the Candi Umbul-Telomoyo geothermal prospect area, we applied the First Horizontal Derivative (FHD) and the Second Vertical Derivative (SVD) to the magnetic data. Both analyses aim to locate the margins of the magnetic sources [9]. Before computing the FHD and SVD, the data were transformed with a Reduction to Pole (RTP) because the study area still produced dipole data (Figure 2). RTP is a filter that makes the data easier to interpret. It provides a simple approach to improve realistic estimations of the source of anomalies [10]. Dipole data have an asymmetric pattern [11]. This asymmetric pattern depends on the shape of the perturbing body, the direction of the magnetic field, and the inclination angle of the study area [12].

The First Horizontal Derivative (FHD) is the rate of change of the horizontal gradient in the horizontal direction. FHD indicates the boundaries of the geological structure. This derivative analysis is used to delineate high-frequency features clearly [13]. The gradient amplitude of the first horizontal derivative can be defined by [14]

  FHD = sqrt( (∂T/∂x)^2 + (∂T/∂y)^2 ),

where T is the magnetic anomaly. The Second Vertical Derivative (SVD) can be obtained from the horizontal derivatives via Laplace's equation for potential fields, ∂²T/∂z² = −(∂²T/∂x² + ∂²T/∂y²). The SVD can reveal a normal fault or a thrust fault: if the maximum absolute value is greater than the minimum absolute value, we conclude that it is a normal fault; vice versa, we conclude that it is a thrust fault [16]. Besides that, we also used the geologic map from Figure 1 to determine the geological structure in the study area.

The FHD and SVD maps show that the geological structure boundary has the same direction as the northwest-southeast geological structure. The only geological structure that appears in the FHD and SVD maps is on the west side. The other geological structures do not appear on the maps because the lithology is dominated by andesite in the middle and on the east side. Andesite is an igneous rock that contains some magnetic minerals. Those minerals have a strong response in the magnetic anomalies, so the lithology response is stronger than the geological structure response.

Results and Discussions
We made two graphs of FHD and SVD along the A-B slicing line of Figures 3 and 4. The first horizontal derivative tends to show a maximum or minimum value in an area with a geological structure. In Figure 5a, the peak of the graph shows the maximum value. This maximum value indicates the geological structure boundary. From the graph, we know that the geological structure boundary is located at a local distance of 486 m, with an FHD value of 0.489 nT/m. To determine the kind of geological structure in the study area, we used the SVD graph in Figure 5b. In this graph, the anomaly caused by the geological structure has a maximum absolute value and a minimum absolute value. The maximum absolute value is 0.00261 nT/m² and the minimum absolute value is 0.00239 nT/m². Since the maximum absolute value is greater than the minimum absolute value, we conclude that it is a normal fault. This research has confirmed the previous 2D magnetic modeling research: the geological structure in the study area is a normal fault. Besides that, this research has confirmed the geological map regarding the northwest-southeast direction of the west-side geological structure.

Conclusions
Based on the results, only the west-side geological structure boundary appears in the FHD and SVD maps. The FHD and SVD maps show that the geological structure boundary has the same direction as the northwest-southeast geological structure.
This geological structure boundary correlates with the maximum value of the FHD map and with the zero value in the SVD map. The FHD graph shows that the geological structure boundary is at 486 m of the local distance, with a maximum FHD value of 0.489 nT/m, while the SVD graph has a maximum absolute value of about 0.00261 nT/m² and a minimum absolute value of about 0.00239 nT/m². Since the maximum absolute value is greater than the minimum absolute value, we conclude that the geological structure in the study area is a normal fault.
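The two filters are straightforward to reproduce on gridded data. The sketch below is an illustration added here, not code from the study: it computes the FHD as the total horizontal gradient of a synthetic RTP anomaly and obtains the SVD from Laplace's equation for potential fields. The grid spacing and the step-like anomaly placed near 486 m are assumptions for demonstration.

```python
# Minimal numpy sketch of the FHD and SVD filters applied to a gridded
# RTP magnetic anomaly T(x, y). FHD = sqrt((dT/dx)^2 + (dT/dy)^2);
# SVD follows from Laplace's equation, d2T/dz2 = -(d2T/dx2 + d2T/dy2).
import numpy as np

dx = dy = 25.0                                   # grid spacing in metres
x = np.arange(0, 1000, dx)
y = np.arange(0, 1000, dy)
X, Y = np.meshgrid(x, y)
T = 50.0 * np.tanh((X - 486.0) / 100.0)          # step-like anomaly near 486 m

dT_dy, dT_dx = np.gradient(T, dy, dx)            # horizontal derivatives
fhd = np.hypot(dT_dx, dT_dy)                     # first horizontal derivative

d2T_dx2 = np.gradient(dT_dx, dx, axis=1)
d2T_dy2 = np.gradient(dT_dy, dy, axis=0)
svd = -(d2T_dx2 + d2T_dy2)                       # second vertical derivative

row = T.shape[0] // 2
print("FHD peak at x =", x[np.argmax(fhd[row])], "m")  # near the 486 m step
```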
1,747.2
2018-04-01T00:00:00.000
[ "Geology" ]
A Comparison Analysis Between Pre-departure and Transitioned Expat-Preneurs

This paper contributes to the understanding of the reasons that lead to entrepreneurship in other countries. We focus on expat-preneurs, those who decided to undertake business opportunities in other countries (before or after settling there). Using comparison analysis and logistic regression, we examine pre-departure and transitioned expat-preneurs' demographic characteristics and the push-pull factors that led them to expatriate. From a survey conducted in 2015-2016 of 5,532 Lithuanians expatriated in 24 countries, a sample of 308 respondents with their own businesses abroad was selected. This research contributes to the literature on expat-preneurs with empirical evidence on pre-departure and transitioned self-initiated (SI) expat-preneurs. The results revealed that demographic features matter when studying such global entrepreneurs. It is a process experienced differently by males and females and, as such, it can be considered gender-selective. Thus, more pre-departure expat-preneurs are male than female, but there is a growing number of female transitioned expat-preneurs. Pre-departure expat-preneurs are older and less educated than transitioned ones; they have been pushed to move abroad by issues such as political corruption or a non-supportive tax system, and are attracted by a higher possibility of self-realisation as well as the prestige of the host country. Meanwhile, transitioned expat-preneurs have been pushed to emigrate by family reasons or too few employment opportunities in their home country.

INTRODUCTION
Nowadays, more and more people work abroad. In 2017, it was estimated that there were 66.2 million expatriates worldwide, which represents 0.77 percent of the total global population (Finaccord, 2018; Hussain et al., 2019). "Being rooted in a profession rather than a country and trying to find the best possibility to work in that profession without being limited by national borders is what reflects the reality of many - especially highly skilled - individuals of our time" (Agha-Alikhani, 2018, p. 2). The growing involvement of expatriates in the development of entrepreneurial businesses has been observed together with the increasing expatriation numbers (Sekliuckienė et al., 2014; van Rooij and Margaryan, 2019; Internations, 2020). Moreover, Baycan-Levent and Nijkamp (2009) highlighted that, in general, foreigners are more likely to become entrepreneurs than similarly skilled native-born workers, and self-employment rates of foreigners in many countries exceed those of the native-born. To date, expat-preneurs themselves are not a much-analysed phenomenon, despite the current context of globalisation. Vance et al. (2016) presented the concept of expat-preneurs, dividing them into pre-departure and transitioned expat-preneurs, and posed potential research questions in this field. Paik et al. (2017) theoretically analysed self-initiated expatriates (SIEs) who become expat-preneurs, and Selmer et al. (2018) focused on a comparison of SIEs with expat-preneurs coming from assigned expatriates (AEs). However, the aim of this paper is to compare the demographic characteristics and motivations to expatriate of pre-departure and transitioned SI expat-preneurs, something that has not been done in previous studies. As the basis for this study, we concentrate only on Lithuania.
Since the restitution of Lithuanian independence in 1990 and the collapse of the Soviet Union, the Lithuanian net migration indicator has been negative (Migration in numbers, 2020). Lithuania is therefore a good example for a deeper look at the phenomenon of expatriation. The following comparison analysis is based on Lithuanian expat-preneurs (people who moved from Lithuania and established businesses abroad). Our paper is organised as follows. First, the meaning of expat-preneur is presented, with a focus on two types in particular: pre-departure and transitioned expat-preneurs. Second, the concept of the expat-preneur and its demographic profile is reviewed, and an analysis of push-pull factors influencing the decision to leave the home country finalises the theoretical part of the paper. The research model and method are presented in the methodology section. The results of the quantitative research on Lithuanian expat-preneurs in 24 countries are provided later. Discussion, conclusions, limitations, future research directions, and practical implications finalise the paper.

Self-Initiated Expatriates
The concept of SIEs was first introduced by Suutari and Brewster (2000), where the authors presented self-initiated expatriates in contrast with assigned expatriates, the latter being expatriates sent abroad by their employer (Arp et al., 2013). In comparison with AEs, SIEs are described as individuals who decide to look for international work experience on their own initiative (Fitzgerald and Howe-Walsh, 2008; Andresen et al., 2014; Meuer et al., 2019; Andresen et al., 2020). In other words, they are conceptualised as free agents who cross organisational and national borders, unobstructed by barriers that constrain their career choices (Inkson et al., 1997). Froese and Peltokorpi (2013) and Fee and Gray (2020) highlight that the demand for SIEs is on the rise, especially in Europe and Asia (McNulty et al., 2013). In addition, skilled SIEs constitute a valuable asset to the worldwide economy (Doherty and Dickmann, 2008; Fairlie, 2010; Hussain et al., 2019). Comparing statistical data on SIEs, 15 percent of them found a job on their own, 13 percent were sent by an employer, and 6 percent were recruited by a local company (Statistics Lithuania, 2016). An essential characteristic of SIEs is that they leave their home country voluntarily for a predetermined period of time without the intention of becoming permanent citizens of the host country (Baruch et al., 2007; Tharenou, 2010; Du Plessis, 2015; Vance and Paik, 2015; McNulty and Brewster, 2016; Meuer et al., 2019; Andresen et al., 2020). However, Al Ariss and Özbilgin (2010, p. 276) note that "the difference between SI expatriates and immigrant workers often remains implicit <...>. Both forms of expatriation are, in fact, not so different; many SI expatriates stay on a permanent basis and thus become permanent immigrants". Therefore, another feature distinguishing migrants from expatriates is status in the host country: as long as foreigners do not have a permanent permit or visa to stay in the host country, they remain expatriates, and after obtaining one their status changes to migrants (Al Ariss and Özbilgin, 2010; McNulty and Brewster, 2016). Any intention of becoming permanent citizens increases with the duration of the stay in the host country (Kumpikaitė-Valiūnienė and Žičkutė, 2017).

Pre-departure and Transitioned Expat-Preneurs
'Expat-preneurs' is a concept presented by Vance et al. (2016).
It defines employees who go or remain abroad to start a new business in a host country, or who join in local host-country entrepreneurial activities (Vance et al., 2016). Therefore, we could describe expat-preneurs as self-employed expatriates. The literature on the subject establishes three main differences between ethnic entrepreneurs and expat-preneurs (Vance et al., 2016; Girling and Bamwenda, 2018). Firstly, expat-preneurs stay temporarily in the host country, but ethnic entrepreneurs stay long-term. Also, expat-preneurs are not "necessity entrepreneurs." Finally, expat-preneurs usually come from a developed economy. This means expat-preneurs are in a more advantageous position than ethnic entrepreneurs: they are not compelled by circumstances to stay in the host country or to start their own business, but do so of their own free will. Vance et al. (2016) distinguish two different types of expat-preneurs. Some move abroad with an entrepreneurial purpose, or they try to expand their business from their home country to a new location. This means that these people have 'entrepreneurial intentions' before moving abroad, which explains an individual's willingness to start a business (Díaz-García and Jiménez-Moreno, 2010; Bastian, 2017). These expatriates are called 'pre-departure expat-preneurs' (Vance et al., 2016). The other type of expat-preneurs do not have any intention of being self-employed before departure. They decide to move abroad, leaving their employer or the status of unemployment. After being in the host country for some time, they then start up their own business. This group of expatriates is called 'transitioned expat-preneurs' (Vance et al., 2016). In addition, Block and Wagner (2010) call this type of entrepreneur 'opportunity entrepreneurs', as they are more likely than others to be alert to business opportunities. The rising field of research on 'pre-departure' and 'transitioned' expat-preneurs and the need for empirical evidence provide the drive for further exploration of these types of expat-preneurs, and for identifying their characteristics and differences.

Reasons for Foreigners to Become Entrepreneurs
Schumpeter's theory addresses how entrepreneurs take risks in the pursuit of their goals and profits (Girling and Bamwenda, 2018). According to Kirkwood (2009), research on entrepreneurship motivation shows that both push and pull factors play a role for any individual entrepreneur wanting to open a business. Patil and Deshpande (2019), when analysing female entrepreneurial motivation, note that among the pull factors are passion, independence, capital availability, and the self-growth of a person, and among the push factors are economic necessity, financial burden, and loss of employment. In addition, environmental conditions for establishing and developing a business are important too. Regarding foreigners, more factors need to be considered. Theoretical approaches that accommodate this emerging trend come from studies of international ethnic entrepreneurship and migration flows (Ilhan-Nas et al., 2011; Kumpikaitė-Valiūnienė and Žičkutė, 2017; Girling and Bamwenda, 2018). In addition, in the context of entrepreneurial ventures, theories such as the cultural approach and mixed embeddedness theory, which point out demographic and cultural traits that a population shares, could explain the level of entrepreneurial success of foreigners (Masurel et al., 2002; Girling and Bamwenda, 2018; Arseneault, 2020).
The literature on migrant entrepreneurs focuses on migrants coming from undeveloped or developing countries to developed countries. The study by Moremong-Nganunu et al. (2018) on the biggest migrant entrepreneurial ethnic groups, such as Arabian, African, Asian, and South Asian, noted that entrepreneurial capabilities vary among different ethnic groups. In line with embeddedness theory, Bloch and McKay (2015), Rogerson and Mushawemhuka (2015), and Dannecker and Cakir (2016) found that good support in the host country and social-cultural capital are very important for entrepreneurial success. After a literature analysis on migrant entrepreneurs, Agoh and Kumpikaite-Valiuniene (2018) highlighted the main conditions leading migrants to become entrepreneurs. These conditions include a lack of jobs abroad, highly competitive job markets, a lack of skills in certain cases, a lack of language skills, cultural differences, discrimination in workplaces, determination to grow, personal entrepreneurial spirit, knowledge of the business, and internet business skills. Therefore, quite often the decision of migrant entrepreneurs to start their own business is based on necessity. However, according to the expat-preneurial definition by Vance et al. (2016), expat-preneurs move from developed countries to other developed countries. Therefore, we suppose that they should be less necessity-driven entrepreneurs. Usually, these expatriates are educated and do not face any issues with language or discrimination. Factors that are important for them in starting their own business include a lack of career possibilities, a wish for independence and self-development, and finding a suitable business environment. We propose that some differences between pre-departure and transitioned expat-entrepreneurs might be revealed by looking at gender, age, and educational background.

The Demographic Characteristics of Expat-Preneurs
Concerning the gender issue, until the 20th century men predominated in moving to another country in order to pursue business opportunities, and the scientific literature reflected this reality. Based on liberal feminist theory, men and women are essentially similar (Harding, 1987) and are seen as equally able to think rationally. Therefore, any subordination of females is connected with discrimination or structural barriers, such as unequal access to education. Bruni et al. (2004) noted three main barriers against female entrepreneurship. The first one could be described as the socio-cultural status of women, which is connected to the role of women with respect to responsibilities toward family, children, and housing. The second barrier is associated with access to networks of information and assistance. Finally, the third highlighted barrier is access to capital. Women face problems searching for financial support; this is associated with the stereotype that 'women can't handle money' and is connected to the two previous barriers. This corresponds with the mixed embeddedness theory (Girling and Bamwenda, 2018). Empirical evidence from the study of Azmat and Fujimoto (2016) on Indian female entrepreneurs in Australia highlighted that their success massively depended on their family embeddedness and cultural heritage. According to the Global Entrepreneurship Monitor (2015), the phenomenon of entrepreneurship is growing among women, although they are still less involved in entrepreneurial activities in comparison to men.
This can be seen in both developed and developing countries (Patil and Deshpande, 2019). Figures for Lithuania in 2014 show that 59,700 females (8.9 percent) and 83,300 males (12.9 percent) were self-employed. In 2015, the corresponding figures were 58,600 (8.6 percent) for women and 59,900 (9.3 percent) for men, both lower than the year before (Department of Statistics, 2017). Concerning entrepreneurial age and gender, studies by Brockhaus (1982) and Hisrich and Peters (1996) demonstrated that entrepreneurial decisions in general are taken between the ages of 25 and 40. However, some differences in relation to females can be noted. Langowitz and Minniti (2007) highlighted that the most entrepreneurially active age of females was between 25 and 34 years, declining thereafter, which corresponds with the findings of Hisrich and Peters (1996). However, Still and Guerin's (1987) earlier findings showed that female entrepreneurs tended to be older, between the ages of 30 and 40. Also, Boden and Nucci (2000) analysed new business ventures with data on men and women from 1982 to 1987. This study pointed out differences in education and the amount of work experience, confirming a certain disadvantage in the case of female entrepreneurs. In addition, in the study by Gathenya et al. (2011) carried out in Kenya, the majority of female entrepreneurs were between 22 and 48 years old. As Gathenya et al. (2011) highlight, this "age bracket is considered as the most entrepreneurially active age which contributes positively to the performance of enterprises." The situation of expatriates, however, is somewhat different. A study on expatriates by Selmer et al. (2018) showed that expat-preneurs were older than company-employed expats, with an average age of 44. Concerning the educational attainment of entrepreneurs, Brockhaus (1982) noted that managers tend to be more highly skilled than entrepreneurs, but entrepreneurs tend to have a higher level of education than the general public. Moreover, Leonard (2010) noted that entrepreneurship is popular among SIEs, and particularly among women, who are usually less involved in assigned expatriation agreements. The motivations for the expatriation and careers of female SIEs are complex and varied (Muir et al., 2014). Based on the study by Vance and McNulty (2014), 34 percent of females were SIEs and self-employed as consultants or small business owners, versus 25 percent of men. With this in mind, the assumption is that expat-preneurs could be older than regular entrepreneurs and, moreover, that pre-departure expat-preneurs are older still, as they had already established their own business in their home country. In comparison to men, more and more females are gaining expat-preneur experience. However, there is not much evidence about the demographic characteristics of expat-preneurs, especially with regard to pre-departure and transitioned expatriates. Therefore, we propose the following hypothesis H1 in relation to demographic characteristics:

H1. There are significant differences between the demographic characteristics of pre-departure and transitioned expat-preneurs.

Push and Pull Factors Explaining the Decision to Expatriate
The push and pull theory is the most popular theory explaining the process of human migration. Therefore, in order to analyse the reasons for the expatriation of pre-departure and transitioned SI expat-preneurs, push-pull factors were taken as the basis.
In this sense, Kumpikaitė-Valiūnienė and Žičkutė (2017) reviewed the decision-making theories of migration and highlighted the main push-pull factors (see Table 1). Economic or non-economic determinants can be attributed to "demand-pull" in the destination country, "supply-push" in the homeland, and network factors as the linkage between the two (Kumpikaitė-Valiūnienė and Žičkutė, 2017; Mihi-Ramirez et al., 2017). In conjunction with the SIE concept and the traditional migration theories, push and pull factors were applied in the context of expatriation. Given that pre-departure and transitioned expat-preneurs moved abroad with different previous entrepreneurship experience and, therefore, different primary intentions, we suppose that their decisions to expatriate differ, and so we propose hypothesis H2.

H2: There are significant differences in push and pull factors between pre-departure and transitioned expat-preneurs.

To summarise, the theoretical model of the study is presented in Figure 1.

Context of the Research
Lithuania is a small EU country situated along the south-eastern shore of the Baltic Sea, to the east of Sweden and Denmark. Its population is just 2.7 million and has steadily decreased because of a low birth rate and high expatriation. This decline started back in 1990, when Lithuania's independence was restored after 50 years of Soviet occupation. The whole period after independence can be divided into four emigration waves (Kumpikaitė-Valiūnienė, 2019). The first wave covers the period from independence in 1990 to 2003; the second wave started after joining the EU in 2004; the third wave started in 2009 with the economic crisis and Lithuania joining the Schengen Area; and the last wave started after joining the Euro zone in 2015. Most Lithuanians moved to more developed European countries and to the United States. Historically, Lithuanians used to migrate to the United States, with large numbers doing so from the end of the 19th century, and it remained the most attractive country to move to until 2004, when Lithuania joined the EU. At that time, the United Kingdom, Ireland, Germany, and Spain became more popular, and later, after the economic crisis, Norway joined the list of favourite countries. Although Lithuania is a developed country, it is economically weaker than the majority of older EU member states. Comparing the purchasing power standard (PPS) and the average salary among EU countries, in 2015 the EU PPS average was 1.0, in the United Kingdom 1.7, Germany 1.6, Ireland 1.4, Spain 0.9, and in Lithuania 0.6 (Statistical office of the European Union Eurostat, 2016). At similar or lower levels were Slovakia, Latvia, Hungary, Czechia, Romania, and Bulgaria. Average salaries in 2014 were 2,690 EUR in Sweden, 2,597 EUR in the United Kingdom, 2,160 EUR in Ireland, 2,054 EUR in Germany, and 524 EUR in Lithuania (Fischer, 2018). In Lithuania, more than 80 percent of all companies are small, with no more than nine employees (Versli Lietuva, 2017). Career perspectives are therefore very limited in Lithuania. In summary, Lithuanians move to foreign countries for better work, career, and economic perspectives, and the country therefore provides a good example for analysing its expat-preneurs.

Sample and Procedure
The survey method was selected for the research. Data gathering was completed online for several reasons: Shaffer et al. (2006) note that the response rate for expatriates is low, averaging 15 percent.
In addition, it is difficult to access expat-preneur information, as there is no available statistical data about Lithuanian expat-preneurs. Therefore, a decision was taken to separate expat-preneurs from the general group of expatriates. An invitation to participate in the survey with a link to an online questionnaire was delivered to Lithuanian expatriates abroad through social media and websites. A call to participate in the study was also listed on Lithuanian expatriates' webpages in different countries. The data was collected in October 2015 and from October to December 2016; verifying the answers and analysing them took considerably more time. In total, 1,586 respondents completed the questionnaire in October 2015 and 3,946 respondents participated in the survey from October to December 2016. Of the total participants, 308 respondents were selected as the sample for this study according to their current occupation. The sample was taken only from those respondents who had their own business outside of the home country, i.e., SI expat-preneurs. The status of SI expatriation was checked with the question 'Who initiated your expatriation?', which offered a selection of multiple answers. In addition, none of the respondents had citizenship in the host country and, therefore, based on the approach we apply in this paper, taken from Al Ariss and Özbilgin (2010) and McNulty and Brewster (2016), they could not be called migrants. The sample consisted of two particular groups: pre-departure and transitioned SI expat-preneurs. Of these, a total of 250 respondents (81.2 percent of the sample) started their businesses abroad with previous experience of being employed by others, studying, or being unemployed in Lithuania. These were classified as transitioned expat-preneurs. The remaining 58 respondents (18.8 percent of the sample) were self-employed entrepreneurs in Lithuania before leaving and represented pre-departure expat-preneurs in the sample. The demographic characteristics of pre-departure and transitioned expat-preneurs in the sample are presented in Table 2. In general, expat-preneurs from 24 countries participated in this study. The most attractive destination countries for the sample participants were the same as for the total Lithuanian population of expatriates, i.e., the United Kingdom, Norway, and the United States. Almost half of the respondents (46.4 percent) were 30-39 years old, with two additional groups having similar percentages: 40-49 years and 20-29 years old (respectively, 23.7 and 21.1 percent). Additionally, 67.9 percent of the sample were females (209 respondents), and 68.8 percent of the sample had a degree of higher education (212 respondents). Respondents were divided into four groups based on the period of their departure. This grouping was done according to the four emigration waves in Lithuania highlighted by Kumpikaitė-Valiūnienė (2019). Measures The study had an exploratory nature, with single-question items for several key concepts and their constructs (Wanous et al., 1997). Push and pull factors of an economic and non-economic nature (8 economic and 4 non-economic push factors, and 11 pull factors of each type) were measured as independent variables for pre-departure or transitioned SI expat-preneurs' paths. The list of factors provided and tested by Kumpikaitė-Valiūnienė and Žičkutė (2017) was used in the questionnaire.
A general question about the reasons for initiating self-expatriation was given to respondents, along with the list of factors, unlimited choices, and an open answer for providing any other factors not in the list that might arise from the expat-preneur's experience. Each factor was coded as a separate variable (0 = not selected, 1 = selected). The occupation of respondents was measured by two questions, asking for identification of the last occupation in their home country and the current occupation in their host country. The same list of 14 occupations (army officers, managers, specialists, technicians and junior specialists, office employees, services employees and sellers, qualified specialists of agriculture, qualified workers and masters, plant and machine operators and assemblers, unskilled workers, self-employed, students, unemployed, and housewives) was used for both questions, with one open answer for other options, taken from Kumpikaitė-Valiūnienė and Žičkutė (2017). This measurement allowed for the selection of expat-preneurs only, composing the sample of 308 respondents, and assigned them to the particular group of pre-departure or transitioned expat-preneurs. A dummy variable for the groups of pre-departure (1) and transitioned (0) expat-preneurs was created. In addition to demographic characteristics, such as gender, age, and education, two further characteristics related to Lithuania as the research context, the departure period and host country of respondents, were included. The departure period reflects the four Lithuanian migration waves (Kumpikaitė-Valiūnienė, 2019) and was measured by a question with five ranges for an answer (from 1 = until 1990, to 5 = since 2015 and later). A list of countries was provided for the host country, used for analysis as a nominal variable. Other demographic characteristics of respondents, such as gender, age, and education, were measured by a single question each. Age was recorded in five ranges (from 1 = 19 years and less, to 5 = 50 years and more) and used for further analysis. Education was measured in several levels and later coded into dummy variables (1 = secondary and professional, 2 = higher education). Methods of Analysis A comparison of pre-departure and transitioned expat-preneurs' demographic characteristics and push-pull factors was conducted using the Mann-Whitney U rank test. Logistic regression was used to measure the impact of push and pull factors (independent variables), departure period and host country (control variables from the research context), and demographic characteristics such as gender, age, and education (control variables) on pre-departure or transitioned SI expat-preneurs' paths (dependent variable). Comparison Analysis Two independent groups of pre-departure and transitioned expat-preneurs were analysed according to demographic characteristics and push and pull factors of expatriation. Differences between the two groups were found for gender, age, and education but not for the departure period (see Table 3), confirming Hypothesis H1. Comparative analysis results show that pre-departure expat-preneurs were older and less educated than transitioned expat-preneurs, and there were more males than females among them. Looking at the work positions, 15.8 percent of transitioned expat-preneurs worked in the services sector, 14.5 percent studied, and 11.2 percent were specialists in Lithuania before they expatriated.
The largest share (more than 40 percent) of both groups left Lithuania during the third emigration wave. Of the pre-departure expat-preneurs, 90.7 percent were satisfied with their career, compared to 80.5 percent of transitioned expat-preneurs. The analysis of all push and pull factors for the expat-preneur groups (pre-departure and transitioned) revealed significant differences for only six single factors (see Table 4). We found differences in economic push factors between pre-departure and transitioned expat-preneurs. Our results show that a significant push factor for expat-preneurs is a non-supportive tax system. This was more important for pre-departure expat-preneurs than for transitioned expat-preneurs. However, having too few employment opportunities was a more important push factor for transitioned expat-preneurs. Similar effects were found in non-economic push factors. Political corruption in Lithuania was a more common non-economic push factor for pre-departure expat-preneurs, while family reasons played a more important role for transitioned expat-preneurs. Only two non-economic pull factors from the whole group revealed differences between pre-departure and transitioned expat-preneurs, with both differences in the same direction. The higher possibility of self-realisation, as well as host country prestige, exerted a stronger pull effect on pre-departure expat-preneurs than on transitioned ones. Comparing the profiles of pre-departure and transitioned expat-preneurs (see Figure 2), differences existed, but they appeared in only six of the 34 factors, so Hypothesis H2 was confirmed only for these factors. Regression Analysis According to the theoretical model, three models were tested using logistic regression (see Table 5). The results showed that the push and pull factors (model 1) that differ between pre-departure and transitioned expat-preneurs correctly predicted 81.8 percent of the expat-preneurs' types. Adding demographic variables to the models (model 2 and model 3) raised the prediction up to 86.3 percent, with an R square of 0.375. In all three models, too few employment opportunities played an important economic push role on the path of pre-departure and transitioned expat-preneurs. In the first and second models, the additional impact of a non-supportive tax system can be seen. The first model also included the impact of political corruption in Lithuania. In the third model, age and education, but not gender, were significant, improving the R square even further. In summary, all three models represented a good fit and confirmed the impact of the tested variables on the types of expat-preneurs. DISCUSSION Traditionally, most theories and studies describe foreign entrepreneurs as people who migrate to more developed countries out of necessity. Our results deepen the understanding of the motivations of entrepreneurs from developed countries, and of the differences between pre-departure and transitioned expat-preneurs, through a focus on expatriation reasons and demographic characteristics. Theories about international entrepreneurship, such as the cultural approach and the mixed embeddedness theory, have had limited empirical support so far. Our results support them, confirming the relevance of a demographic profile for different types of expat-preneurs.
Thus, the analysis of international business activity should include differences between traditional ethnic migrants and new expatriate pre-departure and transitioned entrepreneurs, broadening the scope of the analysis of such theories. In this line, our results highlight the existence of discrepancies between international ethnic entrepreneurs (South to North) and expat-entrepreneurs (from developed countries), thus contributing to research calling for space to include expat-preneurs in entrepreneurship theories (Andresen et al., 2014, 2020; Vance et al., 2016; Girling and Bamwenda, 2018; Meuer et al., 2019). Some new insights about gender issues were revealed in the study. The gender issue matters when studying global entrepreneurs. Any overseas venture is a process experienced differently by males and females and could therefore be considered sex-selective. Males especially dominate among assigned expatriates. Tendencies have been changing in the last 20 years, and the gender approach in international entrepreneurship processes has become very important. Beyond this, the data analysis of this study found that more females who were not self-employed in Lithuania became expat-preneurs in their host countries. This could be explained by the fact that more females left their home country due to family reasons and therefore came to entrepreneurial activities later (Leonard, 2010). Our study revealed that more females are transitioned expat-preneurs. It is probable that after some time spent abroad, females see expat-preneurship as an opportunity to be employed (Lewin, 1998) and/or to take up and follow activities that they have always wanted to do. No statistically significant difference in education by gender was found in our sample. This did not correspond with the findings of Boden and Nucci (2000), who highlight that females are seen as having insufficient education or experience. Such findings provide the new insight that expat-preneurs are, by their nature, less necessity-driven entrepreneurs than migrant entrepreneurs. Expat-preneurs come from developed countries, their education does not depend on gender, and the majority of them have attained a level of higher education. However, these results, based on a single-country case, provide only a few insights, and they need deeper analysis and comparison with other developed and developing countries. Looking at other demographic characteristics, the results show that pre-departure expat-preneurs are older and less educated than transitioned expat-preneurs. This partly corresponds with the study of Selmer et al. (2018), which showed that expat-preneurs were older than company-employed expatriates. In our study, some respondents who graduated from high school abroad and decided to start their own business were younger and more highly educated. As previously mentioned, the business environment is an important factor for entrepreneurship (Kirkwood, 2009; Patil and Deshpande, 2019). Due to the specifics of our study, the analysis was based on expatriation push-pull factors and economic indicators of the home and the main host countries. Political corruption in the home country and a non-supportive tax system were identified as the most important expatriation factors for pre-departure expat-preneurs. This shows that people were looking for better business opportunities abroad.
As an example, the 2015 corruption perception index (where 0 means highly corrupt and 100 very clean) was 81 in the United Kingdom, 76 in the United States, 89 in Sweden, 88 in Norway, 75 in Ireland, 81 in Germany, 91 in Denmark, and 58 in Spain, in comparison to 59 in Lithuania (Transparency International, 2018). Based on this data, most of the main destinations for Lithuanians were less corrupt than Lithuania. It was more complicated to compare tax systems in different countries, as they depend on business types and sizes and on the various regulations in each country. Corporate tax in these destination countries varied from the lowest of 12.5 percent in Ireland up to 40 percent in the United States, with Lithuania at 15 percent (KPMG, 2018). Comparing the ranking of 80 countries in 2019 in terms of where best to start a business, Lithuania was #53, the United States #11, the United Kingdom #13, Sweden #18, Germany #25, and Spain #33 (U.S. News, 2020). However, the wider business environment is even more important for succeeding in starting a business. Forbes (2015) provided a list of the Best Countries for Business by grading 144 nations on 11 different factors which encourage entrepreneurship [property rights, innovation, taxes, technology, corruption, freedom (personal, trade, and monetary), red tape, investor protection, and stock market performance]. According to these factors, Denmark was #1, Norway #3, Ireland #4, Sweden #5, the United Kingdom #10, Germany #18, and the United States #22 in 2015. Summing up, based on the reviewed factors and the conducted study, Lithuania's general business environment was not very attractive and was the reason for pre-departure entrepreneurship. The most important non-economic pull factors are a higher possibility for self-realisation and the possibility of self-development. This shows that the sample of analysed self-employed respondents truly represents expat-preneurs, as they left their country of origin for reasons connected with better job opportunities. This could be related to the classical Schumpeter Theory (Girling and Bamwenda, 2018), meaning that pre-departure expat-preneurs pursue better opportunities by establishing themselves in other countries, as does the traditional ethnic migrant. However, research by Stone and Stubbs (2007) on the motivations of 41 British expatriate entrepreneurs managing 71 family businesses in other countries, such as Spain and France, found that, rather than profit, they settled in those countries to improve their lifestyle. According to Schumpeter, all expat-entrepreneurs would have the advantage of possessing the innovative and risk-taking skills that enable them to achieve success. Our results allow us to qualify the assumptions of Schumpeter's Theory and of Stone and Stubbs (2007): in the case of pre-departure entrepreneurs, they would use their skills to take advantage of the best opportunities that exist in other countries, such as a more favourable tax system, less corruption, and better labour market conditions. In the case of transitioned entrepreneurs (already established in the destination country and without the pressure of home country circumstances), entrepreneurship is motivated by improved lifestyle, greater prestige, and self-realisation. Implications for Managerial Practice A deeper understanding of the expat-preneur phenomenon is useful for both the home and host countries.
The results could be useful for Lithuania, as policy makers should consider the main push factors behind moving business abroad, such as political corruption and the tax burden. Possible solutions could be elaborated to prevent other entrepreneurs from expatriating, as well as ways to motivate expat-preneurs to start transnational businesses and expand them into their home countries. This would help to bring financial and human capital into countries that lose valuable employees, such as Lithuania. In addition, countries in Central and Eastern Europe that experience similar flows and tendencies of expatriation might also benefit from the findings of this research. Moreover, according to Vance et al. (2016, p. 212), 'expat-preneurs can further contribute to the long-term economic health and growth of a host country through knowledge transfer.' They contribute not only knowledge and human capital, but also physical capital, and they pay taxes and contribute toward the development of the host country. According to human capital theory (Chorny et al., 2007), expatriates are young and qualified individuals and, in addition, our study revealed that transitioned expat-preneurs are younger than pre-departure ones. Therefore, the decision to move abroad is an investment because an individual increases his or her employment perspectives (Sjaastad, 1962). Not only countries, but also organisations in Lithuania and CEE countries, need to encourage changes in the areas that influence the factors of expatriation. CONCLUSION It should be noted that expatriation is a growing phenomenon in developed countries. People expatriate to where they see better possibilities for employment, self-realisation, and personal development. Often, these expatriates become self-employed and turn into expat-preneurs. The Lithuanian case presented here, studying the similarities and differences of expat-preneurs, contributes to the exploration of the expatriation process and provides a profile of an expat-preneur. Introducing demographic characteristics helps to forecast the type of expat-preneur. Differences are found in gender, age, and education. Pre-departure expat-preneurs are older and less educated than transitioned ones. According to the results, more males are pre-departure expat-preneurs and more females are transitioned expat-preneurs. There are more similarities than differences between the expatriation factors of pre-departure and transitioned expat-preneurs, bridging them more than dividing them. With regard to differences, the results show that pre-departure expat-preneurs are pushed to move abroad by an unfavourable home business environment, while they are pulled by the higher possibility of self-realisation as well as the prestige of the host country. At the same time, transitioned expat-preneurs are pushed more by family reasons, along with too few employment and career opportunities. The present study contributes to the expatriation research field by empirically testing the pre-departure and transitioned expat-preneur phenomenon based on demographic characteristics and the decision to leave the home country. Our results extend the scope of traditional theories of entrepreneurship, such as the cultural approach and the mixed embeddedness theory, as well as Schumpeter's theory, to the case of expat-preneurs.
Limitations and Guidelines for Future Research Due to difficulties in directly accessing expat-preneurs, who were instead drawn as a sample from a general group of expatriates, not all the questions were connected with their entrepreneurial activities; however, these were a very small share of a large number of questions and did not affect the purpose of the research. In addition, given the quantitative nature of the research, the majority of respondents did not indicate what kind of business they were in. Therefore, we propose as a future research line to study the diversity and popularity of business types among Lithuanian expat-preneurs. Furthermore, respondents were from 24 different countries. This geographic spread did not allow a country-by-country analysis, which might be valuable in exploring the impact of the host country on expatriation decision making. However, this also brings some advantages to the study of their demographic characteristics, such as belonging to the same culture. In addition, as indicated in Section "Sample and Procedure," focusing on a small country with high migration rates is convenient for our analysis of push-pull factors and migration. In any case, we would like to extend and replicate this research in the future by including a sample of more countries with similar characteristics, or groups of countries with differences between them. Decisions to locate businesses in the host and/or home countries usually depend on different tax rates, growth prospects, laws, and attitudes toward foreign businesses (Vemuri, 2014). In this case, however, due to the shortage of time to access expat-preneurs, the push-pull factors were analysed as reasons to expatriate but not in the context of the decision to establish a business abroad. We therefore propose the perspective of the destination country as a future line of research. In addition, the time at which transitioned expat-preneurs started their business abroad after moving to the host country was not controlled. Such data would contribute to the exploration of the expatriate entrepreneurship field. One of the main shortcomings was a lack of questions about marital status and children. Without these, it was not possible to analyse the family's impact on respondents' decisions to move and to become entrepreneurs. Gender issues are already partly covered, but they are important for developing this research further, as the majority of our expat-preneurs were females. In addition, the gender issue should be studied further in terms of 'entrepreneurial intentions' (Díaz-García and Jiménez-Moreno, 2010) and 'accidental entrepreneur' (Lewin, 1998) differences because, as previously mentioned, females face three main barriers in becoming entrepreneurs according to Bruni et al. (2004). Moreover, there is still a lack of studies into the extent to which pre-departure and transitioned expat-preneurship in their various forms are influenced by gender. As for the motivations for expatriation, even taking into account the above limitations, it would be interesting to continue this research by delving into the similarities and differences between different ethnic expatriates, and also to expand the sample to other nationalities. For example, in line with a cultural approach, Andrejuk (2017), studying a unique case of EU-15 and EU-12 entrepreneurs in Poland, revealed that cultural differences play an important role in entrepreneurial success.
Also, entrepreneurs from the EU-12 succeeded in their businesses when they fully integrated into the host communities, but expatriates from the United Kingdom and Spain were successful when they employed their cultural heritage. Therefore, more studies on ethnic expat-entrepreneurs would allow the scope of entrepreneurship theories to be extended. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements. Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements.
YOLOv7-Ship: A Lightweight Algorithm for Ship Object Detection in Complex Marine Environments: Accurate ship object detection ensures navigation safety and effective maritime traffic management. Existing ship target detection models often suffer from missed detections in complex marine environments, and it is hard to achieve high accuracy and real-time performance simultaneously. To address these issues, this paper proposes a lightweight ship object detection model called YOLOv7-Ship to perform end-to-end ship detection in complex marine environments. First, we insert the improved "coordinate attention mechanism" (CA-M) at appropriate locations in the backbone of the YOLOv7-Tiny model. Then, the feature extraction capability of the convolution module is enhanced by embedding omnidimensional dynamic convolution (ODconv) into the efficient layer aggregation network (ELAN). Furthermore, content-aware feature reorganization (CARAFE) and SIoU are introduced into the model to improve its convergence speed and detection precision for small targets. Finally, to handle the scarcity of ship data in complex marine environments, we build a ship dataset containing 5100 real ship images. Experimental results show that, compared with the baseline YOLOv7-Tiny model, YOLOv7-Ship improves the mean average precision (mAP) by 2.2% on the self-built dataset. The model is also lightweight, with a detection speed of 75 frames per second, which can meet the need for real-time detection in complex marine environments to a certain extent, highlighting its advantages for the safety of maritime navigation. Introduction With the rapid development of marine equipment, the requirements for accurate and reliable ship object detection are increasing [1]. Maritime enforcement officers can access visual information intuitively through maritime surveillance videos. However, supervisors may be affected by the complex environment and visual fatigue, which can cause them to overlook important information, posing safety hazards to ships traveling at sea. With the help of image vision and neural network algorithms in deep learning, automatic ship detection has become a critical technology in ship applications, playing a significant role in marine monitoring, port management, and safe navigation. It guarantees the orderly anchoring of ships and the smoothness and safety of maritime traffic [2,3].
Ship target detection technology has flourished under the rapid development of artificial intelligence. Deep-learning-based ship target detection has become popular in applications because of its better performance and lower labor cost compared with traditional ship detection technology. The dataset images used in this field mainly include remote sensing, SAR, and visible light images. The detection of remote sensing images is easily affected by factors such as cloud cover and light, and due to the vast amount of data, the time for data preprocessing and image transmission is too long, which creates difficulties for real-time detection. SAR images cannot provide sufficiently rich texture features due to the absence of rich spectra, and it is not easy to accurately provide classification information for multicategory ships [4]. On the other hand, visible light images have the highest resolution and contain rich feature information, such as details and colors, which can intuitively present real human vision. In addition, visible light images can be easily acquired by standard acquisition devices such as cameras. Therefore, how to detect targets faster and more accurately in visible light ship images has become one of the leading research directions. Deep-learning-based object detection models are gradually becoming the main research method in visible ship detection. Object detection algorithms can be classified into two types: single-stage and two-stage algorithms. Representative two-stage algorithms are R-CNN [5], faster R-CNN [6], and mask R-CNN [7]. However, they need to generate many candidate regions, which increases the computation and time complexity. Representative single-stage algorithms are the single-shot multibox detector (SSD) [8], you only look once (YOLO) [9], and RetinaNet [10]; these algorithms do not require the additional generation of candidate regions, simplifying the detection process and providing faster detection speed. The YOLOv7-Tiny model [11] offers high speed and efficiency in real-time target detection. Still, it may be limited by resolution and prone to localization bias when dealing with small targets, and it may omit or misdetect partially occluded ship targets. A lightweight ship target detection model that can effectively identify ship positions in complex and changing environments, help ships avoid collisions, and reduce the risk of accidents is significant for maritime navigation safety. Therefore, we propose the YOLOv7-Ship detection model based on YOLOv7-Tiny in this study. A concise overview of the primary contributions of this study is summarized below: • We introduce the improved CA-M attention module into the YOLOv7 backbone to weaken background feature weights, introduce ODconv in the neck, and propose an improved aggregation network module, OD-ELAN, which efficiently enhances the network's feature extraction capacity for ships in complex scenes with little computational increase. • We use the lightweight CARAFE method in the feature fusion layer, which can utilize learnable interpolation weights to interpolate the low-resolution feature maps, thus reducing the loss of feature information when processing small-target ships. • We adopt SIoU as the loss function, which more accurately captures the orientation-matching information between target bounding boxes and improves the convergence speed of algorithm training.
• We construct a ship target detection dataset containing thousands of accurately labeled visible ship images in complex marine environments. Related Work Compared with general-purpose target detection, ship target detection is more likely to be affected by unfavorable factors such as complex sea areas and bad weather. In addition, ship targets differ significantly in scale, and their visual features are more easily disturbed. Therefore, the practicality and robustness requirements for ship target detection algorithms are more demanding. Liang et al. [12] proposed a ship target detection method based on SRM segmentation and hierarchical line segment feature extraction to address the difficulty of analyzing high-resolution ship images. The method uses hierarchical line segment search updating and merges line segments near the subthreshold to achieve the detection of ship targets. Zhu et al. [13] proposed a method based on neighborhood feature analysis for detecting ships on the sea surface. The method analyzes the mean-variance product characteristic of the neighborhood window, initially performs segmentation to eliminate most of the sea surface background, and then verifies the detected target by ship-related features. Yang et al. [14] proposed a detection algorithm based on saliency segmentation and local binary pattern descriptors combined with ship structure, and used the morphological contrast method to improve the detection accuracy of ship targets in optical satellite images. With the booming development of computer technology, much research has been conducted on deep-learning-based visible ship object detection. Among these, single-stage algorithms have been the mainstream of visible ship object detection. For example, Yang et al. [15] combined the repulsion loss function and soft nonmaximum suppression algorithms with the SSD model, which can effectively reduce the miss rate for tiny ships. Li et al. [16] combined the adaptively spatial feature fusion (ASFF) module with the YOLOv3 algorithm and used the ConvNeXt module to ameliorate the problem of insufficient feature extraction capability when ships occlude each other. Huang et al. [17] incorporated a multiscale weighted feature fusion structure into the YOLOv4 model, improving the detection efficiency for small ships. Zhou et al. [18] used mixed depthwise convolutional kernels to improve the traditional convolution operation and the coordinate attention (CA) mechanism based on YOLOv5, which enables the model to extract more comprehensive ship features while effectively reducing computation. Gao et al. [19] proposed a lightweight model for small infrared ship detection by replacing the backbone of YOLOv5 with that of MobileNetv3, resulting in an 83% reduction in parameters. Wu et al. [20] introduced a multiscale feature fusion module into the YOLOv7 model and established suitable anchor boxes to replace the fixed anchor boxes, effectively improving the ability to capture ship features. Chen et al. [21] combined the convolutional attention mechanism and residual connectivity in the YOLOv7 model, enabling the model to accurately locate ships in dark environments and achieve effective ship classification detection. Lang et al. [22] proposed LSDNet, a mobile ship detection model that introduces partial convolution in YOLOv7-Tiny to reduce redundant computation and memory access, thereby extracting spatial features more efficiently. Xing et al.
[2] integrated the FasterNet module into the backbone of YOLOv8n and employed the lightweight GSConv convolution method instead of the traditional convolution module, which retains detailed information about the ship target. Although there have been many studies on ship detection, their results in complex real-time environments are often unsatisfactory. The above methods often struggle to balance high accuracy and speed in ship detection. On the one hand, ship target scales vary greatly, leading to problems such as multiple overlapping targets, small targets carrying little information, and background interference from land buildings, reefs, and buoys. On the other hand, the marine environment is complex and changeable, with frequent fog, rain, snow, sun glare, and other inclement weather [23]. Especially when image clarity is insufficient, the ability to recognize small targets decreases, resulting in severe false and missed ship detections [24]. The detection models currently studied are mainly large models with high equipment requirements, and there is an urgent need for a lightweight model that can be deployed on low-configuration computing devices to accomplish ship detection in complex scenarios efficiently [3]. YOLOv7 Network Structure Wang et al. optimized the network structure, data augmentation, and activation function to propose the YOLOv7 algorithm in 2022. Its comprehensive performance improves detection efficiency and accuracy compared with the YOLOv4 [25], YOLOv5, YOLOX [26], and YOLOv6 [27] algorithms. The YOLOv7-Tiny algorithm is a lighter version of YOLOv7, which simplifies the E-ELAN module to the ELAN module while maintaining the path aggregation idea. Compared with the original version, it has fewer computations and parameters, which improves detection speed at the expense of some accuracy. In particular, the YOLOv7-Tiny model has good compatibility with shipborne mobile devices and has shown superior performance in detecting small objects, making it well suited for detecting ships, so we chose it as the baseline model for improvement. Figure 1 illustrates the architecture of the YOLOv7-Tiny model. Its four main components are the input, backbone, neck, and output. The backbone mainly consists of CBL layers, ELAN modules, and maximum pooling layers. The ELAN modules are layer aggregation architectures with efficient gradient propagation paths, which can mitigate the gradient vanishing problem. To illustrate the simplification, Figure 1 also shows the E-ELAN module used in YOLOv7. The neck module uses the path aggregation feature pyramid network (PAFPN), which achieves multiscale learning by fusing the semantic information conveyed by feature pyramid networks (FPNs) [28] from the deeper levels with the localization information conveyed by the path aggregation network (PANet) [29] from the shallower levels. The output part uses the IDetect detection head, which divides detection into three scales, covering large, medium, and small targets.
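To make the ELAN layout above concrete, the following is a minimal PyTorch sketch of an ELAN-style aggregation block built from CBL units. The branch count, channel widths, and activation are illustrative assumptions; the actual YOLOv7-Tiny configuration differs in detail.

```python
import torch
import torch.nn as nn

class CBL(nn.Module):
    """Conv + BatchNorm + LeakyReLU, the basic unit named in Figure 1."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ELAN(nn.Module):
    """ELAN-style layer aggregation: parallel branches whose intermediate
    outputs are all concatenated, keeping short and long gradient paths."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_mid = c_in // 2
        self.branch1 = CBL(c_in, c_mid, 1)    # shortcut branch
        self.branch2 = CBL(c_in, c_mid, 1)    # entry to the deep path
        self.block1 = CBL(c_mid, c_mid, 3)
        self.block2 = CBL(c_mid, c_mid, 3)
        self.fuse = CBL(4 * c_mid, c_out, 1)  # aggregate all stages

    def forward(self, x):
        y1 = self.branch1(x)
        y2 = self.branch2(x)
        y3 = self.block1(y2)
        y4 = self.block2(y3)
        return self.fuse(torch.cat([y1, y2, y3, y4], dim=1))

# Example: one ELAN stage applied to a backbone feature map
feats = ELAN(64, 128)(torch.randn(1, 64, 160, 160))
print(feats.shape)  # torch.Size([1, 128, 160, 160])
```

Because every intermediate tensor feeds the final concatenation, gradients can reach early layers through several paths of different lengths, which is the property the text credits with mitigating gradient vanishing.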
OD-ELAN Module Static convolution convolves the input feature map with a constant kernel. However, due to its fixed weights, it cannot adapt to changes in the input data and cannot capture global context information. The dynamic convolution method uses a linear combination of kernel weights to perform an attention-weighted operation on the input data. Unlike traditional convolution, dynamic convolution kernels can automatically resize their receptive field according to the input image information. In addition, the dynamic convolution kernel generates different weights at each position, significantly reducing computational complexity and memory utilization. Current dynamic convolution techniques like CondConv [30] and DyConv [31] concentrate solely on the dynamic nature of the kernel number and weight the convolution kernels to adapt to different inputs only in the two-dimensional plane. Equation (1) defines the dynamic convolution operation:

$$y = (\alpha_{w1} W_1 + \alpha_{w2} W_2 + \cdots + \alpha_{wn} W_n) * x \quad (1)$$

Li et al. [32] proposed a new dynamic convolution, ODconv. ODconv utilizes a multidimensional attention mechanism that adaptively adjusts the convolution kernel weights along four different dimensions of the kernel space, fully utilizing the number of convolutional kernels, the spatial size, and the input and output channel information, with improved multiscale perception and global context information. It is calculated as Equation (2):

$$y = (\alpha_{w1} \odot \alpha_{f1} \odot \alpha_{c1} \odot \alpha_{s1} \odot W_1 + \cdots + \alpha_{wn} \odot \alpha_{fn} \odot \alpha_{cn} \odot \alpha_{sn} \odot W_n) * x \quad (2)$$

where the input and output features are denoted by the symbols x and y, respectively; the symbol $W_i$ represents the i-th convolutional kernel, while $\alpha_{wi}$ serves as the attention scalar for $W_i$; for the convolutional kernel $W_i$, the attentions $\alpha_{ci}$, $\alpha_{fi}$, and $\alpha_{si}$ are assigned along the input channel, output channel, and spatial dimension of the kernel space, respectively; $*$ represents the convolution operation; and $\odot$ represents multiplication along the corresponding dimension. ODconv first squeezes the feature x into a feature vector of the same length as the input channel using a channel-wise global average pooling (GAP) operation. Next, the squeezed feature vector is mapped to a low-dimensional space through a fully connected (FC) layer and a rectified linear unit (ReLU). Each of the four head branches has an FC layer and a sigmoid or softmax function that generates the attentions $\alpha_{wi}$, $\alpha_{ci}$, $\alpha_{fi}$, and $\alpha_{si}$, respectively. Figure 2 displays the ODconv structure. The ELAN network module is used in the YOLOv7-Tiny model to extract target features. The structure consists of convolutional layers, but their small number makes it difficult to extract deep target features, and there may be ineffective feature fusion or redundancy. Hence, the module cannot sufficiently extract features from small or low-definition ship targets in a real complex environment. For this reason, we introduce ODconv into the ELAN module and construct the improved OD-ELAN module in the neck part. This module can effectively enhance the network's ability to mine deep feature information of ship targets while reducing computational complexity.
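The following is a compact PyTorch sketch of the ODconv idea in Equation (2): a squeezed descriptor drives four attention heads whose outputs modulate n candidate kernels along the kernel-number, spatial, input-channel, and output-channel dimensions. It is a simplified illustration under assumed head designs (plain sigmoid/softmax heads, no temperature schedule), not the reference implementation of [32].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ODConv2d(nn.Module):
    """Simplified omnidimensional dynamic convolution (see Eq. (2))."""
    def __init__(self, c_in, c_out, k=3, n_kernels=4, reduction=4):
        super().__init__()
        self.k, self.c_out, self.n = k, c_out, n_kernels
        self.weight = nn.Parameter(torch.randn(n_kernels, c_out, c_in, k, k) * 0.02)
        hidden = max(c_in // reduction, 4)
        self.fc = nn.Sequential(nn.Linear(c_in, hidden), nn.ReLU(inplace=True))
        # Four attention heads: kernel number, spatial, input ch., output ch.
        self.attn_w = nn.Linear(hidden, n_kernels)
        self.attn_s = nn.Linear(hidden, k * k)
        self.attn_c = nn.Linear(hidden, c_in)
        self.attn_f = nn.Linear(hidden, c_out)

    def forward(self, x):
        b, c, h, w = x.shape
        z = self.fc(x.mean(dim=(2, 3)))                       # GAP -> squeeze
        a_w = torch.softmax(self.attn_w(z), dim=1)            # alpha_w, (b, n)
        a_s = torch.sigmoid(self.attn_s(z)).view(b, 1, 1, 1, self.k, self.k)
        a_c = torch.sigmoid(self.attn_c(z)).view(b, 1, 1, c, 1, 1)
        a_f = torch.sigmoid(self.attn_f(z)).view(b, 1, self.c_out, 1, 1, 1)
        # Weighted sum over the n kernels, modulated in every dimension
        wgt = self.weight.unsqueeze(0) * a_s * a_c * a_f      # (b, n, co, ci, k, k)
        wgt = (wgt * a_w.view(b, self.n, 1, 1, 1, 1)).sum(1)  # (b, co, ci, k, k)
        # Grouped-conv trick: one aggregated kernel set per sample in the batch
        x = x.reshape(1, b * c, h, w)
        wgt = wgt.reshape(b * self.c_out, c, self.k, self.k)
        y = F.conv2d(x, wgt, padding=self.k // 2, groups=b)
        return y.reshape(b, self.c_out, h, w)

out = ODConv2d(32, 64)(torch.randn(2, 32, 40, 40))
print(out.shape)  # torch.Size([2, 64, 40, 40])
```

Within an OD-ELAN block, a module of this kind would replace one or more of the static 3 × 3 CBL convolutions in the deep path of the ELAN sketch shown earlier.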
CA-M Attention Mechanism To optimize the model's emphasis on the ship's salient edge feature regions, an attention mechanism needs to be introduced into the network to suppress confusing interference, such as wakes and partial occlusion. Traditional channel attention mechanisms, such as squeeze-and-excitation networks (SENets) [33], only consider the importance level between feature map channels and ignore the target's location information. The convolutional block attention module (CBAM) [34], which adds a spatial attention mechanism, uses sequential channel and spatial attention operations. However, it ignores the interrelationship between channel and space and loses cross-dimensional information. The CA mechanism can comprehensively analyze the interrelationship between feature map channels and spatial information [35]. To further enhance the performance of the attention mechanism, this paper proposes the improved CA-M mechanism. Coordinate attention decomposes the channel relationship into one-dimensional features containing precise positional information, and the fusion of features along both spatial directions enables the model to concentrate on an extensive range of positional features. Since ships have more significant detail features, such as flat hulls, slender masts, straight chimneys, and hull markings, the global average pooling in the original coordinate attention cannot retain the relative differences between the original features and may blur certain detailed feature information, while global adaptive maximum pooling takes the maximum value in the input image region as the output, further increasing the network's sensitivity to critical information. Consequently, we use an adaptive global maximum pooling layer in the coordinate information embedding so that the model can better extract salient features of ships in complex scenes. Figure 3 shows the structure of the CA-M mechanism. The first step decomposes the global adaptive maximum pooling into two separate 1D feature encoding operations. Subsequently, two spatially scoped pooling kernels are used to encode each channel along the horizontal and vertical coordinates, respectively. Consequently, the output of the c-th channel with the vertical dimension h can be mathematically expressed as

$$z_c^h(h) = \max_{0 \le i < W} x_c(h, i) \quad (3)$$

The output of the c-th channel with the horizontal dimension w can be formulated as

$$z_c^w(w) = \max_{0 \le j < H} x_c(j, w) \quad (4)$$

The above encoding operations enable the CA-M mechanism to obtain long-range dependency information along one spatial direction and positional information along the other. Then, the two aggregated feature maps are concatenated, followed by a transform operation using a shared 1 × 1 convolutional transform function. Subsequently, the intermediate feature map is split into two distinct tensors along the spatial dimension. These two tensors are then converted into tensors with the same channel number as the input. The expanded results $g^h$ and $g^w$ are utilized as attention weights. Ultimately, the output of the coordinate attention block Y can be denoted as

$$y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j) \quad (5)$$

Compared with the original CA mechanism, adding the CA-M mechanism to the backbone improves mAP@0.5 and mAP@0.5:0.95 by 0.2% and 0.3% in ship detection, respectively. Compared with other attention mechanisms, the CA-M mechanism can focus more on the areas with high feature weights during inference and adds only a small computational overhead. Its specific results are shown later in the experimental section.
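A minimal PyTorch sketch of the CA-M block following Equations (3)-(5) is given below: directional adaptive max pooling replaces the average pooling of the original coordinate attention. The reduction ratio and activation are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CAM(nn.Module):
    """Coordinate-attention sketch with the max-pooling modification:
    1D adaptive max pooling along H and W replaces the average pooling
    of the original CA block."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.pool_h = nn.AdaptiveMaxPool2d((None, 1))   # (b, c, h, 1)
        self.pool_w = nn.AdaptiveMaxPool2d((1, None))   # (b, c, 1, w)
        self.conv1 = nn.Conv2d(channels, mid, 1)        # shared 1x1 transform
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Eqs. (3)-(4): directional max pooling keeps per-row/column peaks
        zh = self.pool_h(x)                              # (b, c, h, 1)
        zw = self.pool_w(x).permute(0, 1, 3, 2)          # (b, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([zh, zw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        gh = torch.sigmoid(self.conv_h(yh))              # (b, c, h, 1)
        gw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        # Eq. (5): reweight every position by its row and column attention
        return x * gh * gw

out = CAM(64)(torch.randn(1, 64, 80, 80))
print(out.shape)  # torch.Size([1, 64, 80, 80])
```

The broadcasted product in the last line realizes Equation (5): each position (i, j) is scaled by the attention of its row and its column, so a sharp hull edge detected anywhere along a row or column raises the weight of the whole coordinate.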
CARAFE Upsampler The CARAFE upsampler is an efficient and lightweight image upsampling algorithm proposed by Wang et al. [36]. It avoids the problem of nearest-neighbor interpolation weakening the feature information of small targets, while integrating more feature information over large receptive fields at modest computational cost. Small-target ship detection is susceptible to complex environmental interference. Thus, we use CARAFE in the neck module, replacing the original nearest-neighbor interpolation algorithm, to better extract small-target features. Figure 4 shows the CARAFE structure. First is the kernel prediction module, where the feature map with input size (H, W, C) is compressed by a 1 × 1 convolution, and then a convolution layer with kernel size $k_{encoder} = k_{up} - 2$ is used to predict the upsampling kernel, generating features of shape $(\sigma H, \sigma W, k_{up}^2)$, where $\sigma$ and $k_{up}$ indicate the upsampling ratio and the reassembly kernel size, respectively. Subsequently, the channels undergo spatial dimension expansion, and the softmax function is employed to normalize the upsampling kernel. Next, in the content-aware reassembly process, every position in the output feature map is mapped back into the input feature map, and the $k_{up} \times k_{up}$ feature region centered at that position is combined with the predicted upsampling kernel at that position via a dot product. Finally, the new features (shape = $(\sigma H, \sigma W, C)$) are obtained by repeating the above operations. SIoU Loss Function The YOLOv7-Tiny model contains three loss functions: classification loss, confidence loss, and localization loss. A practical bounding box loss function is essential for target localization. By default, YOLOv7-Tiny employs the complete intersection over union (CIoU) [37] localization loss function, which considers three distinct factors. Its calculation formula is displayed below:

$$L_{CIoU} = 1 - IoU + \frac{\rho^2(B, B^{gt})}{c^2} + \alpha v \quad (6)$$

where B and $B^{gt}$ represent the centroids of the prediction box and the ground truth box, respectively. The Euclidean distance between B and $B^{gt}$ is denoted by $\rho(B, B^{gt})$. Additionally, c indicates the diagonal length of the minimum outer rectangle necessary to encompass both boxes. The consistency of the aspect ratio v is calculated as

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2 \quad (7)$$

where $w^{gt}/h^{gt}$ and $w/h$ denote the aspect ratios of the ground truth box and the prediction box, respectively. $\alpha$ is the weight coefficient, which is calculated by the formula

$$\alpha = \frac{v}{(1 - IoU) + v} \quad (8)$$

The CIoU loss function introduces $w^{gt}/h^{gt}$ and $w/h$ into the loss value calculation. It adds a penalty term, which effectively alleviates the degradation problem of the GIoU [38] loss function and, as with DIoU [39], still gives the bounding box a moving direction when the prediction box does not overlap with the ground truth box. However, the CIoU loss function does not consider the angular mismatch between the ground truth box and the prediction box. It can be seen from Equation (7) that when $w^{gt}/h^{gt}$ and $w/h$ are equal, v becomes 0, at which point the penalty term fails, leading to large fluctuations in training convergence and a lack of precision in the prediction box.
Gevorgyan [40] proposed the SIoU loss, which investigates the orientation-matching problem between the prediction box and the ground truth box in addition to considering the distance between the box centers, the aspect ratio, and the overlap area; it adds an angle cost term. In this paper, we adopt this efficient bounding box regression loss function, SIoU, which effectively increases the total degrees of freedom of the loss and penalty terms, further improving the training convergence of the model so that the target box has better regression localization accuracy [41]. The formulas of SIoU are shown below:

$$\Lambda = 1 - 2\sin^2\left(\arcsin\left(\frac{C_h}{\sigma}\right) - \frac{\pi}{4}\right) \quad (9)$$

$$\Delta = \sum_{t=x,y}\left(1 - e^{-(2-\Lambda)\rho_t}\right) \quad (10)$$

$$\Omega = \sum_{t=w,h}\left(1 - e^{-\omega_t}\right)^{\theta} \quad (11)$$

In Equations (9)-(11), $\Lambda$ is the angle cost, $\Delta$ is the distance cost considering $\Lambda$, $\Omega$ is the shape cost, and $\theta$ determines the level of concern for the shape loss. As shown in Figure 5, $C_h$ and $\sigma$ are the height difference and geometric distance between the centers of B and $B^{gt}$, respectively. The final bounding box loss SIoU is defined as

$$L_{SIoU} = 1 - IoU + \frac{\Delta + \Omega}{2} \quad (12)$$
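For reference, the following is a minimal PyTorch sketch of the SIoU loss following Equations (9)-(12), assuming boxes in (x1, y1, x2, y2) format. It is an illustrative implementation rather than the authors' training code; the distance terms are normalized by the enclosing box, as in Gevorgyan's formulation.

```python
import math
import torch

def siou_loss(pred, target, theta=4.0, eps=1e-7):
    """SIoU bounding-box loss sketch; pred and target have shape (N, 4)."""
    # Intersection and IoU
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    iou = inter / (w1 * h1 + w2 * h2 - inter + eps)

    # Center offsets; sigma is the geometric distance between centers
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    s_cw, s_ch = cx2 - cx1, cy2 - cy1
    sigma = torch.sqrt(s_cw ** 2 + s_ch ** 2) + eps

    # Angle cost, Eq. (9)
    sin_alpha = (torch.abs(s_ch) / sigma).clamp(0, 1)
    angle = 1 - 2 * torch.sin(torch.arcsin(sin_alpha) - math.pi / 4) ** 2

    # Distance cost, Eq. (10), normalized by the enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    rho_x, rho_y = (s_cw / (cw + eps)) ** 2, (s_ch / (ch + eps)) ** 2
    dist = (1 - torch.exp(-(2 - angle) * rho_x)) + (1 - torch.exp(-(2 - angle) * rho_y))

    # Shape cost, Eq. (11)
    omega_w = torch.abs(w1 - w2) / torch.max(w1, w2).clamp(min=eps)
    omega_h = torch.abs(h1 - h2) / torch.max(h1, h2).clamp(min=eps)
    shape = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta

    # Final loss, Eq. (12)
    return 1 - iou + (dist + shape) / 2
```

Note how the angle cost modulates the distance cost: when the two centers are nearly axis-aligned, the exponent (2 - Λ) grows and the distance penalty dominates, which is the orientation-matching behavior the text attributes to SIoU.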
The training and validation process of the YOLOv7-Ship model is relatively simple, but the hyperparameters must be carefully tuned to ensure the generalization performance on the dataset.The YOLOv7-Ship model is designed with cross-platform support in mind and has good portability.It can run on different operating systems and supports a variety of hardware accelerators, such as GPU and CPU.However, the YOLOv7-Ship model relies on more powerful hardware and may suffer from some performance limitations on low-end hardware. Experiments In this section, we present the dataset construction and the experimental design part.Specifically, we first introduce a new self-constructed ship dataset designed for studying the ship target detection problem in complex scenarios.Then, we report our experimental platform setup and training details and provide comprehensive evaluation metrics for the target detection task. Data Collection and Processing The complex environment around ships in the natural environment, such as land buildings, reefs, buoys, and other background interference information, is prone to cause interference to the target detection of ships.Existing public ship datasets, such as the SeaShips dataset proposed by Shao et al. [42], contain 7000 images of ship detection in six categories.However, the majority of the photos in this dataset are a collection of shots taken of the same ship at nearby moments, and the data scene is single and little affected by interference information.Therefore, we constructed a ship dataset containing more complex marine scenes to enhance the algorithm's detection accuracy and generalization ability under interference conditions such as bad weather, multiple occlusions, and small targets. The images in this dataset were self-collected by the team at sea using a visible light camera, supplemented by adding some images from the publicly available ship dataset.In this paper, data cleaning was performed on the collected images.After deleting the damaged or blurred images, the images that meet the requirements were used as the labeled dataset, which contains diversity-rich environments, such as a harbor with heavy traffic, a fishery area with dense ships, and a mixed traffic scene between ships and shore.Most images also have different climatic interferences, such as solid illumination, rain, and snow, for 5100 original images.Selected images of some sample datasets are shown in Figure 7.We categorized the target labels into six groups based on the Pascal VOC dataset format: sailboat, island reef, container ship, linear, and other ships.The sample images were labeled sequentially using the LabelImg software to generate XML files, which were then converted to YOLO format.For experimental purposes, we arbitrarily partitioned the dataset into three sets: training, validation, and test, in the following proportion: 8:1:1.Table 1 shows the number of images in the dataset under different weather and light conditions.This dataset uses the Microsoft COCO dataset's method of defining scales.Table 2 shows the definition of different object scales.Table 3 shows the number of objects labeled as small, medium, and large for each category in the dataset.The total number of small objects is 7737 (38.2%), the total number of medium objects is 6975 (34.5%), and the total number of large objects is 5528 (27.3%).Figure 8 illustrates the distribution of the dataset's bounding boxes, including their center points and sizes.Figure 8a depicts the normalized bounding boxes' center 
coordinate distribution. Figure 8b illustrates the proportions of the labeled boxes' width and height relative to the original image. It is evident that the overwhelming majority of our dataset comprises small objects, and most targets are concentrated in the central region, which indicates that the dataset is well suited for detecting small and multiscale ship targets. To avoid overfitting, we applied basic augmentation methods such as random brightness, horizontal flipping, and image cropping to the dataset images. We also used the mosaic data enhancement method, which not only enriches and expands the original detection dataset but also reduces GPU video memory occupancy. The input to the model after this series of operations is shown in Figure 9. The numbers 1 to 6 represent liners, container ships, bulk carriers, islanders, sailing ships, and other ships, respectively. In addition to the data enhancement mentioned above, this paper employs standard methods such as early stopping, dropout, and batch normalization during the training phase to prevent model overfitting. Experimental Environment All experiments in this article were performed on the Windows 10 system. The system utilized an Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz and an NVIDIA GeForce RTX 2060. The model was built with the programming language Python 3.9 and the deep learning framework PyTorch 2.0.0. The specific configuration of the experimental platform is outlined in Table 4. The experiments used an input image size of 640 × 640 pixels and 200 training epochs. The momentum was set to 0.8, and the initial learning rate was set to 0.01. Detailed experimental parameters for model training are shown in Table 5. Evaluation Metrics To evaluate the quality of the ship target recognition and detection results more comprehensively, the precision (P), recall (R), mean average precision (mAP), and frames per second (FPS) evaluation metrics are used in this paper, as shown in Equations (13)-(16):

$$P = \frac{TP}{TP + FP} \quad (13)$$

$$R = \frac{TP}{TP + FN} \quad (14)$$

$$mAP = \frac{1}{m}\sum_{i=1}^{m} AP_i \quad (15)$$

$$FPS = \frac{n}{T} \quad (16)$$

In Equations (13)-(16), the variables TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative samples, respectively. AP is the average recognition accuracy for a single category, defined as the area under the P-R curve; m represents the number of detected categories; and mAP@0.5 indicates the average precision across different objects when the intersection over union (IoU) threshold is set to 0.5. AP_S, AP_M, and AP_L are used to evaluate the detection performance of the model for small, medium, and large targets, respectively. n represents the number of images the model processes, while T denotes the time consumed. In addition, we use the number of parameters (Params) and floating point operations (FLOPs) to measure the computational space and time complexity of the model.
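As a worked example of Equations (13)-(16), the sketch below computes precision and recall from detection counts and the per-class AP as the area under a P-R curve. `average_precision` is a hypothetical helper using all-point interpolation, not the exact protocol of the evaluation code used in the experiments.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Precision and recall from detection counts (Eqs. 13-14)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def average_precision(recalls, precisions):
    """Area under the P-R curve for one class (all-point interpolation).
    `recalls` must be sorted in ascending order."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make precision monotonically non-increasing from right to left
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# mAP (Eq. 15) is the mean of per-class APs; FPS (Eq. 16) is n images / T seconds.
aps = [0.81, 0.78, 0.75, 0.84, 0.79, 0.86]   # illustrative per-class APs
print("mAP:", sum(aps) / len(aps))
```

The per-class AP values above are illustrative placeholders only; the actual values for the six ship categories come from the experiments reported in the following sections.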
Effectiveness of the CA-M Module Distinct parts of the YOLOv7 network play different roles in extracting the input features. To determine the optimal placement of the attention mechanism, we inserted the CA-M module before the backbone's three feature layers in different combinations. Figure 10 illustrates the different positions at which the CA-M module was inserted. The results of the experiments we subsequently performed on the models using these six configurations are presented in Table 6. The findings in Table 6 indicate that inserting CA-M does not enhance the network's detection performance at every position. Model (c) introduces the CA-M module alone after the last ELAN in the backbone, and mAP@0.5 decreased to 78.1%, the worst result; Model (f) introduces CA-M before all three feature layers, and mAP@0.5 and mAP@0.5:0.95 improved to 78.9% and 53.9%, respectively, the highest accuracy improvement. From this, we can conclude that using the CA-M module after the efficient aggregation network module during the initial stage of feature extraction can capture the information of the region of interest, weaken the interference of pseudo-target feature information, and improve the network's ability to detect the detailed features of ship targets. CA-M Position To assess the efficacy of introducing the CA-M module into the baseline model, we compared it with six different attention mechanisms introduced at the same location, namely, SE, CBAM, ECA [43], GAM [44], SimAM [45], and CA; their outcomes are displayed in Table 7. Table 7 shows that the effects of introducing the various attention mechanisms into the backbone differ. SE, CBAM, and ECA are all channel attention mechanisms, and their mAP@0.5:0.95 values are reduced by 1.9%, 1.6%, and 1.3%, respectively, compared with the baseline model. The GAM global attention mechanism increases the network's sensitivity to local noisy information and tends to cause overfitting; its mAP@0.5 is reduced by up to 1.2%. SimAM is a 3D attention mechanism whose boosting effect is close to that of our proposed CA-M mechanism, improving mAP@0.5 and mAP@0.5:0.95 by 0.4% and 0.1%, respectively. After introducing the proposed CA-M into the backbone, mAP@0.5 and mAP@0.5:0.95 were boosted by 0.6% and 0.4%, respectively. Compared with introducing the original CA, the mAP@0.5 and mAP@0.5:0.95 of CA-M are improved by 0.2% and 0.3%, respectively. Therefore, we introduce the CA-M module to improve object detection performance.
Comparative Analysis of Loss Functions

To determine whether the improved loss function strengthens the model's performance and accelerates convergence, we conducted comparative experiments on the EIoU [46], CIoU, DIoU, GIoU, and SIoU loss functions using YOLOv7-Ship as the baseline model. Figure 11 shows their comparative effects. Analyzing Figure 11, we found that the model with the SIoU loss function reduces its loss value fastest during training. To ensure the integrity of the comparison experiment, we present the analysis outcomes in Table 8, which comprises the loss value and mAP at the 200th epoch. Compared with the CIoU loss function, SIoU decreases the bounding-box loss by 0.00052 and improves mAP by 0.2%. SIoU achieves the lowest loss value of 0.04229 and the highest mAP value of 80.5%, showing optimal performance among the compared loss functions. The results also show that, compared with the YOLOv7-Tiny model, the YOLOv7-Ship model converges faster in training and accurately captures the orientation-matching information between target bounding boxes.

Ablation Experiment

To assess the efficacy of our proposed enhancements in optimizing ship detection performance, we performed a sequence of ablation experiments on our self-constructed ship dataset using YOLOv7-Tiny as the baseline model. In the table, each improvement is denoted by "✓" if it was implemented and "✕" if it was not. The data of the ablation experiments are shown in Table 9. According to Table 9, the first group of experiments used the original YOLOv7-Tiny model, with mAP@0.5 and mAP@0.5:0.95 of 78.3% and 53.5%, respectively. In the second group of experiments, we introduced the improved CA-M attention mechanism into the backbone; compared with the baseline model, mAP@0.5 increased by 0.6%. These results indicate that the improved network structure enhances the model's capability to extract pertinent target depth features.

Subsequently, in the third group of experiments, we introduced ODconv and replaced the efficient aggregation module in the neck with the OD-ELAN module, which increased mAP@0.5 by a further 0.7% while reducing computation by 0.4 GFLOPs. Next, in the fourth group of experiments, we replaced the upsampling method with CARAFE, resulting in another 0.7% increase in mAP@0.5, suggesting that the CARAFE upsampling improvement captures the semantic information of images more accurately. Finally, in the fifth group we changed the loss function to SIoU; mAP@0.5 increased by 0.2%, while the detection speed rose to 75 frames per second.

The comprehensive performance results of the YOLOv7-Tiny and YOLOv7-Ship models are shown in Table 10. The mAP@0.5 and mAP@0.5:0.95 of the YOLOv7-Ship model are 80.5% and 55.4%, improvements of 2.2% and 1.9%, respectively, over the baseline model. In particular, the AP_S value of 37.7% for small-target detection is improved by 2.5% compared with the baseline model, yielding more accurate identification of small-sized targets. However, the AP_L value decreased by 0.8%. Additionally, the detection speed of YOLOv7-Ship is maintained at 75 FPS, and computation is reduced by 0.3 GFLOPs.
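Returning to the loss-function comparison above, the sketch below implements the GIoU variant, the structurally simplest of the five compared losses; SIoU additionally introduces angle, distance, and shape cost terms (cf. Figure 5) that are omitted here, so this is a reference point rather than the paper's final loss.

    import torch

    def giou_loss(pred, target, eps=1e-7):
        """GIoU loss for (x1, y1, x2, y2) boxes. SIoU adds angle, distance,
        and shape costs on top of the IoU term, which are not shown."""
        # Intersection rectangle
        x1 = torch.max(pred[..., 0], target[..., 0])
        y1 = torch.max(pred[..., 1], target[..., 1])
        x2 = torch.min(pred[..., 2], target[..., 2])
        y2 = torch.min(pred[..., 3], target[..., 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        # Union of the two boxes
        area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
        area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
        union = area_p + area_t - inter + eps
        iou = inter / union
        # Smallest enclosing box, used by the GIoU penalty term
        cx1 = torch.min(pred[..., 0], target[..., 0])
        cy1 = torch.min(pred[..., 1], target[..., 1])
        cx2 = torch.max(pred[..., 2], target[..., 2])
        cy2 = torch.max(pred[..., 3], target[..., 3])
        c_area = (cx2 - cx1) * (cy2 - cy1) + eps
        giou = iou - (c_area - union) / c_area
        return 1.0 - giou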
In conclusion, the YOLOv7-Ship model greatly improves the detection accuracy of small ship objects in complex marine scenarios while meeting the real-time detection needs of embedded marine equipment and the requirements of a lightweight model. However, the detection performance on large targets at different scales still needs further improvement.

Comparison Experiment

In this section, we select a two-stage object detection model and other mainstream YOLO-series models for comparative experiments on the self-constructed ship dataset. The models comprise Faster R-CNN, SSD, YOLOv3 [47], YOLOv4, YOLOv5s, YOLOv5m, YOLOv7, YOLOv7-Tiny, and YOLOv8. Table 11 displays the outcomes of the experiments, which were all conducted in an identical training environment. Our proposed model outperforms these widely used models in ship target detection. Faster R-CNN, SSD, and YOLOv4 have relatively low detection accuracies, with mAP@0.5 values of 74.9%, 72.2%, and 74.8%, respectively, because their fixed anchor-box parameters cannot fully adapt to multiscale ship targets. The mAP@0.5 of the YOLOv7-Ship model is improved by 5.4% and 3.2% compared with YOLOv3 and YOLOv5s, respectively, while maintaining similar detection speeds and reducing computation by 6.5 and 1 GFLOPs, respectively. Although YOLOv7-Tiny and YOLOv8 have faster detection speeds (77 and 120 FPS), their mAP@0.5 values remain comparatively modest at 78.3% and 78.5%, respectively. YOLOv7-Ship achieves the highest AP_S, at 37.7%, while preserving real-time performance, demonstrating superior overall performance in ship target detection within complex environments compared with similar algorithms.

Overfitting is a problem to be aware of in deep learning and may degrade a model's generalization ability. In the experimental preparation phase above, we adopted several regularization methods. From the experimental results, we found that the training and testing errors of the YOLOv7-Ship model decreased in step as the number of training rounds increased, so the model did not suffer from overfitting during training.
Analysis of the Detection Results

In this section, we employ Grad-CAM visualization to assess the performance of the YOLOv7-Ship model in ship detection [48]. Grad-CAM is a technique for visualizing each region's degree of contribution to the prediction results. We randomly chose three images from the ship dataset and used Grad-CAM to visualize the output features of the YOLOv7-Tiny and YOLOv7-Ship models. The computed feature heat maps of the corresponding hidden layers are shown in Figure 12. From the heat maps, we can intuitively observe that the YOLOv7-Ship model focuses on the critical features of the ship, especially the parts belonging to small targets, which demonstrates the effectiveness of our approach in enhancing the precision and accuracy of ship object detection.

Qualitative Analysis of Detection Effects

In this section, we compare the YOLOv7-Ship model with other models on the self-constructed ship dataset. Images from three different scenarios were selected for the experiments, from top to bottom: partially occluded multiship detection, small-ship detection, and harbor ship detection. The visual detection results of YOLOv5s, YOLOv5m, YOLOv7-Tiny, YOLOv8, and YOLOv7-Ship are presented in Figure 13. Figure 13 illustrates that, under complex conditions, most models yield unsatisfactory ship detection results due to frequent missed detections and misdetections. YOLOv5s cannot recognize an obscured ship target during partial occlusion, resulting in missed detection. Small-target detection is also a challenge, as YOLOv5s, YOLOv5m, and YOLOv8 fail to detect small target ships owing to their limited extractable features, which are susceptible to interference from waves and water reflections. In intricate harbor settings, where ships often exhibit multiscale dimensions alongside numerous pseudo-targets and background interference, the detection difficulty is amplified. However, the YOLOv7-Ship model excels at detecting partially occluded ship targets and accurately identifies small ship targets even at considerable distances. In summary, the YOLOv7-Ship model detects ship targets more accurately and reduces the miss rate in complex environments characterized by multiscale dimensions, high noise levels, and small targets.
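The Grad-CAM heat maps used above can be reproduced with a short hook-based routine such as the PyTorch sketch below; it assumes a model whose forward pass returns per-class scores, so for a detector one would first select a scalar target score from the detection output.

    import torch

    def grad_cam(model, layer, image, class_idx):
        """Minimal Grad-CAM: weight the chosen layer's activations by the
        spatial mean of their gradients w.r.t. the target class score."""
        feats, grads = {}, {}
        h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
        h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
        score = model(image)[0, class_idx]   # assumes (batch, classes) output
        model.zero_grad()
        score.backward()
        h1.remove(); h2.remove()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
        cam = torch.relu((weights * feats["a"]).sum(dim=1))   # weighted activations
        cam = cam / (cam.max() + 1e-7)                        # normalise to [0, 1]
        return cam  # upsample to image size before overlaying on the input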
Conclusions and Discussion

This paper proposes an improved YOLOv7-Ship model that can accurately detect ship targets in complex marine environments. First, we introduced the improved CA-M attention mechanism after each aggregated network module of the backbone, which weakens the interference of irrelevant background noise. Next, we introduced the OD-ELAN module in the neck, which significantly improves the mining of spatial and depth information for detected targets. Then, we changed the upsampling method to the CARAFE algorithm, which enlarges the network's receptive field and retains more detailed semantic information. Subsequently, we adopted SIoU in the loss function, further accelerating the training convergence of the YOLOv7-Ship model. In addition, we constructed a ship dataset for complex environments, aiming to promote research and development in maritime safety. Experimental results showed that, on the self-built ship dataset, the YOLOv7-Ship model improves the average detection accuracy by 2.2% over the baseline model while adding only slightly to computation and parameters. As a result, the YOLOv7-Ship model provides better detection accuracy for multiscale, partially obscured, and small vessel targets, offering mariners more accurate and comprehensive detection information.

The model proposed in this paper achieves a preliminary capability for detecting ships in complex marine environments, but the following deficiencies remain:
1. The research in this paper is limited to the algorithm level; the algorithm has not yet been deployed on an embedded computing platform.
2. The category labels in the self-constructed dataset are imbalanced: the numbers of liner and container-ship labels are small, leading to insufficient feature extraction and model training for these two categories. In addition, the virtual part of the dataset may not fully simulate real scenarios, which may degrade the model's performance in real applications.
3. Although the YOLOv7-Ship model improves accuracy on small targets, ships are still missed in foggy and dark scenarios.
4. Compared with the latest YOLOv8 model, the network structure of the YOLOv7-Ship model is more complex and requires more computational resources in the inference stage.

Figure 2. Schematic diagram of the structure of ODconv.
Figure 3. Structure of the improved coordinate attention (CA-M) mechanism.
Figure 4. The overall structure of CARAFE.
Figure 5. Schematic diagram for calculating the angular cost contribution in the loss function.
Figure 8. Dataset visualization and analysis results: (a) distribution of dataset object centroid locations and (b) distribution of dataset object sizes.
Figure 10. Different positions for inserting the CA-M module in the backbone of YOLOv7-Tiny.
Figure 11. Comparison plot of loss function curves of the model validation set.
Figure 12. Visualization results of the Grad-CAM feature heat map: (a) original image, (b) feature heat map of the YOLOv7-Tiny model, and (c) feature heat map of the YOLOv7-Ship model.
Figure 13. Visual comparison of the YOLOv7-Ship model with the YOLO family.
Table 1. Distribution of ship images in the dataset.
Table 2. The definitions of small, medium, and large objects in the COCO dataset.
Table 3. Statistics of the number of small, medium, and large objects of the six label types.
Table 4. The configuration information of the experimental platform.
Table 5. Experimental parameters of model training.
Table 6. Detection effects of inserting the CA-M module at different positions.
Table 7. Comparison experiments of different attention modules.
Table 8. Loss values and mAP for different loss functions.
Table 9. Ablation experiments of the proposed improvements.
Table 10. Performance evaluation results of YOLOv7-Tiny and YOLOv7-Ship.
Table 11. Comparison experiments between YOLOv7-Ship and other object detection algorithms.
9,611
2024-01-20T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Automatic and Robust Segmentation of Multiple Sclerosis Lesions with Convolutional Neural Networks

The diagnosis of multiple sclerosis (MS) is based on accurate detection of lesions on magnetic resonance imaging (MRI), which also provides ongoing essential information about the progression and status of the disease. Manual detection of lesions is very time consuming and lacks accuracy; most lesions are difficult to detect manually, especially within the grey matter. This paper proposes a novel and fully automated convolutional neural network (CNN) approach to segment lesions. The proposed system consists of two 2D patch-wise CNNs which segment lesions more accurately and robustly: the first CNN is implemented to segment lesions accurately, and the second aims to reduce the false positives to increase efficiency. The system consists of two parallel convolutional pathways, where one pathway is concatenated to the second and, at the end, the fully connected layer is replaced with CNN layers. Three routine MRI sequences, T1-w, T2-w, and FLAIR, are used as input to the CNN; FLAIR is used for segmentation because most lesions appear as bright regions on MRI, while T1-w and T2-w are used to reduce MRI artifacts. We evaluated the proposed system on two publicly available challenge datasets from MICCAI and ISBI. Quantitative and qualitative evaluation was performed with various metrics, such as false positive rate (FPR), true positive rate (TPR), and dice similarity, and compared with current state-of-the-art methods. The proposed method shows consistently higher precision and sensitivity than other methods and can accurately and robustly segment MS lesions from images produced by different MRI scanners, with a precision of up to 90%.

Introduction

Multiple sclerosis (MS) is a common inflammatory neurological condition affecting the central nervous system (brain and spinal cord). It results in demyelination and axonal degeneration, predominantly in the white matter of the brain [1]. Symptoms vary greatly from patient to patient; common symptoms include weakness, balance issues, depression, fatigue, and visual impairment. Different symptoms arise depending on the location of the inflammatory foci, called plaques. These plaques can be detected by magnetic resonance imaging (MRI) but not computed tomography (CT). MRI is not only used for diagnosis but is also considered the best tool to monitor disease progression, and yearly MRI is nowadays considered standard of care. The detection rate of new lesions varies between radiologists from 64% to 82% [2]. As current MRI technologies only detect 30% of actual pathology [3], current research focuses on improving MRI techniques and analysis to detect lesions more accurately. Radiologists use T1-w, T2-w, and FLAIR sequences to detect inflammatory lesions and axonal damage, but sensitivity depends on slice thickness, and manual reading is laborious and time-consuming. T1-w, T2-w, and FLAIR are different MRI pulse sequences acquired with different relaxation times. Automated segmentation and detection algorithms can overcome these issues. Significant advances have been made in medical image segmentation with traditional machine learning techniques [4,5], and over the last few years advanced deep learning techniques have brought significant progress in segmentation, detection, and recognition tasks. Several methods have been described for the automatic detection and segmentation of lesions in MRI of MS [6][7][8].
Some online challenges for the segmentation of MS lesions, such as the International Symposium on Biomedical Imaging (ISBI) [9] and Medical Image Computing and Computer Assisted Intervention (MICCAI) [10] challenges, provide a platform for researchers to showcase their innovations. These challenges not only provide MRI datasets but also offer the opportunity to compare different automated segmentation algorithms on the same cohorts. The need for such algorithms has arisen from the limited human capacity to analyse the large number of clinical images and the prohibitive increase in health care costs. For example, if there are more than 150 brain slices for a single MS patient and all slices contain several lesions, it is nearly impossible to detect all lesions accurately by hand. Machine learning and deep learning techniques can improve the lesion detection rate and minimise analysis time, irrespective of the number of slices and lesions. To address such issues, several algorithms claiming good efficiency for the segmentation of MS lesions have been proposed. These algorithms can be categorized into two main types, supervised and unsupervised [11]. A detailed literature review indicates that supervised methods are more favoured and have an edge over unsupervised methods for several reasons [12].

Unsupervised methods are not very popular for medical segmentation but have shown some promising results. They mostly depend on the intensity of MR brain images, where high intensities are considered outliers. Garcia-Lorenzo et al. [13] published a specific example of such an unsupervised method that uses intensity distributions. Among other unsupervised methods, Roura et al. [14] proposed a thresholding algorithm and Strumia et al. [15] presented a probabilistic algorithm. Tomas-Fernandez et al. [16] argued that additional information about the intensity distribution, together with the expected location of normal tissue, could help outline lesions more precisely. Sudre et al. [17] proposed an unsupervised framework in which no prior knowledge is needed to differentiate between the patterns of different abnormal images; they were able to detect abnormal clusters, known as lesions, on clinical and simulated data, with segmentation restricted to the white matter.

Supervised methods use templates consisting of MR images with lesions manually segmented by qualified radiologists. One of the best examples is Valverde et al. [18], who proposed a cascaded network of two CNNs; our proposed algorithm also follows this principle of cascaded CNNs. Han et al. [19] proposed two deep neural networks, each taught with mini-batches, that communicate with each other to decide which mini-batches should be used for training. They used different image datasets, such as CIFAR-10, CIFAR-100, and MNIST, to check the robustness of their proposed network. A similar approach was proposed by Zhang et al. [20], called deep mutual learning (the DML strategy): instead of a one-way transfer from a static teacher to students, an ensemble of students learns collaboratively and students teach each other throughout training. Valcarcel et al. [21] proposed an automatic algorithm for lesion segmentation that uses covariance features from regression.
They also took part in a segmentation challenge, where their results reached a dice similarity coefficient (DSC) of 0.57 with a precision of 0.61. Jain et al. [22] proposed an automated algorithm that segments white matter lesions as well as white matter, grey matter, and cerebrospinal fluid (CSF); their method depended on prior knowledge of the appearance and location of lesions. Deshpande et al. [23] proposed a supervised method based on healthy brain tissues and learned dictionaries, also using the complete brain, including CSF, grey matter, and white matter. They claimed that the dictionary learning technique was superior at discriminating lesion and non-lesion patches, and for every class their method automatically adapted the dictionary size to the complexity. A further supervised approach was proposed by Roy et al. [24]: a 2D patch-based CNN with two pathways that accurately and robustly segmented white matter lesions. After the convolutional pathway, they did not use a fully connected layer but instead another pathway of convolutional layers for predicting the membership function, which they claimed was much faster than a fully connected layer. Brosch et al. [25] suggested an approach to segment the whole brain with 3D CNNs, and Hashemi et al. [26] presented a novel method implementing a 3D patch-wise CNN based on a densely connected network. One of the latest and best methods in this group of CNNs is that of Valverde et al. [27], who wrote two papers on lesion segmentation. In the first paper, they proposed a 3D patch-based approach using two convolutional networks: the first network finds possible lesions, whereas the second is trained to remove misclassified voxels obtained from the first. In the second paper, they examined the effect of the intensity domain on their previously proposed CNN-based method. They won the ISBI segmentation challenge. All the methods described in this section used the same metrics, DSC, precision, and sensitivity, to assess accuracy, and they are compared with the proposed method in Tab. 4.

The remarkable development of deep learning, especially CNNs, has revolutionized progress in the medical field: CNNs are helping with segmentation, detection, and even prediction of diseases [28]. Contrary to traditional machine learning techniques that require handcrafted features, deep learning techniques can learn features by themselves [29] and fine-tune to the input data, which is a remarkable achievement, and these methods have excellent accuracy. A growing literature on deep learning for medical imaging is accumulating online. CNNs offer two main advantages. First, they do not need handcrafted features: they can handle 2D and 3D patches and learn features automatically. Second, convolutional neural networks can handle very large datasets, even within a limited time span, thanks to advances in graphics processing units (GPUs), which allow an algorithm to be trained in a fraction of the time. CNNs, with the help of GPUs, have thus remarkably advanced the medical field's ability to solve complex problems. Motivated by the accuracy and achievements of deep learning, we propose a novel deep learning-based architecture that segments MS lesions accurately and robustly.
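To make the following architecture description concrete, here is a minimal Keras sketch (the paper reports a Keras/TensorFlow implementation) of one patch-wise pathway with six convolutional layers, filters decreasing from 256 to 8, and a convolutional prediction head in place of a fully connected layer; the exact pooling placement and the global-average-pooling head are our assumptions, since the paper's Fig. 2 is not reproduced here.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_patch_cnn(p=25, modalities=3):
        """One pathway of the cascaded classifier: six conv layers with
        filters decreasing 256 -> 8 (5x5 and 3x3 kernels), and a
        convolutional head instead of a fully connected layer."""
        inp = keras.Input(shape=(p, p, modalities))        # T1-w, T2-w, FLAIR
        x = inp
        for filters, k, pool in [(256, 5, True), (128, 3, True), (64, 3, True),
                                 (32, 3, False), (16, 3, False), (8, 3, False)]:
            x = layers.Conv2D(filters, k, padding="same", activation="relu")(x)
            if pool:                                       # pooling placement is illustrative
                x = layers.MaxPooling2D(2)(x)
        # Convolutional prediction head in place of a fully connected layer.
        x = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
        out = layers.GlobalAveragePooling2D()(x)           # lesion probability per patch
        model = keras.Model(inp, out)
        model.compile(optimizer=keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy", metrics=["accuracy"])
        return model

In the cascaded setting, a second network of the same form would be trained on the candidate detections of the first to suppress false positives.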
The proposed method follows the principle of two cascaded convolutional neural networks, in which the first CNN finds possible lesions and the second CNN rectifies the false positives, giving better results with regard to accuracy and speed. Our algorithm uses parallel convolutional pathways in which the fully connected (FC) layer is replaced with CNN layers, similar to the approach of Ghafoorian et al. [30]. The replacement of the FC layer with CNN layers not only increases speed but is also more accurate; speed increases because the FC layer consumes memory and it is unclear in advance how many layers are needed at the output. Several recent articles [31][32][33][34][35][36][37] may be consulted for a better understanding of deep learning techniques. In the proposed algorithm, three MRI sequences, T1-w, T2-w, and FLAIR, are used as input for the CNNs and then concatenated; T1-w and T2-w are used only to remove MRI artifacts. Fig. 1 shows the general diagram of the proposed method. A detailed description of the proposed architecture is provided in Section 3.2 and illustrated in Fig. 2.

Data/Material

For the evaluation of the proposed method, two publicly available datasets are used, ISBI and MICCAI, which also serve as challenge datasets. The ISBI dataset consists of 82 scans in total, of which 21 scans of 5 subjects are available for training and are already preprocessed with several steps such as skull stripping, denoising, bias correction, and co-registration.

Figure 1: The general structure of the proposed method. The first CNN consists of 6 convolutional layers with a decreasing number of filters from 256 to 8. The filters have sizes of 5 × 5 and 3 × 3; for example, the first layer has 256 filters of size 5 × 5, the next 128 filters of size 3 × 3, and so on. Three sequences are used as inputs: T1-w, T2-w, and FLAIR. The second CNN is used as a parallel pathway to reduce the false positives.

The MICCAI dataset includes scans from Children's Hospital Boston (CHB) acquired on a 3T Siemens scanner, and 25 scans are provided for testing purposes. For a better understanding of what we used for evaluation, all these details are tabulated in Tab. 1.

CNN Architecture

Before training, all T1-w, T2-w, and FLAIR images were converted into 2D patches (p × p). The benefit of 2D patches is speed: results show that 2D patches are far better in speed and robustness than 3D patches, which need more memory and therefore run more slowly. According to ISBI, the dataset contains lesions amounting to 1% of total brain size, so we used large 2D patches such as 25 × 25 or 35 × 35. This helped us reduce the class-balancing issue, which is handled better with 2D patches, and according to Ghafoorian et al. [30], larger patches produce more accurate results. These 2D patches are constructed so that p is the size of the two spatial dimensions, and they are stacked in an array of shape (m × p × p), where m is the number of input modalities; in our case m = 3. Given the increased popularity of cascaded convolutional neural networks, the proposed method was also implemented with such networks. The first network finds the possible true positives among candidate lesions; the second network then refines the already segmented lesions and decreases the false positives, as can be seen in Fig. 4.
In Fig. 4, (b) shows the manual segmentation of lesions, (c) shows the results of the first network, and (d) shows the results of the second network; comparing (c) and (d) makes it clear that the second network decreases the false positives. After construction, the 2D patches are passed through convolutional filter banks, and then, instead of a fully connected layer, further convolutional filter banks are used to predict the possible membership function. The reason for not using an FC layer is that we are not sure how many layers are needed for prediction, and the system might become too complex. We applied the approach of GoogLeNet and the more recently proposed ResNet by He et al. [40], which use fully convolutional layers instead of an FC layer. Traditional CNNs use a fully connected (FC) layer to predict the probability of the membership function, but the proposed method uses convolutional filter banks for several reasons. The main reason is parameter complexity: usually it is unclear how many parameters are needed to handle the features from the previously convolved filters, which leaves unused free parameters and may cause overfitting. The second reason is that an FC layer can increase prediction time, even on a GPU: one scan contains hundreds of slices, and converting the slices to patches yields numerous patches to evaluate, so avoiding the FC layer reduces prediction time. The internal architecture of the proposed method can be seen in Fig. 2. It consists of 6 convolutional layers, each followed by max-pooling and a rectified linear unit (ReLU). Max pooling down-sizes the output dimensions and discards unwanted features to make the system fast; ReLU is used because it is a fast activation function. The 6 convolutional layers use filters of different numbers and sizes; for example, the first layer has 256 filters. After convolving the 2D patches with these 6 convolutional layers, the outputs are concatenated, and the concatenated output is passed through a further, parallel convolutional filter bank for prediction. Here we used small filter sizes, 3 × 3 and 5 × 5, for the convolutions: a small filter is better than a big one because it helps find the exact boundaries of lesions and trains well for segmentation. At the input, we made 2D patches of the three available modalities to speed up processing and improve training. Patch size is denoted by p, which we set to different values to gather results. In our experiments, we started with small patches, p = 5, and then increased the size; after reaching p = 25, the outcome improved. Valverde et al. [27] use small patches yet obtain good results because they use a balanced training dataset containing voxels both with and without lesions; this works well for smaller patches, but the drawback is that it consumes more time and is computationally expensive. Roy et al. used large patches, and we also adopted large patches since we do not need balanced training. Large patches cover a large area that includes voxels both with and without lesions; we use only those large patches whose center voxel carries a lesion label, so every selected patch contains both lesion and non-lesion area (a sketch of this extraction follows).
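A minimal numpy sketch of this lesion-centered patch extraction follows; the array shapes and boundary handling are illustrative assumptions, not the paper's code.

    import numpy as np

    def lesion_centered_patches(volumes, labels, p=25):
        """Extract p x p patches of stacked modalities whose center voxel
        carries a lesion label. `volumes` has shape (m, H, W) for one
        slice; `labels` is the (H, W) binary lesion mask."""
        half = p // 2
        patches = []
        ys, xs = np.nonzero(labels)              # lesion voxels in this slice
        for y, x in zip(ys, xs):
            # Skip centers too close to the border to yield a full patch.
            if half <= y < labels.shape[0] - half and half <= x < labels.shape[1] - half:
                patches.append(volumes[:, y - half:y + half + 1,
                                          x - half:x + half + 1])
        return (np.stack(patches) if patches
                else np.empty((0, volumes.shape[0], p, p)))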
Selecting patches in this way not only speeds up the system but also makes it applicable to real applications because fewer computations are required. Therefore, we tried patch sizes p = 20, p = 25, and p = 35 and obtained good results, as mentioned in the evaluation in Section 4. Three modalities, T1-w, T2-w, and FLAIR, are taken as input: FLAIR is mainly used for segmentation, whereas the other two modalities, T1-w and T2-w, are used to reduce artifacts and help reduce the false positives shown in Fig. 4. The comparison of one modality versus three modalities is shown in Fig. 5; the results show that when three modalities are used, MRI artifacts are greatly reduced. The exact implementation details are discussed in Section 4.1.

Evaluation Metrics

For evaluation purposes, the two datasets ISBI and MICCAI have been used, and different state-of-the-art methods are compared, as described in detail in Section 4.4. The evaluation is performed by comparing performance against human experts using the following metrics: sensitivity, precision, and dice similarity coefficient.

Sensitivity. The sensitivity of the method can be calculated in terms of the lesion true positive rate (LTPR) between automated segmentation and manual annotation of lesions:

Sensitivity = T_p / (T_p + F_n),

where T_p denotes the correctly segmented lesions (true positives) and F_n denotes the false negatives, i.e., missed lesion region candidates.

Precision. Precision is related to the false discovery rate, or lesion false positive rate (LFPR), between automatically segmented lesions and manually annotated lesions, and is expressed as

Precision = T_p / (T_p + F_p),

where F_p denotes the false positives, i.e., regions incorrectly classified as lesions.

DSC. The overall segmentation accuracy in terms of the dice similarity coefficient (DSC) between the automated segmentation masks and the manually annotated lesion areas is defined as

DSC = 2 T_p / (2 T_p + F_p + F_n).

Implementation Details

The proposed method is implemented in Python with Keras and TensorFlow, chosen for their open-source and comprehensive machine learning libraries. Results were taken at 20 epochs. A 2.7 GHz Intel Xeon Gold (E5-6150) processor was used together with an Nvidia GPU with 32 GB of memory. The data were divided into a training set and a validation set, occupying 80% and 20%, respectively. The best results were obtained by running 20 epochs on the training set with a learning rate of 0.0001 and the optimizer of Kingma et al. [41]. We used early stopping with patience = 10, as we observed the best results at 20 epochs. The batch size was set to 128. Training took 2 h and 16 min, and testing (segmenting the lesions from an unseen image) took just 15 s on average.

MICCAI Dataset

The MICCAI dataset provides the three sequences T1-w, T2-w, and FLAIR that our proposed algorithm requires. These modalities are input to the proposed CNN pathways, and results were obtained using the evaluation metrics described in Section 3.3. The proposed method uses two pipelined CNNs. First, the dataset is converted into patches before being input to the proposed CNN, which increases the speed of training and validation. These patches are then divided 80:20 between the training set and the validation set: while making patches, 80% of the patches were used for training and the remaining 20% for validation.
For example, when the patch size was 25 × 25, the total number of generated patches was 242,775; of these, 194,220 were used for the training set and 48,555 for validation. Tab. 2 shows the results of MICCAI testing using factors such as LTPR and LFPR; the results for the first three scans (MICCAI 01, 02, and 03) are shown there.

ISBI Dataset

From the ISBI dataset, the results of the first 3 scans (ISBI 01, ISBI 02, and ISBI 03) are displayed in Tab. 3. The ISBI results are evaluated on a qualitative and quantitative basis: qualitative results can be seen in Fig. 3, whereas quantitative results are shown in Tab. 3. The qualitative results in Fig. 3 show the original image, the manually segmented image, the proposed automatic segmentation, and the overlap of the manual and automatic segmentations. The results show that the proposed method agrees well with the manual segmentation.

Comparison

In this section, different state-of-the-art methods are compared with the proposed method. All these methods used the same ISBI dataset and the same evaluation metrics described in Section 3.3. A benchmark for the dataset, manually annotated by experts in the field of MS, is also provided on the challenge website for comparing results. The methods are evaluated using metrics such as DSC, sensitivity, and precision, and the reported values are the means over all results obtained with the proposed method. The quantitative comparison of the proposed method with the top-ranked existing methods is shown in Tab. 4. These values are extracted from the challenge website, and some from the related publications; they are considered the best to date for lesion segmentation. The proposed method has a higher DSC than all existing methods, where DSC is the overall segmentation accuracy as described by the ISBI challenge website. Qualitative comparisons with the manually annotated lesions provided by ISBI are shown in Figs. 3, 4, and 7. To check the trade-off between sensitivity and precision, the receiver operating characteristic (ROC) curve for the proposed CNN approach on the ISBI dataset is shown in Fig. 6. The results show that the proposed method performs excellently not only in precision but also in speed and robustness, as testing takes only 15 seconds on average for automatic segmentation of MS lesions. Some scans have a smaller lesion load and some a larger one, where lesion load refers to the volume or number of lesions; the average timing is calculated by timing all available testing scans and averaging the values. In Fig. 7, the results are demonstrated qualitatively and can be compared with the ground truth (manually segmented scans). The first row of Fig. 7 shows the results of the first tested scan compared with the manually segmented masks: (a) shows the original FLAIR image, (b) the manual mask or ground truth, and (c) the proposed automatic method, where red is true positive, green is false positive, and blue is false negative. Row 2 shows one more scan from the testing dataset: (d) is the original FLAIR image, (e) the manual segmentation, and (f) the proposed automatic method. The qualitative comparison makes clear that the proposed method segments lesions automatically and very accurately.
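For reference, the metrics defined in Section 3.3 can be computed voxel-wise from binary masks as in the sketch below; the lesion-wise LTPR/LFPR variants would instead count connected lesion regions, which is not shown, and the function name is ours.

    import numpy as np

    def mask_metrics(auto, manual):
        """Voxel-wise sensitivity, precision, and DSC between a binary
        automatic segmentation and a manual annotation."""
        auto, manual = auto.astype(bool), manual.astype(bool)
        tp = np.logical_and(auto, manual).sum()
        fp = np.logical_and(auto, ~manual).sum()
        fn = np.logical_and(~auto, manual).sum()
        sensitivity = tp / (tp + fn)          # assumes the manual mask is non-empty
        precision = tp / (tp + fp)            # assumes the automatic mask is non-empty
        dsc = 2 * tp / (2 * tp + fp + fn)
        return sensitivity, precision, dsc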
Robustness

To check the robustness of the proposed algorithm, the same architecture was used to train on two different datasets, ISBI and MICCAI, even though the datasets come from different scanners: as mentioned in Section 2, three different MRI scanners (Siemens Aera 1.5T, Siemens Verio 3T, and Philips Ingenia 3T) were used for these two datasets. The results are nevertheless promising, as shown in Tabs. 2, 3, and 4. Even considering that the images were taken at different time points with gaps of almost one year, the proposed method is stable across the different parameters (DSC, precision, and sensitivity), across scanners, and across different types of patients.

Conclusion

Automatic lesion segmentation is required for diagnostic and monitoring purposes in MS, and more efficacious and rapid lesion assessment is sorely needed. A six-layer CNN was implemented with two cascaded pipelines. The network does not need fully connected (FC) layers; instead, it uses convolutional layers to predict the probability of the membership function, which not only increases speed but also decreases the false positive rate. The proposed method follows the supervised principle of using templates consisting of MR images with masks manually segmented by qualified radiologists. It accurately and robustly segmented MS lesions, even for images from different MRI scanners, with a precision of up to 90%. This automated algorithm can therefore help neurologists segment lesions fully automatically without wasted time, improving disease monitoring. However, the proposed algorithm has some limitations. During the experiments, it was noted that when two lesions are very close or overlapping, the algorithm sometimes cannot segment them precisely, and lesions near the cortex of the brain were difficult to segment; according to the ISBI dataset and expert opinion, cortical lesions are difficult to detect. In future work, as new MRI sequences are introduced, the proposed method will be tested on other sequences across a large number of MRI scans, and with different parameters, CNN layers, batch sizes, filters, etc.
5,961.2
2020-01-01T00:00:00.000
[ "Computer Science" ]
Strength in numbers: achieving greater accuracy in MHC-I binding prediction by combining the results from multiple prediction tools

Background: Peptides derived from endogenous antigens can bind to MHC class I molecules. Those which bind with high affinity can invoke a CD8+ immune response, resulting in the destruction of infected cells. Much work in immunoinformatics has involved the algorithmic prediction of peptide binding affinity to various MHC-I alleles, and a number of tools for MHC-I binding prediction have been developed, many of which are available on the web.

Results: We hypothesize that peptides predicted by a number of tools are more likely to bind than those predicted by just one tool, and that the likelihood of a particular peptide being a binder is related to the number of tools that predict it, as well as the accuracy of those tools. To this end, we have built and tested a heuristic-based method of making MHC-binding predictions by combining the results from multiple tools. The predictive performance of each individual tool is first ascertained, and these performance data are used to derive weights such that the predictions of tools with better accuracy are given greater credence. The combined tool was evaluated using ten-fold cross-validation and was found to significantly outperform the individual tools when a high specificity threshold is used; it performs comparably to the best-performing individual tools at lower specificity thresholds. Finally, it also outperforms the combination of the tools produced by linear discriminant analysis.

Conclusion: A heuristic-based method of combining the results of the individual tools better facilitates the scanning of large proteomes for potential epitopes, yielding more actual high-affinity binders while reporting very few false positives.

Background

The major histocompatibility complex (MHC) is a set of genes whose products play a crucial role in immune response. Peptides derived from the proteasomal degradation of intracellular proteins are presented by MHC class I molecules to cytotoxic T lymphocytes (CTL) [1][2][3], and recognition of a non-self peptide by a CTL can result in the destruction of an infected cell. Peptides that can complete this pathway are called T cell epitopes. Only 0.5% of peptides are estimated to bind to a given MHC-I molecule, making this the most selective step in the recognition of intracellular antigens [4,5]. Given the large size of many viral and bacterial proteomes, it is prohibitive in terms of time and money to test every possible peptide for immunogenicity; thus, tools for the computational prediction of peptides likely to bind a given MHC-I allele are invaluable in facilitating the identification of T cell epitopes.
Many tools for performing such predictions, of varying quality, are available. We hypothesize that greater predictive accuracy can be achieved by combining the predictions from several of these tools rather than using just one, and further, that the contributions from individual tools should be related to their accuracy. To test this hypothesis, we have built a prediction tool which assigns a "combined score" to each peptide in a given protein by taking into account the predictive performance of each tool and the score given by that tool to the peptide. We also compare our technique with combined predictions made using linear discriminant analysis, a standard statistical method for combining variables to distinguish two groups (in this case, "binder" and "non-binder"). In this paper, the acronym "HBM" refers to our heuristic-based method and "LDA" to the predictor built using linear discriminant analysis.

Performance of the individual tools

Table 1 shows the ability of each individual tool to discriminate between the binders and nonbinders to HLA-A*0201 derived from the community binding database [6]. As we are interested in good sensitivity at high specificity, the sensitivity of each tool at 0.99 specificity and at 0.95 specificity is shown. The A_ROC value for each tool is also given; these values are very similar, but not completely identical, to those given by the authors of the community binding resource, and the small discrepancies are likely due to differing methods of calculating the area under the ROC curve. Individual tool performance data for the HLA-B*3501 and H-2Kd peptides from the community binding database, as well as for the HLA-A*0201 peptides gathered from the literature, are shown in Tables 2, 3, and 4, respectively.

Performance of the combined methods

The HBM and LDA were evaluated using ten-fold cross-validation on the same four datasets as the individual tools (the HLA-A*0201, HLA-B*3501, and H-2Kd datasets from the community binding resource, and the HLA-A*0201 dataset from the literature).
The HBM requires that an individual tool specificity parameter be chosen such that the tools' sensitivities at that specificity can be used as the weights in Equation (1). The performance of the HBM was determined using individual tool specificities of 0.99, 0.95, 0.90, and 0.80. In general, using 0.99 individual tool specificity gave the best performance, while lower individual tool specificity parameters resulted in somewhat weaker performance; thus, all of the HBM performance data described below were obtained using 0.99 individual tool specificity. Table 5 shows the performance of the HBM on all four datasets. For two of the three alleles, the HBM showed marked improvements in sensitivity at high specificity compared with the best-performing individual tools. The sensitivity of the HBM at 0.99 specificity for HLA-A*0201 was 0.40, a large increase over NetMHC ANN, whose sensitivity of 0.29 was the best among the individual tools. For HLA-B*3501, the HBM sensitivity was 0.31 at a specificity of 0.99, while the highest sensitivity obtained by an individual tool was 0.24. The HBM showed similarly strong performance when tested on the literature-derived HLA-A*0201 data, achieving a sensitivity of 0.27, compared with 0.19 for the best-performing individual tool. For H-2Kd, however, the HBM was outperformed at 0.99 specificity by the ARB matrix tool, which had a sensitivity of 0.50 versus 0.47 for the HBM. We note, however, that ARB Matrix was trained using binders from the community binding database, so its performance on the community datasets is likely inflated [7].

At lower specificity thresholds, the advantage of the HBM was only marginal. For instance, the sensitivity of the HBM at 0.95 specificity for the HLA-A*0201 community dataset was almost identical to that of the best individual tool; for HLA-B*3501, the sensitivity of the HBM at 0.95 specificity was slightly worse than that of the individual tool with the highest sensitivity at that specificity. Interestingly, however, the HBM actually outperforms the individual tools at 0.95 specificity for H-2Kd.

The linear discriminant scores displayed approximately normal distributions, with moderate separation between binders and non-binders. The distributions were closer to normality for the HLA-A*0201 dataset from the literature and the H-2Kd dataset, with more systematic deviations for the other two datasets. While the nominal sensitivity and specificity of the LDA agreed reasonably well with the actual and cross-validated values, we used the cross-validated values for comparison purposes (Table 6). The distinction between nominal and actual specificity is illustrated in Figure 1.
LDA displayed an improvement over the individual tools for the HLA-A*0201 community dataset, attaining a sensitivity of 0.33 at 0.99 specificity, higher than that of all the individual tools but lower than that of the HBM. The performance of the LDA on the other datasets was less substantial. Its sensitivity on the HLA-B*3501 community data at 0.99 specificity was 0.21, compared with 0.24 for ARB matrix and 0.31 for the HBM; however, we note again that the ARB matrix sensitivity is probably inflated, especially considering that the sensitivity of the second-best tool at 0.99 specificity (NetMHC 2.0 Matrix) was 0.14. The performance of LDA on the H-2Kd dataset was fairly strong, but still lower than that of both ARB Matrix and the HBM. Finally, the performance of LDA on the literature-derived HLA-A*0201 dataset was fairly weak at both 0.99 specificity and 0.95 specificity.

Purely in terms of the A_ROC value, however, LDA outperforms the individual tools on all four datasets. This suggests that while LDA provides strong "overall" performance across the entire spectrum of specificities, it achieves less improvement in the region of the ROC curve that is of interest in this study, namely the region of very high specificity.

Discussion

In this paper, results are given only for the three alleles HLA-A*0201, HLA-B*3501, and H-2Kd. The approach can easily be extended to any arbitrary MHC-I allele, provided that a sufficient number of tools make predictions for that allele and that an adequate number of known binding and non-binding peptides exists for testing the individual tools on that allele. The effects of the latter conditions are borne out in our results for H-2Kd versus HLA-A*0201.

Table 1 (caption). The predictive performance of each tool for the HLA-A*0201 community binding data is shown using two measures: the A_ROC score, and the sensitivity when specificity is 0.99 and 0.95. 1 Indicates how the sensitivity of each tool compares to that of the other tools at the indicated specificity; the tool with rank 1 has the highest sensitivity. 2 The scoring threshold corresponding to the indicated specificity.

We have used our HBM tool for the prediction of binders from bench-lab experiments, with positive results. For instance, in predicting binders for influenza virus in mice, the best two 9-mers predicted by the HBM turned out to generate the strongest responses in immunoassays [8].

Some comparative studies of binding prediction tools use randomly generated nonbinders; this study used known nonbinders only. We contend that the use of known nonbinders contributes to a stronger practical assessment of each tool's utility, since such nonbinders might have been selected by an experimenter for binding-affinity testing due to the presence of good anchor residues. Randomly generated nonbinders tend to have anchor residues that poorly match established motifs and thus are typically very easy to classify; in contrast, nonbinders reported in the literature frequently have anchor residues that do conform to an established motif, making them more difficult to classify. For a tool to be truly useful, it must be able to differentiate between peptides that all have good anchor residues but whose non-anchor residues confer different degrees of binding affinity.
Availability

The authors have elected not to make the HBM available online, for two reasons. First, frequent server outages and other problems with the individual web-based tools often prevent acquisition of all the requisite scores, so automatic operation is not possible. Second, querying all the web-based tools can take a long time, making the tool inconvenient for real-time web-based access. Interested researchers may, however, contact the authors about obtaining the scripts implementing the HBM.

Conclusion

We have built a tool that heuristically combines the output of several individual MHC-binding prediction tools, and have shown that it achieves substantially improved sensitivity at high specificity compared with the best individual tools and is also superior to linear discriminant analysis at high specificity. This technique is very general and can be updated as new prediction tools become available. Given this, the HBM should be extremely valuable for researchers wishing to scan large proteomes for potential epitopes. Additionally, the combination of the tools using linear discriminant analysis consistently displays improved overall operating characteristics (as measured by the A_ROC value) over the individual tools, and thus would be useful for researchers desiring to identify a large number of the potential binders in a smaller dataset, such as a single protein.

The success of our heuristic-based tool substantiates the hypothesis that peptides predicted by a number of tools are more likely to bind than those predicted by just one tool, and that the likelihood of a particular peptide being a binder is related to the number of tools that predict it, as well as the accuracy of those tools. In the same vein, our data suggest that the performance of the heuristic-based approach improves when more individual prediction tools are available. The fact that combining the output of several tools results in increased performance indicates that, as of now, no single tool is able to extract all the information inherent in the currently available data; thus, continued work on improved MHC-binding prediction is necessary.

Creating a collection of peptides for evaluating the predictive performance of each tool

Prediction of peptide binding was evaluated for three different alleles: HLA-A*0201, HLA-B*3501, and H-2Kd. These alleles differ substantially in the number of available tools that make predictions for them: all of the aforementioned tools predict for HLA-A*0201, eleven make predictions for HLA-B*3501, and just four predict for H-2Kd. These alleles were therefore chosen so that the performance of our combined tool (HBM) and of linear discriminant analysis (LDA) could be evaluated when different numbers of individual tools are employed.
Two sources of data were used for the comparative analysis of prediction tools in this study. The first was the community binding resource [6], a large, recently published database containing experimentally determined affinity values for the binding of peptides to many different MHC-I alleles. This dataset of testing peptides could potentially be expanded further by incorporating peptides from online databases such as SYFPEITHI [11], MHCPEP [23], HLA Ligand [15], and EPIMHC [24]. However, the use of these latter databases presents a problem for the current study. As the models underlying many existing prediction tools were trained using data from these databases, testing the individual tools with the same peptides may result in an inaccurate estimation of each tool's predictive performance; for instance, tool A may be judged better than tool B merely because tool A was trained using the same peptides with which it was tested, while tool B was not. As combining the scores of the individual tools depends on unbiased estimates of each tool's performance, such inflated estimates would be problematic.

Table 5 (caption). The sensitivity of the HBM is shown at 0.99 specificity and 0.95 specificity for all four of the datasets used in this study. All values were obtained using a value of 0.99 for the individual tool specificity parameter. The abbreviation "comm" refers to peptides derived from the community binding database, while "lit" refers to peptides gathered from the literature.

Table 6 (caption). The sensitivity of the combined tool is shown at 0.99 specificity and 0.95 specificity for all four of the datasets used in this study. The abbreviations "comm" and "lit" are as in Table 5.

Figure 1 (caption). Q-Q plot showing the distribution of LDA scores for the HLA-A*0201 community data set. The horizontal axis has been scaled according to normal probabilities, so that points from a normally distributed variable would fall along a straight line (shown in blue). Scores lying above a threshold indicated by a horizontal line would be classified as epitopes. A level exceeding 99% of a normal distribution defines a nominal specificity of 0.99, whereas an actual specificity of 0.99 requires a threshold meeting the actual distribution of points at the 0.99 vertical line. The realized sensitivity of 0.32 for a specificity of 0.99 is indicated as the proportion of epitopes whose scores lie above the threshold of 0.95.

For comparison purposes, the tools were also tested using an independent dataset consisting of peptides gathered only from published literature [25][26][27][28][29][30][31][32][33]. Again, only nonamers were chosen. A given peptide was classified as a binder or a nonbinder as follows: if IC50 values were reported (as in the community binding database and most literature sources), then the standard binding threshold of 500 nM was used; where some other type of assay was done to determine binding affinity, the classification given by the authors was used, and peptides for which the authors gave no classification were not used. Finally, to avoid bias in the data, peptides were filtered such that where two peptides differed at fewer than two residues, one peptide was randomly removed. The resultant dataset consisted of 108 binders and 108 nonbinders to HLA-A*0201, which are given in Additional File 1. Due to the scarcity of published data, it was not possible to construct similar datasets for HLA-B*3501 or H-2Kd.
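The redundancy filter described above (removing one of any pair of peptides differing at fewer than two residues) can be sketched as follows; the random choice of which peptide to drop is made by shuffling first, and the function name is ours.

    import random

    def filter_similar_peptides(peptides, min_mismatches=2, seed=0):
        """Remove one of any pair of equal-length peptides differing at
        fewer than `min_mismatches` residues, keeping a random member."""
        rng = random.Random(seed)
        peptides = list(peptides)
        rng.shuffle(peptides)          # randomizes which of a close pair survives
        kept = []
        for pep in peptides:
            too_close = any(
                len(pep) == len(k) and
                sum(a != b for a, b in zip(pep, k)) < min_mismatches
                for k in kept
            )
            if not too_close:
                kept.append(pep)
        return kept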
Performance measures

Binding prediction programs give a numeric score to each considered peptide. Each score can be converted to a binary prediction by comparison against a tool-specific threshold: if the score is greater than or equal to the threshold, the peptide is a predicted binder; otherwise, it is a predicted nonbinder. Sensitivity is the proportion of experimentally determined binders that are predicted as binders and is defined as true positives/(true positives + false negatives). Specificity is the proportion of experimentally determined nonbinders that are predicted as nonbinders and is defined as true negatives/(true negatives + false positives). The traditional way to measure the performance of a classifier is a receiver operating characteristic (ROC) curve. However, ROC curves do not always give a good measure of practical utility: for a researcher scanning a large proteome for potential epitopes, specificity may be much more important than sensitivity. Imagine scanning a proteome consisting of 10,000 overlapping nonamers, 50 of which (unbeknownst to the experimenter) are good binders to the MHC-I allele of interest. Consider further that prediction tool A has 0.70 sensitivity at 0.80 specificity and 0.05 sensitivity at 0.99 specificity, while tool B has 0.50 sensitivity at 0.80 specificity and 0.20 sensitivity at 0.99 specificity. While tools A and B might have the same area under the ROC curve (A_ROC), tool A is superior at 0.80 specificity and tool B is superior at 0.99 specificity. If tool A is used at a threshold corresponding to 0.80 specificity, then approximately 2000 peptides must be tested in order to find 35 of the high-affinity binders (roughly 0.20 × 9,950 false positives plus 0.70 × 50 = 35 true positives). In contrast, if tool B is used at a threshold corresponding to 0.99 specificity, only about 100 peptides would have to be tested in order to find 10 of the high-affinity binders. Due to the high cost of experimental testing, and because knowledge of all the binders in a given proteome is usually not needed, the latter scenario would be preferable. We therefore conclude that good sensitivity at very high specificity is a more practical measure of a tool's usefulness than the A_ROC value, and have thus used sensitivity at high values of specificity as the primary assessor of the practical utility of each tool. For completeness, however, we also include each tool's A_ROC value.

Combining the scores of the individual tools

We propose a heuristic-based method (HBM) for combining scores from individual prediction tools to make a better prediction. This method takes advantage of the observation that most of the individual tools make very few false positive predictions when the classification threshold is set sufficiently high, but correspondingly predict few positives. If the tools identify different actual binders, combining such predictions may yield a greater number of true positives. The method also tries to take advantage of the "collective wisdom" of a group of predictive tools: the individual tools are based on a variety of techniques, and instead of trying to find the "best" technique, we try to combine the best that each technique has to offer. This is an extension of the idea used by prediction tools such as MULTIPRED [19], which combine predictions made by a few methods.
Our proposed combined prediction tool ("HBM") takes a protein sequence as input, queries all of the individual prediction tools to obtain from each the predicted binding affinity for all nonamers in the protein, computes a combined score for each nonamer, and finally predicts binders based on the combined scores for all nonamers. The tool is implemented as a Perl script.

The first step in our HBM is to select a specificity for the individual tools. Each tool is then weighted according to its sensitivity at that specificity. Next, the score given to each peptide by a given prediction tool is compared to the tool-specific threshold value for that specificity. If the score is better than or equal to the threshold score, then that tool predicts the peptide as a binder, and the weight (sensitivity at the chosen specificity) for that tool is added to the total score for the peptide. Otherwise, the peptide's total score remains unchanged. For peptide x and each prediction tool t, we have

Combined-Score(x) = Σ_t W_t B_t(x),

where B_t(x) is 1 if peptide x is predicted to bind by tool t and 0 otherwise, and W_t is the weight of tool t. Combined-Score(x) is then compared to a threshold in order to classify x as either a predicted binder or a predicted nonbinder.

The performance of the HBM was determined using 10-fold cross-validation: in each fold, 90% of the peptides (the "training peptides") were used to determine the performances of the individual tools, and these performance data were used by the HBM as described above to make predictions for the remaining 10% (the "testing peptides"). Each peptide was used as a testing peptide exactly once. The scores given to each testing peptide were then used to calculate specificity and sensitivity values for the HBM in the same manner as was described for the individual tools. To minimize experimental error due to the random partitioning of the peptides into training and testing sets, the entire process described above was repeated ten times, and the HBM sensitivity at each specificity was taken to be the average of its sensitivity over the ten trials.

While A_ROC values are shown for the individual tools and for the LDA, no such values could be computed for the HBM. The reason for this is that, at high individual tool specificity parameters, most nonbinding peptides get an HBM score of zero, and therefore the ROC curve contains no points for specificities between 0 and approximately 0.85-0.90.

Comparison technique

A standard method for combining variables to distinguish two categories is linear discriminant analysis (LDA) [34]. If y is the vector of scores from all the tools for a particular peptide, it is classified according to the value of the linear discriminant (μ_1 - μ_0)'Σ^{-1}y, where μ_0 and μ_1 are the vectors of means for non-epitopes and epitopes, respectively, and Σ is the average covariance matrix of the scores within the two groups. This method is optimal (in the sense of minimizing the probability of misclassification) if the scores have a multivariate normal distribution with the same covariance matrix for epitopes and non-epitopes. More sophisticated methods have been developed without the normality assumption, but doubts have been expressed about their advantage [35]. The separation between the groups can then be quantified by δ² = (μ_1 - μ_0)'Σ^{-1}(μ_1 - μ_0).
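The discriminant and the separation δ² can be estimated directly from the two groups of score vectors. The sketch below is our own rendering of these definitions (the pooled covariance estimate and the function names are our choices, not taken from the paper):

```python
import numpy as np

def fit_lda(scores_epi, scores_non):
    """Fit the linear discriminant for tool-score vectors (rows = peptides).

    Returns (w, delta2): w = Sigma^{-1} (mu1 - mu0), and the separation
    delta^2 = (mu1 - mu0)' Sigma^{-1} (mu1 - mu0), with Sigma the
    covariance pooled over the two groups.
    """
    X1 = np.asarray(scores_epi, dtype=float)   # epitopes
    X0 = np.asarray(scores_non, dtype=float)   # non-epitopes
    mu1, mu0 = X1.mean(axis=0), X0.mean(axis=0)
    n1, n0 = len(X1), len(X0)
    sigma = ((n1 - 1) * np.cov(X1, rowvar=False)
             + (n0 - 1) * np.cov(X0, rowvar=False)) / (n1 + n0 - 2)
    w = np.linalg.solve(sigma, mu1 - mu0)      # discriminant direction
    delta2 = float(w @ (mu1 - mu0))            # group separation
    return w, delta2
```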
Under the normality assumption, if the specificity is fixed at 1 - α, then the sensitivity will be Φ(δ + Φ^{-1}(α)), where Φ is the cumulative distribution function (cdf) of the standard normal distribution. A_ROC can be calculated as Φ(δ/√2). The threshold for classification is determined by the prior probability p_1 that a peptide is an epitope, which is related to the specificity by p_1 = [1 + exp(-δ²/2 - δΦ^{-1}(α))]^{-1}.

A number of the tools displayed notably non-normal distributions. Most of these were highly skewed, but became close to normal when transformed to logarithms. The scores of three tools (NetMHC 2.0 ANN, Multipred ANN, and the logistic regression-based tool) had sigmoidal distributions. These became approximately normal when converted to scaled logits. A "logit" is a transformation of a probability p (between 0 and 1) to log(p/(1 - p)). For a variable y which is restricted between a and b, a "scaled logit" can be calculated via log((y - a + ε)/(b - y + δ)), where ε and δ are small adjustments to avoid zeros: ε = (y⁻ - a)/2 and δ = (b - y⁺)/2, with y⁻ and y⁺ being the smallest observed value greater than a and the largest observed value less than b, respectively. The actual performance of the linear discriminant on the transformed scores was estimated using ten-fold cross-validation. Computations were done using S-PLUS version 7.0.0. Figures were created with MATLAB 7.

Except for the H-2Kd dataset, the cross-validated specificities fell short of the nominal ones. To realize specificities of 0.99 and 0.90, the threshold was adjusted to a nominal specificity such that the cross-validated values were as close as possible to the target values. Figure 1 shows the distributions of the LDA scores for the community HLA-A*0201 data set. The diagonal lines indicate where the points are expected to fall for perfectly normal data. A specificity of 0.99 corresponds to a horizontal line such that 99% of the non-epitopes fall below this line. Because of the slight upward curvature of the non-epitope distribution, a nominal specificity of 0.99 falls short of this goal, but the larger nominal value of 0.9975 gives the correct threshold. About 32% of the epitopes give LDA scores above this value. Distributions of LDA scores for the other datasets are given in Additional Files 2, 3 and 4.

Table 4: Individual tool A_ROC values and sensitivity data for HLA-A*0201 using binders and nonbinders gathered from the literature. For details, see the caption for Table 1. The peptides in this literature-derived dataset are available in Additional File 1.
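The scaled-logit transformation and the normal-theory quantities above translate directly into code. The sketch below is our own rendering of the recipe, with SciPy's norm.cdf and norm.ppf standing in for Φ and Φ^{-1}; the function names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def scaled_logit(y, a, b):
    """Scaled logit for scores restricted to (a, b), as defined above."""
    y = np.asarray(y, dtype=float)
    y_minus = y[y > a].min()          # smallest observed value above a
    y_plus = y[y < b].max()           # largest observed value below b
    eps = (y_minus - a) / 2.0
    dlt = (b - y_plus) / 2.0
    return np.log((y - a + eps) / (b - y + dlt))

def lda_sensitivity(delta, alpha):
    """Sensitivity at specificity 1 - alpha under the normality assumption."""
    return norm.cdf(delta + norm.ppf(alpha))

def lda_auc(delta):
    """Area under the ROC curve: Phi(delta / sqrt(2))."""
    return norm.cdf(delta / np.sqrt(2.0))
```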
The geological and hydrogeological characteristics of Tamelast landfill site in Agadir, Morocco

The Grand Agadir area is confronted with a huge production of solid waste, due to changes in consumption habits, increased production, and demographic evolution. This waste is buried in the controlled landfill of Tamelast, which faces many environmental issues. Our work aims to evaluate the environmental characteristics of the Tamelast landfill site in Grand Agadir (Morocco) based on its geological and hydrogeological properties. To this end, we have generated geological and hydrogeological maps, stratigraphic vertical sections, and cross-sections of the landfill area for further assessment of environmental geological factors. In addition, using permeability measurements and field data, we could assess the probability and importance of contamination by leachate. The Tamelast landfill is installed on the marl-limestone and carbonate ranges of the Campanian and Maastrichtian. Geologically, the site consists essentially of carbonate deposits, limestones, and Cretaceous marls. The geological outcrops that can play the role of a potential aquifer are the fractured Campanian marl-limestone formations. These soils have a permeability of 5×10⁻⁴ to 10⁻³ m/s and a transmissivity of 10⁻² to 5×10⁻² m²/s.

Introduction

Globally, waste management problems have become a major menace to the natural environment in developing countries (e.g., Morocco), and efficient solutions are urgently needed [1]. The methods most used for the elimination of solid waste are composting, incineration, and controlled landfills. Landfills are most often used for the final disposal of municipal solid waste, but an inevitable consequence of this practice is the generation of leachate [1]. Municipal waste disposal has posed a serious environmental threat to human existence in the urban centers of the world's developing countries [2], and decision-makers try to eliminate solid waste in landfills without causing any impacts on the environment, human health, and amenity. The selection of a controlled landfill for the storage of municipal waste requires very careful environmental, geological, and hydrogeological studies [3]. Therefore, selecting a suitable site for municipal waste disposal is considered the most important step in the development and management of solid waste. In site selection, the geological setting and hydrology play a primordial role [3]. These criteria mainly control the suitability of waste disposal sites, and decision-makers emphasize the importance of bedrock geology and drift deposits for groundwater protection. Since 2010, the landfill has been operating as part of the delegated management adopted by the Agadir Municipality after the rehabilitation and closure of the uncontrolled landfill, which had a surface area of 41 ha. The Tamelast landfill comprises two storage lockers on a surface area of 12 ha and six leachate storage basins; to avoid the problem of leachate infiltration, the managers have installed an active membrane system: bentonites, geomembranes, and geotextiles (Fig. 1) [4].
Materials, methods and data collection

To evaluate the geological and hydrogeological characteristics of the Tamelast landfill, we used several data sources: geological and hydrogeological maps, and measurements of permeability and porosity. Investigation of outcrops, as well as boreholes in the Tamelast landfill, provides an excellent opportunity for carrying out a complete study of this landfill. Therefore, GIS and spatial databases were created.

Geographical setting of the Tamelast landfill

The Tamelast landfill is located east of the city of Agadir, on the southwestern slope of the High Atlas Mountains. It lies on the first hills between the plain of Mesguina and the reliefs of the Atlas Mountains. It is bounded geographically by the valleys of the Tamelast River in the NW and the Smoumène River in the SE (Fig. 2). It is accessible by the asphalt road leading to the new Agadir Adrar stadium and the village of Tamelast; the geographical coordinates are N 30°26'29'', W 9°30'40''. The Tamelast site is geologically located on the Upper Cretaceous carbonate formations, which are covered by silt and carbonate deposits of the middle and recent Quaternary of the Mesguina plain (Fig. 3). According to the geological map of Agadir at 1/50 000 [5] and field data, the Tamelast technical landfill is installed on the marl-limestone and carbonate ranges of Campanian-Maastrichtian age (Fig. 3).

Stratigraphy of Tamelast landfill

According to the geological map and field surveys, the Tamelast landfill consists mostly of Upper Cretaceous deposits (Figs. 4, 5, and 6). Sedimentological observations were made along the Tamelast landfill, except in the inaccessible south-east area. From the stratigraphic point of view, the series begins with very fine micritic whitish clayey limestones in decimetric beds (Fig. 7), surmounted by whitish marls made up of very fine clayey particles. At 5 m from the base, there is a sharp transition to yellow limestones in beds of 10 to 20 cm, rich in shells and alternating with fine marly levels. The series continues with 10 m of yellowish marls and clays, then alternating beds of fine limestone and marly clays. Higher up, the deposits become more sandy, with thin sandstone and sandy limestone levels of reddish color showing the influence of alteration by iron oxides (Fig. 7). Westward, the beds become finer again, with an alternation of marls and fine limestone beds (Fig. 8). The uppermost 10 m of the section consist of gypsum-rich yellow marly deposits and massive white limestones in metric beds with shell-rich levels (Fig. 9). The description of the stratigraphic series shows an abundance of clay and marl levels alternating with limestone beds. The presence of clay and marl layers plays an important role in the permeability to leachates, as clays are known to have a low permeability to fluids.

Structural frameworks

From the structural study, the Tamelast site is located south of the South Atlas Fault (SAF, the main structural framework in this region), thus outside the deformed Atlasic domain. Folds and faults are widely developed in different directions (Figs. 10, 11, and 12).

Natural hazards

In February 2017, the Tamelast landfill site experienced a slide of a household waste landfill locker toward a leachate collection basin (Fig. 14).
This slide was induced by an earthquake that hit the south of Agadir, causing a large quantity of leachate to flow out, damaging the geomembranes and creating a risk of leachate infiltration into surface water and groundwater. The study area is located in a zone that is highly vulnerable to groundwater pollution. The Tamelast site is also exposed to geohazards: landslides and earthquakes have been registered there. Local environmental features such as faults, landslides, and earthquakes (Fig. 13) show that the landfill area is under imminent threat from natural hazards. In this context, the waste disposal area is sensitive to natural and environmental disturbances.

Hydrographic networks

The study site is located in the downstream part between the confluence of the Taquenza and Lahouar Rivers. It is part of the sub-watershed drained by the Smoumène River, a tributary of the left bank of the Tamelast River (Figs. 14 and 15). The hypsometric curves established for the two sub-basins of Tamelast and Smoumène characterize young basins where the area is small compared to the initial altitude change, which is characteristic of steep slopes (Fig. 16). The longitudinal profiles of the two main rivers show that the Tamelast River begins its course with a strong slope of 9%, which gradually decreases to reach 2.6% at the confluence point over a length of 7.2 km. In the Smoumène sub-basin, the 4.2 km-long river has a slope that decreases irregularly over six stages, from 6% to 1.9% at the confluence point (Fig. 17). The average slopes of the two rivers, calculated according to the weighting method, are of the order of 4.5% for the Tamelast.

Local hydrogeology

According to some unpublished Environmental Impact Assessment (EIA) studies already carried out at the Tamelast landfill, the geological outcrops that can act as potential aquifers are the fractured Campanian marl-calcareous formations and the Quaternary. These terrains have a permeability of 5×10⁻⁴ to 10⁻³ m/s and a transmissivity of 10⁻² to 5×10⁻² m²/s. Data from fifty wells and boreholes located in the Agadir region were collected, analyzed, and inventoried by the Souss-Massa Hydraulic Basin Agency (ABHSM) (Fig. 18). Note that the wells are located near the old Bikarane landfill and the Tamelast landfill. Wells P45 to P48, located near the Tamelast landfill, capture water in the Campanian limestone and marl-limestone and are used for rural drinking water supply. The other water points capture the Quaternary levels and are used for industrial purposes and drinking water supply (ONEEP). The data concerning the piezometric level of the inventoried wells and boreholes are not contemporaneous and were therefore not used to establish a piezometric map. These piezometric levels, comparable to those used for the piezometric map of the free water tables of Souss and Chtouka, make it possible to connect the inventoried wells to the Souss water table, except for the wells located near the Tamelast landfill (Fig. 18). A NE-SW piezometric section passing through the Tamelast landfill was drawn using 10 wells and boreholes. It highlights the existence of a hydraulic discontinuity between wells P47 and F4 (Fig. 19). This discontinuity coincides with the great NW-SE flexure mentioned in the section on the geological setting, which brings the Campanian limestone and marl-limestone into contact with the Quaternary.
An inversion of the hydraulic gradient is visible in this section at the level of borehole F4. This inversion can be explained by overexploitation of the water table at borehole F4, or simply by the non-contemporaneity of the piezometric measurements. As a result, the Tamelast technical landfill site is located in an area with a very limited water table in the Campanian limestone and marl-limestone. This water table can be reached less than 20 m below the Tamelast landfill.

Permeability

The generalized aquifer presents a lateral variation of facies that affects the distribution of permeabilities [6]. The map (Fig. 21) shows variable permeabilities ranging from 10⁻⁵ m/s to 2×10⁻² m/s. Transmissivity is also characterized by a very wide spatial distribution (10⁻⁴ m²/s to 10⁻¹ m²/s). This variation is explained by the structure of the aquifer, its topography, and the heterogeneity of the materials constituting it. The study site is located in the permeability zone of 5×10⁻⁴ to 10⁻³ m/s (Fig. 21). These values correspond to gravelly alluvial soils with little clay (Quaternary) and to the highly fractured marl-limestone outcrops (Campanian).

Transmissivity

According to an unpublished report (Souss-Massa Hydraulic Basin Agency), the analysis of the transmissivity map suggests that the landfill site is located in the transmissivity zone of 10⁻² to 5×10⁻² m²/s. In general, many rocks are impermeable: it suffices that they are made up of fine grains with extremely small joints between them for water not to circulate; as a result, their porosity is very low [6]. However, these rocks can become permeable when fractured; this is called fracture permeability. In this case, they can allow the circulation of water and therefore contain aquifers [6].

Permeability of carbonate rocks of Tamelast

Normally, limestones and marls are impermeable, but in the case of the Campanian carbonate series of the Tamelast landfill, the presence of fractures can make them permeable and allow the infiltration of leachate. For this reason, the company managing this landfill site uses geomembranes in all leachate treatment basins to prevent infiltration and avoid contamination (Fig. 22).

Conclusions and future suggestions

Environmental geology provides methods to gather the information required for the installation of a controlled landfill. However, environmental hazards such as runoff, river flooding, landslides, and earthquakes generally affect landfills, which are thus sensitive to changing environmental conditions. The environmental geology of landfill areas must therefore be examined in detail. The databases indicate that the geological setting and the position of the landfill play the most determining role in minimizing pollution impacts. Therefore, the results of this study will help guide the selection of another suitable site to solve the environmental issues caused by the Tamelast discharge, with the application of multi-criteria analysis. This short paper provides a basis for the scientific utilization and choice of a landfill site in the future, for groundwater resources, and for the sustainable development of the ecological environment. Municipal solid waste is emerging as a big environmental issue in Agadir City and is nowadays a subject of great importance and complexity. Thus, creating value for waste management is always associated with uncertainty and risk.
This study is limited, and future development of this work will focus on practical tests. The work could also be extended with an artificial-intelligence proposal, the development of new treatment technologies, and the implementation of economic instruments for an intelligent, autonomous, demand-response waste management system based on a fully interactive infrastructure that meets specific requirements. Its main purpose, through cooperation between the Municipality of Agadir and the university, is to help the related decision-makers use the results prepared within the scope of this study in accordance with the national and international literature, standards, and scientific facts, and to adapt them to their own work. For future research, other types of collaboration between the Tamelast landfill site and related ecological and environmental disciplines (such as chemistry, artificial intelligence, geology, hydrology, and environmental impact assessment, including the evaluation of GIS-based multi-criteria decision-making methods for sanitary landfill site selection when the Tamelast landfill is full) are worth studying.
Deep weathering effects

Weathering phenomena are ubiquitous in urban environments, where it is easy to observe severely degraded old buildings as a result of water penetration. Despite being an important part of any realistic city, this kind of phenomenon has received little attention from the Computer Graphics community compared to stains resulting from biological or flow effects on building exteriors. In this paper, we present physically-inspired deep weathering effects, where the penetration of humidity (i.e., water particles) and its interaction with a building's internal structural elements result in large, visible degradation effects. Our implementation is based on a particle-based model for humidity propagation, coupled with a spring-based interaction simulation that allows chemical interactions, like the formation of rust, to deform and destroy a building's inner structure. To illustrate our methodology, we show a collection of deep degradation effects applied to urban models involving the creation of rust or of ice within walls. © 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Introduction

Weathering effects have received a lot of attention from the Computer Graphics community [1]. Several key techniques have allowed the production of urban landscape images with very realistic degradation effects, including erosion, pollution, flow, peeling, and cracking of building surfaces [2,3]. However, in spite of the many achievements of the past decades, physically-based simulations of deep weathering effects (i.e., the ones involving not only the outer surfaces of buildings but also the inner layers of the walls) have not been extensively tackled by the community. In particular, the simulation of the interplay between rust, plaster, brick, and other masonry structural elements, despite being a major factor in the degradation of older buildings [4], has not been addressed.

In this paper, we present a technique that simulates deep weathering effects on building materials (see Figs. 1 and 2), leading to the exposure of buildings' internal structures such as bricks or rusted surfaces. Our method approximately simulates the penetration of water particles in a wall volume, their interaction with the inner elements present in the wall (like concrete, iron, or other metals), and the consequent formation of new compounds, like rust; and similarly for the humidity that remains and is converted later on into ice (e.g., in winter). Since these new compounds have a higher volume than their original counterparts, their expansion leads to the formation of cracks in the wall, which result in the exposure of bricks or rusted surfaces. This phenomenon is ubiquitous in any city with old buildings, as illustrated in the real pictures from Fig. 4.
Our main contributions are:

1. A physically-inspired simulation of the penetration of water particles (e.g., humidity, rain) into the bulk volume of walls, and of their interactions with the inner structures of the wall (e.g., concrete, iron lattices, lead pipes, etc.).
2. An approximate simulation of the expansion of these elements, like water turning into ice, or metal into rust.
3. An approximate simulation of the deformation of concrete, plaster, and bricks in the wall, and of the formation of cracks inside the wall. Cracks can lead to the breaking of concrete and the fall of loose plaster, as well as to the acceleration of the two previous steps given the increased exposure of plaster, bricks, and metallic structures to air as well as to other natural weathering elements (e.g., rain).

In the remainder of this article we start with an overview of the related work in Section 2 before presenting our proposal (Section 3). Section 4 details our simulation framework, while Section 5 illustrates deep weathering effects that can be achieved by our method and discusses its limitations. Finally, we draw conclusions and suggest future work in Section 6.

Previous work

Aging phenomena have attracted a great deal of interest in the Computer Graphics community. Covered topics include the deposition of dust on objects [5], terrain erosion [6], wrinkle generation on organic materials [7] (later improved with specific crack generation algorithms, e.g. [3]), material peeling [8], surface erosion [9], tarnishing effects [10], flow and its impact on appearance changes [11], metallic patinas [12,13], and destructive corrosion [14]. More material-oriented studies were also proposed, especially on material dissolution [15], material fluorescence [16], material decay [17], and organic material growth [18]. The main difference between our approach and previous works [15,17] is the objective of our simulations: while previous works focused on appearance changes due to weathering, our objective is to model strong mechanical deformations caused by weathering processes, such as rust and its deformation effect on plaster and bricks.

In the past, there has been a lot of very interesting research on the generation of cracks on surfaces, starting from the works by Hirota and coworkers [19,20], dynamic animations [21,22], including the work of Gobron and Chiba [23] using cellular automata, the works using finite element analysis [24], the work by Bosch et al. [25] on the generation of scratches and impacts, and the more recent works by Iben and O'Brien [26] and Müller [27], up to some fast approximations for brittle fracture simulation, as in the work by Hahn and Wojtan [28]. In all these cases, the works are more concerned with the realistic generation of cracks than with the simulation of their underlying process, as we do in this paper for the building elements interacting with water. We refer the interested reader to recent surveys like the one by Muguercia et al. [3] for an in-depth treatment of the subject. Again, these thorough works do not attempt to model weathering effects, only the result of tears in a model surface.

Particle tracing has also been used: γ-ton is a technique developed by Chen et al.
[29] for visually simulating weathering. This was later improved by Kider [30] with a system of particles that allowed the simulation of a 3D model's shape and appearance aging by a number of phenomena, including physical, chemical, biological, environmental, and weathering effects. Although our work shares the particle-based approach with these works, our aim is not to simulate decay or degradation, but mechanical deformations of the inner elements of an architectural structure. Dorsey and colleagues [15] studied numerous stone weathering behaviors, employing complex modeling of chemical reactions. Later, Mérillou et al. [17] proposed a simple model for simulating the aging of building materials, able to handle a variety of damage patterns related to salt decay, as well as locating the phenomena with a physically inspired method that leads to plausible results. Our proposed technique is also related to the work of Cutler et al. [31], where a procedural approach to solid model authoring was presented, based on a volumetric approach. However, our implementation is based on voxels, while theirs was based on tetrahedrons and a distance function. Recently, Ishitobi et al. [32] developed a method for weathering simulations of coated metallic objects, with a particular focus on the processes of cracking and peeling. An introduction to these topics can be found in the seminal book by Dorsey et al. [1] or in the survey by Mérillou and Ghazanfarpour [2]. As mentioned, the main difference with our approach is in the scale and volume of the effects simulated, as we aim to model the macroscopic mechanical deformations weathering may produce on architectural elements.

Overview of our proposal

Buildings may suffer from structural defects at different levels (roof, walls, foundations, etc.) and with various causes and degrees of importance. Structural dampness is one cause of structural defects, due to the penetration of moisture within a building's structure. A high proportion of those damp problems are caused by rain penetration through porous masonry. In this paper, we focus on rain penetration in old brick walls.

Our system simulates deep weathering effects and is based on two stages: (i) a physically-inspired simulation of the interactions of water particles with the building's materials, and (ii) the computation of the deformation of structural elements due to internal forces arising from weathering effects (computed from a voxelization of the simulation space). Water particles are the main simulation entities, penetrating the wall and interacting with its inner elements (e.g., bricks, concrete, plaster, beams, wood). For instance, corrosion of reinforcing bars is a major cause of reinforced concrete failure because of rust swelling. Here, voxels are merely used as a data structure to store the accumulated water, which in turn will be used to compute the strength of the materials created by chemical reactions between water and the inner structure's materials. To allow for an efficient simulation, a wall (e.g., brick masonry) and its inner structural elements (e.g., metallic structures) are voxelized. We approximate the contents of each voxel as being of a unique type (i.e., material, plaster, iron, etc.), and each voxel is connected to its surrounding voxels via a net of springs.

The simulation loop (see Fig. 3) is as follows:
1. Water particles are instantiated on the wall surfaces, where they diffuse into the wall material. Although currently not implemented, at this point particles could flow on the wall's exterior surface, further wearing its appearance and reaching other areas [11,33]. Their positions are then updated by the particle system as they diffuse into the materials. When a water particle enters a non-empty voxel, its velocity is updated (depending on the voxel's water permeability) and the voxel's water content is updated (depending on the particle's velocity).
2. Whenever they interact with water, and depending on their types, voxels can turn into another type of voxel (e.g., iron turns into rust due to corrosion, or water into ice).
3. Since the transformation may require a larger volume than its original counterpart (e.g., rust takes 4 to 12 times more space than iron [4]), mechanical forces are exerted, leading to the displacement of the neighboring voxels and to the generation of new voxels.
4. The mass-spring system is updated.
5. Each spring having a length above its maximum length is broken (denoted as s_b) and a crack is instantiated.
6. Cracks close enough to each other are merged.
7. Saturated voxels that can react chemically (e.g., iron, rust, ice) are identified.
8. Each such voxel (noted v_r1) checks whether a new voxel of the reaction product can be placed at a neighboring position (noted v_r2); if yes, v_r1 loses energy, and v_r2 is instantiated (with the same energy as v_r1) and linked to its surrounding voxels.
9. If a set of voxels, e.g., belonging to a brick or a broken piece of concrete, is found not to be connected to the rest of the wall anymore, and if these voxels have clear access to the exterior of the wall, then they fall. In our current implementation, they are just removed, but it is not difficult to imagine an animated loose brick falling off a damaged wall.

At the end of this process, the inner materials of a wall might be exposed. If the process continues, in the case of rust, the degradation process will accelerate because of the direct interaction of rust with air and rain. These phenomena can clearly be seen in older, poorly maintained buildings, resulting in a characteristic degraded appearance, as seen in Fig. 4. In this paper, we focus on the rain penetration phenomenon and on two types of representative water-material interaction: (i) the creation of rust from iron and (ii) the creation of ice from water (at low temperatures). Both of them can lead to the creation of cracks and can result in deep weathering effects, see Fig. 5.

The approximate simulation model

In this section, we present more details regarding our simulation model. First, we focus on voxel creation and on the mass-spring system we use to model the forces. Then we detail how deep weathering effects are simulated.

Voxelization

In order to simulate deep weathering effects, we need building models to be segmented into their distinct elements, depending on their materials (plaster, bricks, iron, etc.). This could prove challenging if one wants to apply our technique to existing, artist-created 3D models without internal structure. Nevertheless, the recent development of Building Information Modeling (BIM) as well as the procedural shift within the Architecture, Engineering, Construction, and Operations (AECO) industry will greatly simplify the availability of such semantic models. In our current implementation, the designer selects an area on a building wall, and the system automatically generates a generic multi-layer structure for that area, adding layers of plaster, bricks, and random vertical or horizontal metallic pipes within the structure.
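A minimal sketch of such a generator is given below. It is our own illustration, not the paper's code: the layer layout, the brick module, and the pipe placement are assumptions chosen only to produce a plausible plaster/brick/iron labeling of a voxel grid.

```python
import random

def generate_wall(nx, ny, nz, brick=(6, 3), seed=0):
    """Label an nx x ny x nz voxel grid with a generic multi-layer structure:
    plaster on the two outer depth layers, a brick core with plaster joints,
    and one random horizontal or vertical iron pipe at mid-depth."""
    rng = random.Random(seed)
    grid = {}
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if z in (0, nz - 1):
                    grid[(x, y, z)] = "plaster"   # outer plaster layers
                elif x % brick[0] == 0 or y % brick[1] == 0:
                    grid[(x, y, z)] = "plaster"   # mortar-like joints
                else:
                    grid[(x, y, z)] = "brick"
    horizontal = rng.random() < 0.5               # random pipe orientation
    line = rng.randrange(ny if horizontal else nx)
    for t in range(nx if horizontal else ny):
        pos = (t, line, nz // 2) if horizontal else (line, t, nz // 2)
        grid[pos] = "iron"
    return grid
```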
After the segmented building model is obtained, the next step in our process is the voxelization of the wall volume to be simulated. As usual, there is a trade-off between the voxels' size, the volume of the simulation domain, and the number of voxels. We settle for enough voxel resolution to capture the important structural details of the model in order to provide visually plausible results. The simulation domain was restricted to specific parts of the 3D building models. Those models were not voxelized but directly imported and rendered (cf. Fig. 1), and only a part of their walls was voxelized.

For each voxel, we store the following information: its position, the springs connecting it to its neighbors, the force going through it (cf. Section 4.2), the amount of moisture on each of its faces, and its type. Our voxels can be of the following types: brick, plaster, iron, rust, space (empty voxels that can be filled with ice or rust), ice, and "fixed". In addition to the aforementioned attributes, voxels producing other voxels (e.g., iron, rust, ice) possess an "energy" attribute, to represent whether a voxel has enough chemical energy to produce other voxels.

Note that "fixed" voxels are a special type of voxel used to represent the interface between our simulation space (composed of voxels) and the rest of the larger model, which is composed of vertices and triangles (cf. Fig. 1). As a consequence, those fixed voxels cannot be displaced and indicate a junction with the main structural elements of the building. During the simulation, if a voxel is not linked (either directly or indirectly through other voxels) to at least one fixed voxel, then it becomes "loose" (it is part of a "falling" set of voxels), meaning that it does not interact with the other voxels but instead is affected by external forces (i.e., gravity). In this case, the loose element can be animated during its fall, or simply removed if a detailed animation over time is unimportant for the final result.

Forces

In our system, we model contact forces using a mass-spring system, where the object is approximated by a finite set of masses represented by the aforementioned voxels. Mass-spring models are one of the simplest yet most flexible ways to model a deformable body [3]. Basically, mass-spring systems quantize the simulation volume into a finite set of particles {p_i; 1 ≤ i ≤ n}. Each particle p_i (a voxel in our system) has its own mass m_i and position r_i. The particles are pairwise connected with springs, each with its own properties (stiffness, damping factor, and rest length). Each particle obeys the classic equation of motion

m_i (d²r_i/dt²) = f_i,    (1)

where f_i is the sum of all the forces acting on particle p_i [3]. We distinguish between external forces, like gravity, and internal ones, which come from the springs attached to the particle p_i.
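Anticipating the spring-force expressions detailed next (Eqs. (2)-(3)), a minimal sketch of the internal force computation might look as follows. The vector form, the breakage check, and the function names are our reading of the text, not the paper's implementation.

```python
import numpy as np

def spring_force(r_i, r_j, k, rest_length, t, max_length):
    """Damped internal force exerted on voxel i by its spring to voxel j.

    Hooke's law (Eq. (2)) with the damped transmission f'_i = t * f_i
    (Eq. (3)); the spring is flagged as broken when stretched past
    max_length, which later seeds crack points.
    """
    dr = np.asarray(r_i, dtype=float) - np.asarray(r_j, dtype=float)
    length = np.linalg.norm(dr)
    force = -k * (length - rest_length) * (dr / length)  # restoring force
    return t * force, length > max_length
```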
In general, springs follow Hooke's law [34], which can be stated as

f_ij = -k (||Δr_ij|| - r⁰_ij) Δr_ij/||Δr_ij||,    (2)

with k being the spring constant that characterizes its stiffness, r⁰_ij its original or rest length, and Δr_ij = r_i - r_j its current length, measured as the difference between the positions of voxels i and j. In practice, we use an effective force f′_i which is defined as

f′_i = t f_i,    (3)

where t is a force-damping coefficient that depends on the type of voxel connection. Typically, a connection between two brick voxels will have t ≈ 1, in order to allow the brick voxels to move together, if possible. See Section 4.3.3 for exact values of t in each use case. This choice is justified from a physical standpoint, since materials react differently to pressure. Moreover, forces are only transmitted from voxel to voxel when they are above a small threshold, in order to limit the calculation to the surrounding voxels. When the length of a spring exceeds a given threshold, we model breakage by flagging the corresponding spring as broken and by removing the corresponding connections between the voxels. See Fig. 6 for an example using rust as the expanding material. In practice, when many springs are broken in the same region, cracks appear, and eventually concrete gets broken or plaster gets loose and can fall; see Section 4.3.4 for implementation details on this effect.

Deep weathering effects

In order to simulate deep weathering effects, we use particles to represent water and use them to simulate the way water penetrates and interacts with a building's inner materials. However, depending on the exact materials in the building and the length of the whole simulation, the particles can have different behaviors, from a more plastic one similar to plaster (but with somewhat more rigidity, if needed), to a full rigid body. In the first case, the particles are treated exactly as plaster (see below), simply taking into account their extra rigidity. On the other hand, when they act as full rigid bodies, we have to resort to a different approach to guarantee rigidity, see below. Finally, we provide details of how cracks are generated and how they spread within our model.

Water particles

As mentioned, humidity is an important source of structural defects. Most of the inorganic materials used in construction are porous. This means they absorb water particles according to their exposure to environmental factors like rain, humidity, groundwater, condensation, etc., and they release them according to the drying properties of the atmosphere. The flow of water on and within a building's materials depends on the water content of these materials, namely whether they are saturated or not. Saturated flow can be explained through Darcy's law, but for the unsaturated case an extended version of the same equation should be used [35,36] (see Eq. (4)):

∂θ/∂t = ∇ · (D(θ) ∇θ),    (4)

where θ is the ratio of liquid volume to bulk volume (called Volume Fractional Saturation), and D is the hydraulic diffusivity, which is generally a function of θ. This equation can be analytically solved in 1D for very simple cases, resulting in a sigmoid-like function θ(x) (where x is the spatial dimension), whose "slope" decreases as water penetrates into the material volume.
To simulate the penetration of water into buildings' structures, we decided to use a Monte Carlo-based approach, where water particles are "sprayed" onto the building's voxels in contact with weathering elements. These particles penetrate the voxels of the wall volume according to their water permeability, which represents how easily a water particle goes through this particular material (see Fig. 7 for concrete). For each water particle, we compute its instantaneous water content (p_w) based on three parameters: the surface a of our simulation area; the rain intensity i (in liters per hour on a unit of surface area); and the number of particles generated per second (nps):

p_w = (a · i)/nps.    (5)

Thus, units for p_w are liters per second. The particle system is responsible for setting a direction and a velocity for each water particle it generates. According to their directions and velocities, water particles navigate through the simulation space to saturate the voxels with water. Whenever a water particle goes through a voxel, we use the Russian roulette technique to decide whether the particle is "killed" or whether it continues to penetrate the building. A killed particle adds its water content to the voxel's water saturation level. As a consequence, the behavior of Eq. (4) can be approximated with an extinction probability having the form of a sigmoid function (see Eq. (6)), where k depends on the voxel's type and θ is the current ratio of water saturation in the voxel. Fig. 7 presents implementation results of Eq. (6) obtained with our particle-based simulation. Each voxel can store a given amount of water before being saturated. The more water it contains, the easier it is for the next water particle to go through, see Eq. (4). Moreover, only saturated voxels may give rise to the chemical reactions favoring deep weathering effects (e.g., the formation of rust from saturated iron voxels); we detail these reactions in the next section.

Water interaction: material transform

As mentioned above, saturated voxels may give rise to the chemical reactions involved in the deep weathering effects we can simulate, namely:

• saturated iron voxels may produce rust voxels;
• saturated space voxels may produce ice voxels;
• saturated rust (resp. ice) voxels may produce more rust (resp. ice) voxels.

Since rust (resp. ice) has a larger volume (see [4]) than the iron (resp. empty space) it replaces, its chemical creation applies a force on the mass-springs of the voxels in its vicinity. Whenever the tension is above a threshold, voxels are moved in order to make space for the newly created "rust" (resp. "ice") voxels, see Fig. 8.

Bricks

As our domain is represented by a grid of voxels (see Section 4.1), we simulate all these effects at the voxel level. In the case of plastic deformation, bricks are simply dealt with by changing the spring parameters to make them stiffer. However, when brick rigidity is above a given user-provided threshold, we switch to a full rigid body behavior to guarantee perfect rigidity. In this scenario, "brick" voxels do not move independently from the other voxels representing the brick. In order to do so, the springs between brick voxels transmit their forces to each other with very little loss (a transmission of 0.99, cf. Eq. (3)) to guarantee that a similar amount of force is exerted on them. Therefore, when one brick voxel has enough force to move, all the other connected brick voxels are more likely to move at the same time. On the other hand, interfaces between materials (e.g., brick/plaster or plaster/brick) transmit force with a heavy loss (a transmission of 0.1), to ensure that the rigid object, when moving, does so independently from the other material. While our focus remains on bricks, other very rigid objects (as opposed to plaster) could be simulated in a similar fashion.
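Returning to the water-penetration step above, a minimal sketch of how one sprayed particle could be advanced and killed is shown below. It is our own illustration: Eq. (6) only states that the extinction probability is a sigmoid in the saturation ratio θ with a material-dependent constant k, so the centered form used here, the assumed voxel interface (water and capacity attributes), and the function names are all assumptions.

```python
import math
import random

def inject_particle(voxels, entry, direction, p_w, k=8.0):
    """Advance one sprayed water particle voxel by voxel along 'direction'.

    'voxels' maps integer grid positions to objects with 'water' and
    'capacity' attributes (assumed interface). At each voxel, Russian
    roulette kills the particle with a saturation-dependent probability;
    a killed particle deposits its water content p_w and stops.
    """
    pos = entry
    while pos in voxels:
        v = voxels[pos]
        theta = v.water / v.capacity            # current saturation ratio
        # Assumed centered sigmoid for Eq. (6): dry voxels almost always
        # absorb the particle, saturated voxels almost always let it pass.
        p_kill = 1.0 / (1.0 + math.exp(k * (theta - 0.5)))
        if random.random() < p_kill:
            v.water = min(v.capacity, v.water + p_w)
            return
        pos = tuple(p + d for p, d in zip(pos, direction))
```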
Crack generation

Cracks, in our system, are the result of broken springs, i.e., springs that stretch beyond their maximum allowed distances. Whenever a spring breaks, we (i) remove the connection between the voxels it used to connect; and (ii) instantiate one crack-point at each end of the spring (cf. Section 4.2). Observe that, in our implementation, crack points have fixed positions in space. It should be noted that crack points and cracks are not rendered in the final result of our simulation.

Fig. 9: 3D view of a cracked wall, where the voxels were displaced in a similar way as in Fig. 6. The cracks are represented with cyan lines. Whenever possible, we join and merge the cracks. From left to right: lateral "voxel view" of a wall segment, cracks inside the material with voxel silhouettes, and a 3D view of the cracks inside the material where we can see them gathering together.

Cracks are computed at each iteration of the simulation, depending on the crack points they contain. The process of crack generation is the following:

1. Remove all crack points already assigned to any crack.
2. Check for each unassigned crack-point whether it is inside a voxel (voxels may have been previously removed if they belong to fallen material from the wall). If not, the crack point is removed.
3. For each crack-point cp_cur:
3.1 cp_cur is added to an existing crack crack_cur if one is close enough; if cp_cur is not close enough to any existing crack, a new crack crack_cur is created at its position.
3.2 All crack-points cp_other within a radius r of cp_cur are added to crack_cur. Each spring crossing a line between cp_cur and any cp_other is broken. At this point, all crack-points around cp_cur have been added to crack_cur. Now, we check whether one of those crack points could also be assigned to another crack c.
4. For each crack c: if at least one of its crack-points is also part of crack_cur, then every crack-point of c is added to crack_cur and c is removed.

This process of crack generation and merging ensures that we track every crack point and that cracks in the vicinity are merged into larger cracks. Springs that connect cracks are removed and may accelerate the deterioration process, as is the case in real life. Cracks are typically not supposed to be rendered, since they represent ruptures between the materials of our walls. For illustration purposes, we use lines to visualize them in Fig. 9. Breaks and falls of materials will eventually happen by themselves along with the progress of the simulation, see Fig. 10.
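A compact sketch of this per-iteration rebuild is given below. It is our own condensation of steps 1-4 (it omits the breaking of springs crossing the cp_cur-cp_other segments), and the helper names and the 0.5 cm voxel pitch are assumptions.

```python
import math

def to_voxel(point, size=0.005):
    """Map a point to its grid cell (0.5 cm voxel pitch assumed)."""
    return tuple(int(c // size) for c in point)

def rebuild_cracks(crack_points, occupied, r):
    """Per-iteration crack rebuild following steps 1-4 above.

    crack_points: iterable of (x, y, z) tuples; occupied: set of voxel
    cells still present in the wall. Returns a list of cracks, each a
    set of crack points.
    """
    # Steps 1-2: forget previous assignments, drop points in removed voxels.
    points = [p for p in crack_points if to_voxel(p) in occupied]
    cracks = []
    for cp in points:                                     # step 3
        near = [c for c in cracks if any(math.dist(cp, q) <= r for q in c)]
        merged = {cp}                                     # step 3.1: new crack if none near
        for c in near:                                    # steps 3.2 and 4: absorb neighbors
            merged |= c
            cracks.remove(c)
        cracks.append(merged)
    return cracks
```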
Results and discussion

In this section, we first present some of our deep weathering effects results before discussing limitations. As mentioned, when the user selects a building part, the system automatically generates the wall volume for that area, randomly adding horizontal or vertical pipelines. The simulation produces a voxelized output, which can have a block-based appearance (3D aliasing). Just as in any other of these cases, the output voxels can be post-processed to be refined and smoothed (e.g., with a marching cubes algorithm [37]) to improve the result. The results were obtained on a standard consumer-grade laptop (Intel Core processor; 8 GB of DDR3 RAM; GPU: NVIDIA GeForce 840M).

Results

Our particle system, voxel engine, and simulation were implemented in Unity3D V5.4.0. Since our simulation produces a voxelized output, to obtain the images in this paper, and only for aesthetic reasons, we exported the simulation meshes at a given interval of time and imported them into a dedicated modeling software (Blender v2.78). Also, for illustrative purposes only, and to showcase the possibilities, we post-processed the model (with a smoothing algorithm) and rendered it to obtain the final results, see Fig. 11.

Our standard scene, unless stated otherwise, contains 983k voxels of 0.5 cm³, allowing a simulation volume of 0.9 m × 0.9 m × 0.20 m, similar to Fig. 2. It was also made of 20 bricks of approximately 30k voxels each (of size 0.3 m × 0.15 m × 0.1 m) as well as an iron pipe (of about 15k voxels, with 0.15 m of diameter and 0.9 m of length) going under the bricks; the rest of the wall was made of plaster voxels. The particle system produces 500 water particles per second, and the mass-spring system contains about 2 million springs.

With those parameters, it is only during the displacement of voxels and the calculation of cracks that our simulation slows down. It takes about 1.5 s to displace about 150k voxels (corresponding to the displacement following the update of the mass-spring system) and about 2.0 s to calculate the resulting cracks (for which almost all of the 1 million voxels are checked). Our implementation runs on a single thread, and neither the voxel engine nor the mass-spring system was implemented with any kind of optimization techniques. Beyond this worst case, the simulation runs at an average of 5-6 fps; it takes about 3 min in Blender to obtain the nicer rendered results of Fig. 12.

Fig. 13 shows the result of our deep weathering technique applied to an iron pipe in a Manhattan-like building. In the figure, we can appreciate the effect of water corroding an iron pipe deeply inside the wall, and the chemical and physical interactions that resulted in some plaster being removed, exposing concrete and the rusted iron itself. The simulation results were duplicated (the bottom pipe is a duplicate of the top one) before the final smooth render.

Fig. 12 illustrates the same effect on a Victorian London building where the plaster has fallen to reveal the bricks underneath. To obtain this rendering, we ran two separate simulations: the first one corresponds to the leftmost fallen plaster, and the second to the right area with fallen plaster. This second area was bigger, with a volume of 1.4 m × 1.4 m × 0.20 m and about 2.6 million voxels.
In Fig. 14 we can see another degradation example of a house with fallen plaster and bricks that are surfacing because of some material expansion (e.g., frozen water) inside the wall. To obtain this rendering, we ran three separate simulations on three distinct simulation areas.

Discussion

Our deep weathering simulation system combines a particle system with a voxel engine. Each voxel is linked to its neighbors via a mass-spring system that represents the relations between the voxels composing the walls. In this article, we only focused on simulating the creation of rust and cracks in walls that arise from the infiltration of water within the materials of a wall (i.e., concrete, plaster, bricks) to reach the inner part of the wall and interact with the materials there, like metallic structures. However, our method can easily be applied to other deep weathering effects, such as water infiltration and the creation of cracks due to the formation of ice (as seen in Fig. 15 for our standard wall).

On the validation side of this work, although qualitative comparison with real-world images is fairly easy, quantitative validation is incredibly hard to do, as is comparing to previous results. Actually, there are no concrete previous works to compare with, except for isolated aspects. For instance, we could compare our specific crack-generation module with previous approaches, such as the ones by Bosch et al. [25], Iben and O'Brien [26], Müller [27], Muguercia et al. [3], and Hahn and Wojtan [28], to cite just a few. However, in this case, we would only be comparing one subpart of our approach with state-of-the-art specialized approaches, which is far from our current objectives. Other previous works, such as the ones mentioned in our Previous Work section, focus on different phenomena than our proposal, so comparisons are nearly impossible.

Although our approach presents promising results on deep weathering effects, it nevertheless suffers from a few drawbacks:

• The weathered areas should be manually created by the designer, in a way completely isolated from the rest of the building. It would be interesting, but is left as future work, to integrate this system into a whole-building degradation simulation, which could retrieve aging information from an independent simulation module such as the one by Munoz-Pandiella et al. [38]. Also, rain density and direction could be used to automatically determine the most weathered areas, as well as puddle formation and similar effects.
• Very rigid bodies (e.g., bricks) were not displaced as realistically as we wanted them to be, since our implementation could not exert any torque on them (and therefore bricks were unable to rotate). Depending on the type of building and the weathering effect, this can be an issue. A possible solution

Fig. 1. A building with deep weathering effects. Left: the original building. Middle: the same building after the simulation of water particles penetrating the gray area. Right: a closeup view of the resulting degradation due to weathering effects. Surface noise is part of the wall texture itself.

Fig. 2. From left to right, top to bottom, part of a wall being pushed over by rust over time. Surface noise is part of the wall texture itself.

Fig. 4. Two real examples of weathering effects of water and rust on a building facade.

Fig. 5. Simulation results of weathering effects on a building facade. Notice the now rusty iron pipe.
Fig. 6. A schematic 2D example of a rusting pipe inside a wall. Forces are represented as springs (purple lines) that propagate stress among the voxels. Crack lines are shown (in cyan), as well as iron voxels (in dark gray), wall material voxels (plaster or concrete, in light gray), and rust voxels (in dark red). From left to right: the pipe in its initial state inside the wall material. As rust appears to occupy a larger volume, material voxels are pushed over and fall off.

Fig. 7. Curves representing the Volume Fractional Saturation (VFS) obtained with our particle-based simulation at different time steps for a wall made of concrete. The moisture advance depends on the material's water saturation.

Fig. 8. From left to right: rust presses against the voxels above and around it; when the voxels above reach a threshold, they are displaced; a new rust voxel is instantiated at the vacant position.

Fig. 10. Even without the rendering of cracks, we can clearly see where the fractures are going to happen. Noise is part of the wall texture.

Fig. 11. Left: the original 3D mesh. Right: a heavily smoothed version. This figure is included for illustrative purposes, as this smoothing operator is not part of the core proposal of this paper.

Fig. 12. An older building with plaster and bricks falling off one of its walls. For illustrative purposes, in this render, we have not smoothed the surfaces out, so voxels are visible in some areas, but the original building textures have been kept in non-weathered areas. Also, a sharpening post-processing filter has been applied to enhance voxel visibility.

Fig. 13. A building with a visible rusting iron pipe exposing concrete and the pipe itself; top: from a distance; bottom: closeup view.

Fig. 14. Closeup view of the building where plaster and bricks fell off from a wall. Note that the original building textures have been kept in non-weathered areas. A sharpening post-processing filter has been applied to enhance voxel visibility.

Fig. 15. Ice formation on a layer of plaster lying on top of bricks, from left to right and top to bottom: (1) a small rain area; (2) a small rain density area under a longer exposure; (3) the same exposure but with a larger area; (4) a wall with plaster only.
Joint queue length distribution of multi-class, single server queues with preemptive priorities

In this paper we analyze an $M/M/1$ queueing system with an arbitrary number of customer classes, with class-dependent exponential service rates and preemptive priorities between classes. The queueing system can be described by a multi-dimensional Markov process, where the coordinates keep track of the number of customers of each class in the system. Based on matrix-analytic techniques and probabilistic arguments we develop a recursive method for the exact determination of the equilibrium joint queue length distribution. The method is applied to a spare parts logistics problem to illustrate the effect of setting repair priorities on the performance of the system. We conclude by briefly indicating how the method can be extended to an $M/M/1$ queueing system with non-preemptive priorities between customer classes.

Introduction

We consider a single-server queueing system shared by N customer classes, numbered 1, . . . , N. The class index n indicates the priority rank; class 1 has the lowest priority and class N has the highest priority. The arrival process of class-n customers is a Poisson process with rate λ_n. The service time of class-n customers is exponentially distributed with rate μ_n. This system can be described by a multi-dimensional Markov process on the state space N_0^N, where the coordinates keep track of the number of customers of each class in the system. In this paper we present an exact method, based on matrix-analytic techniques [22,23], to determine the equilibrium joint queue length distribution. In particular, it appears to be possible to avoid the use of infinite series and truncation of the state space. The crucial observation is that the Markov process, embedded on states in which there are no customers of priority classes higher than n, is of the M/G/1 type, where the number of class-n customers represents the class-n level. This is due to the fact that during excursions of the Markov process in which higher priority customers are present, any number of lower priority customers may arrive. Thus, a natural way to find the equilibrium joint queue length distribution is by recursive application of the theory of M/G/1-type Markov processes.

The joint queue length distribution is required in applications in the area of spare parts logistics and production. Specifically, our interest in the M/M/1 priority system with N classes arose from a spare parts logistics problem, where the joint queue length distribution is necessary for an exact performance analysis. This problem is discussed in Sect. 3.

Priority queueing systems have a long history (cf. [7,8,16]) and single- and multi-server priority queues received much attention. Most of the earlier studies concentrate on the transforms of marginal system characteristics such as the queue length and waiting time of a specific priority class. The focus on marginal system characteristics is also seen in recent work in [13,28], where the domain of priority queueing systems with general arrival and service time distributions is treated. Joint queue length distributions have first been studied in [19] using the matrix-geometric method [21] for an M/M/1 priority queueing system with two classes. This study spurred the observation made in [3,29,30] that the matrix-geometric method is a natural choice for studying priority queueing systems with a quasi-birth-death (QBD) structure.
In these papers, the matrix-geometric method is generalized to systems with two priority classes, a Markovian arrival process, and a phase-type service time distribution. In [4] the same matrix-geometric method is applied to a discrete-time N-class system, leading to an approximation of the joint equilibrium distribution, as the rate matrix R needs to be truncated for actual computation. An M/PH/1 non-preemptive priority system with N classes with different service rates per class is studied in [15], where an algorithm is derived using matrix-geometric techniques for the computation of the joint queue length distribution for three aggregated classes. The observation that is not made in [4,15] is that lower priority customers see the queueing system as an M/G/1-type system, i.e., an M/M/1 system with an unreliable server (or vacations), where down times correspond to high-priority service interruptions. This observation is made and implemented in [12,31], where the distributions of the down times are approximated by phase-type distributions, the first three moments of which are matched to the moments of the high-priority service interruptions. However, only marginal queue length distributions are obtained. There are also a number of papers studying the joint queue length distribution using alternative approaches. Generating functions are used in [9,10] for the analysis of M/M/c priority queueing systems with two classes. Generating functions are also used in [20] for an M/M/c preemptive priority system with more than two classes. Here, customers of higher priority are aggregated, leading to an approximation of the equilibrium distribution. Later, [25][26][27] use a mixture of the matrix-geometric method and the generating function technique to analyze preemptive and non-preemptive priority M/M/c queueing systems with two classes, where each class can have different types of customers. The mixture of the two methods leads to an approximation of the joint equilibrium distribution, as the number of matrix operations has to be finite for actual computation. Priority queueing systems remain an active field of research. More recently, priority queueing systems with impatient high-priority customers have been analyzed using generating functions [5]; by identifying simple Markov processes [6]; using a level-crossing method [14]; or using Laplace-Stieltjes transforms [17]. These systems have applications in, for example, telecommunication systems where voice messages need to be delivered quickly and have priority over data packets. An alternative to impatient customers is a queueing system in which customers can reduce their sojourn time by transferring to a higher priority class. This allows impatient customers to be served earlier. In [32], bounds on the equilibrium distribution are given. The study of a queueing system with transferring customers is motivated by the potential application in the design of emergency departments. Here, patients are categorized in classes of different priority, and patients can transfer from a lower priority class to a higher priority class. Approximations for the first and second moments of the waiting time in an M/G/c non-preemptive priority queueing system with an arbitrary number of priority classes are given in [2]. Our main contribution is that we describe a method for the exact determination of the joint queue length distribution for a preemptive priority queueing system with an arbitrary number of classes and class-dependent service rates.
We use the property that the embedded Markov process is of the M/G/1 type. A key to the approach is identifying first-passage probabilities, which are computed by one-step analysis. We then recursively apply matrix-analytic methods related to M/G/1-type Markov processes and avoid the use of infinite series. The remainder of the paper is organized as follows. In Sect. 2 we describe how the matrix-analytic method is applied to an N-class preemptive priority single-server system. To ease the understanding of the method in general and highlight its recursive nature, we first treat the two- and three-class systems in Sects. 2.1 and 2.2, respectively. Next, in Sect. 3 we present the application in spare parts logistics where the joint queue length distribution is needed for an exact analysis. In the final section we conclude by indicating how to extend the method to non-preemptive priority rules. Matrix-analytic method The M/M/1 preemptive priority system can be described by a Markov process with states $(q_N, \ldots, q_1)$, where $q_n$ denotes the number of class-n customers in the system. State transitions are triggered by arrivals and service completions. Class-n customers arrive at rate $\lambda_n$, triggering a transition from $(q_N, \ldots, q_1)$ to state $(q_N, \ldots, q_n + 1, \ldots, q_1)$, and if $q_N = \cdots = q_{n+1} = 0$ and $q_n > 0$, class-n customers are served at rate $\mu_n$, which leads to a transition from $(0, \ldots, 0, q_n, \ldots, q_1)$ to $(0, \ldots, 0, q_n - 1, \ldots, q_1)$. Throughout the paper we assume that the system is stable, i.e., the traffic intensity $\rho$ is less than 1 (see, for example, [11]): $$\rho := \sum_{n=1}^{N} \frac{\lambda_n}{\mu_n} < 1,$$ and we denote by $p(q_N, \ldots, q_1)$ the equilibrium probability of being in state $(q_N, \ldots, q_1)$. To ease notation, let us introduce $\lambda := \sum_{n=1}^{N} \lambda_n$. We propose to use the matrix-analytic method for M/G/1 structured systems to exactly and recursively calculate the joint queue length probabilities $p(q_N, \ldots, q_1)$, starting from $p(0, \ldots, 0) = 1 - \rho$. Key to this approach are first-passage probabilities that can be determined through one-step analysis. In fact, the first-passage probabilities are the elements of the auxiliary matrix G of the matrix-analytic method. However, rather than determining the infinite matrix G using matrix equations, we recursively determine its elements using scalar equations, derived by exploiting the skip-free property of this Markov process. To highlight the recursive nature of the method, we first treat the two- and three-class systems. Two-class system The transition rate diagram of the two-class system depicted in Fig. 1a shows that the two-class system is a QBD process with class-2 levels $q_2$ defined as the set of states with $q_2$ high-priority customers. To calculate the probabilities $p(q_2, q_1)$, we propose to exploit the M/G/1 structure of this Markov process, instead of its G/M/1 structure as done by [19]. Instrumental in the calculation of $p(q_2, q_1)$ are the first-passage probabilities $g_{2;i_1}$, instead of the elements of the rate matrix as in [19]. The first-passage probability $g_{2;i_1}$ is defined as the probability that, starting at class-2 level $q_2 > 0$ in state $(q_2, q_1)$, the first passage to class-2 level $q_2 - 1$ happens in state $(q_2 - 1, q_1 + i_1)$. Note that $g_{2;i_1}$ does not depend on the starting state $(q_2, q_1)$ and can be interpreted as the probability that $i_1$ class-1 customers arrive during a busy period of class-2 customers.
By one-step analysis we get, for $i_1 \ge 0$, $$g_{2;i_1} = \frac{\mu_2}{\lambda + \mu_2}\,\mathbf{1}\{i_1 = 0\} + \frac{\lambda_1}{\lambda + \mu_2}\, g_{2;i_1-1} + \frac{\lambda_2}{\lambda + \mu_2} \sum_{j_1=0}^{i_1} g_{2;j_1}\, g_{2;i_1-j_1}, \qquad (2)$$ where, by convention, $g_{2;i_1} = 0$ if $i_1 < 0$. (Fig. 1: Transition rate diagrams of the two-class system: (a) the full process; (b) the process embedded on class-2 level $q_2 = 0$.) So $g_{2;i_1}$ can be recursively calculated, starting from $g_{2;0}$, which follows from (2) as the minimal non-negative root of $$\lambda_2\, g_{2;0}^2 - (\lambda + \mu_2)\, g_{2;0} + \mu_2 = 0.$$ To calculate $p(q_2, q_1)$, we use the following equation for excursions starting at class-2 level $q_2$ to levels higher than $q_2$, ending at first return to class-2 level $q_2$. The number of excursions per time unit that end in state $(q_2, q_1)$ is equal to $p(q_2 + 1, q_1)\mu_2$, but this number is also equal to the number of excursions starting from class-2 level $q_2$ per time unit that end in state $(q_2, q_1)$. The number of excursions per time unit that start in state $(q_2, q_1 - i_1)$ with a class-2 arrival and end in state $(q_2, q_1)$ is $p(q_2, q_1 - i_1)\lambda_2\, g_{2;i_1}$. Hence, $$p(q_2 + 1, q_1)\,\mu_2 = \sum_{i_1=0}^{q_1} p(q_2, q_1 - i_1)\,\lambda_2\, g_{2;i_1}, \qquad (5)$$ from which all probabilities can be recursively calculated, once the boundary probabilities $p(0, q_1)$ are known. The probabilities $p(0, q_1)$ can be determined by considering the Markov process embedded on class-2 level 0. The transition rate diagram of the embedded Markov process is shown in Fig. 1b. Note that the embedded Markov process has an M/G/1 structure with class-1 levels $q_1$ defined as the set of states with $q_1$ class-1 customers (and no class-2 customers). To formulate the analog of (5), we introduce $f_{2;i_1}$, which is the probability that, starting in state $(1, q_1)$, the first passage to class-1 levels less than or equal to $q_1 + i_1$ happens in state $(0, q_1 + i_1)$. In this case, this first-passage probability is equal to the probability that during a busy period of class-2 customers, at least $i_1$ class-1 customers arrive. So $$f_{2;i_1} = \sum_{j_1=i_1}^{\infty} g_{2;j_1}.$$ Then, similar to (5), we have $$p(0, q_1 + 1)\,\mu_1 = p(0, q_1)\,\lambda_1 + \sum_{i_1=0}^{q_1} p(0, q_1 - i_1)\,\lambda_2\,\bigl(f_{2;i_1} - g_{2;i_1}\bigr), \qquad (7)$$ which can be used to calculate all boundary probabilities, starting from the probability of an empty system $p(0, 0) = 1 - \rho$. Three-class system The transition rate diagram of the three-class system is shown in Fig. 2a and b. This system can be described by a QBD process with class-3 levels $q_3$ defined as the set of states with $q_3$ high-priority customers. Let $g_{3;i_2,i_1}$ be the probability that, starting at class-3 level $q_3 > 0$ in state $(q_3, q_2, q_1)$, the first passage to class-3 level $q_3 - 1$ happens in state $(q_3 - 1, q_2 + i_2, q_1 + i_1)$. Note that $g_{3;i_2,i_1}$ can be interpreted as the probability that $i_2$ class-2 and $i_1$ class-1 customers arrive during a busy period of high-priority class-3 customers. By one-step analysis, $$g_{3;i_2,i_1} = \frac{\mu_3}{\lambda + \mu_3}\,\mathbf{1}\{i_2 = i_1 = 0\} + \frac{\lambda_1}{\lambda + \mu_3}\, g_{3;i_2,i_1-1} + \frac{\lambda_2}{\lambda + \mu_3}\, g_{3;i_2-1,i_1} + \frac{\lambda_3}{\lambda + \mu_3} \sum_{j_2=0}^{i_2}\sum_{j_1=0}^{i_1} g_{3;j_2,j_1}\, g_{3;i_2-j_2,i_1-j_1}, \qquad (9)$$ where, by convention, $g_{3;i_2,i_1} = 0$ if $i_2 < 0$ or $i_1 < 0$. From (9) the probabilities $g_{3;i_2,i_1}$ can be recursively calculated, starting from $g_{3;0,0}$, which follows from (8) as the minimal non-negative root of $$\lambda_3\, g_{3;0,0}^2 - (\lambda + \mu_3)\, g_{3;0,0} + \mu_3 = 0.$$ Similar to (5), we have $$p(q_3 + 1, q_2, q_1)\,\mu_3 = \sum_{i_2=0}^{q_2}\sum_{i_1=0}^{q_1} p(q_3, q_2 - i_2, q_1 - i_1)\,\lambda_3\, g_{3;i_2,i_1},$$ which can be utilized to calculate all probabilities, once the boundary probabilities $p(0, q_2, q_1)$ are known. To determine $p(0, q_2, q_1)$ we proceed by considering the Markov process embedded on class-3 level 0, which is of the M/G/1 type, with class-2 levels $q_2$ defined as the set of states with $q_2$ class-2 customers (and no class-3 customers). Its transition rate diagram is depicted in Fig. 3a. The first-passage probabilities $g_{2;i_1}$ for the embedded Markov process are defined as the probability that, when starting at class-2 level $q_2 > 0$ in state $(0, q_2, q_1)$, the first passage to class-2 level $q_2 - 1$ happens in state $(0, q_2 - 1, q_1 + i_1)$. Further, the first-passage probabilities $g_{3;i_1}$ are defined as the probability that, when starting in state $(1, q_2 - 1, q_1)$, the first passage to class-2 level $q_2 - 1$ happens in state $(0, q_2 - 1, q_1 + i_1)$. Observe that $g_{k;i_1}$ is the probability that $i_1$ class-1 customers arrive during a busy period of higher priority (class-2 and class-3) customers that starts with the arrival of a class-k customer, for k = 2, 3.
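To make the two-class recursion concrete, the sketch below implements the reconstructed equations (2)-(7). It is a minimal illustration, not the authors' code (their experiments used Java); the function name and the truncation strategy are ours, and the tail probabilities f use the identity $f_{2;i} = f_{2;i-1} - g_{2;i-1}$ with $f_{2;0} = 1$.

```python
import numpy as np

def two_class_joint(lam1, lam2, mu1, mu2, Q1, Q2):
    """Joint queue length probabilities p[q2, q1] of the two-class
    preemptive priority M/M/1 queue via the recursions (2)-(7);
    Q1, Q2 are output truncation bounds only."""
    lam = lam1 + lam2
    rho = lam1 / mu1 + lam2 / mu2
    assert rho < 1, "stability requires rho < 1"

    # g[i] = P(i class-1 arrivals during a class-2 busy period), eq. (2)
    g = np.zeros(Q1 + 2)
    g[0] = (lam + mu2 - np.sqrt((lam + mu2) ** 2 - 4 * lam2 * mu2)) / (2 * lam2)
    for i in range(1, Q1 + 2):
        conv = sum(g[j] * g[i - j] for j in range(1, i))
        # solve (2) for g[i]; the j=0 and j=i convolution terms contain g[i]
        g[i] = (lam1 * g[i - 1] + lam2 * conv) / (lam + mu2 - 2 * lam2 * g[0])

    # f[i] = P(at least i class-1 arrivals) = sum_{j >= i} g[j]
    f = np.zeros(Q1 + 2)
    f[0] = 1.0
    for i in range(1, Q1 + 2):
        f[i] = f[i - 1] - g[i - 1]

    p = np.zeros((Q2 + 1, Q1 + 1))
    p[0, 0] = 1 - rho
    # boundary probabilities p(0, q1) from eq. (7)
    for q1 in range(Q1):
        up = lam1 * p[0, q1] + lam2 * sum(
            p[0, q1 - i] * (f[i] - g[i]) for i in range(q1 + 1))
        p[0, q1 + 1] = up / mu1
    # interior probabilities from eq. (5)
    for q2 in range(Q2):
        for q1 in range(Q1 + 1):
            p[q2 + 1, q1] = (lam2 / mu2) * sum(
                p[q2, q1 - i] * g[i] for i in range(q1 + 1))
    return p
```

A quick sanity check on any such implementation: the class-2 marginal must be geometric with ratio $\lambda_2/\mu_2$, since under preemptive priority the high-priority class sees a plain M/M/1 queue.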
Notice the difference between the first-passage probabilities $g_{3;i_1}$ and $g_{3;i_2,i_1}$. The number of indices after the semicolon in the subscript is related to the level on which the Markov process is embedded, as illustrated in Fig. 3. By one-step analysis we get, for k = 2, 3, $$g_{k;i_1} = \frac{\mu_k}{\lambda + \mu_k}\,\mathbf{1}\{i_1 = 0\} + \frac{\lambda_1}{\lambda + \mu_k}\, g_{k;i_1-1} + \frac{\lambda_2}{\lambda + \mu_k} \sum_{j_1=0}^{i_1} g_{2;j_1}\, g_{k;i_1-j_1} + \frac{\lambda_3}{\lambda + \mu_k} \sum_{j_1=0}^{i_1} g_{3;j_1}\, g_{k;i_1-j_1}. \qquad (13)$$ From equations (13), both $g_{2;i_1}$ and $g_{3;i_1}$ can be recursively calculated, with $g_{2;0}$ and $g_{3;0}$ being the minimal non-negative solution of (12), the coupled system obtained from (13) with $i_1 = 0$. To solve (12) we introduce $\widetilde{B}_k$, the Laplace-Stieltjes transform (LST) of the service time of a class-k customer, and $\widetilde{BP}_k$, the LST of a high-priority (class-2 and class-3) busy period initiated by a class-k customer. Then $g_{2;0}$ and $g_{3;0}$ can be calculated from (see [18, Sect. 5.8]) $$g_{k;0} = \widetilde{BP}_k(\lambda_1), \qquad \widetilde{BP}_k(s) = \widetilde{B}_k\bigl(s + \lambda_2 + \lambda_3 - (\lambda_2 + \lambda_3)\,\widetilde{BP}_{2,3}(s)\bigr),$$ where $\widetilde{BP}_{2,3}(s)$ is the LST of a high-priority busy period initiated by a class-2 or a class-3 customer, which is equal to the LST of the busy period in an $M/H_2/1$ queue with class-2,3 customers. To formulate the analog of (11) for $p(0, q_2, q_1)$, we introduce the first-passage probabilities $f_{3;i_2,i_1}$, defined as the probability that, when starting in state $(1, q_2, q_1)$, the first passage to class-2 levels less than or equal to $q_2 + i_2$ happens in state $(0, q_2 + i_2, q_1 + i_1)$. The probability $f_{3;i_2,i_1}$ can be interpreted as the probability that at the end of a busy period of class-3 customers, there have been at least $i_2$ class-2 arrivals, and then, when the server brings this number down to $i_2$, the total number of class-1 arrivals (from the start of the busy period of class-3 customers) has been $i_1$. Hence, we can express $f_{3;i_2,i_1}$ as an infinite sum over the possible numbers of class-2 and class-1 arrivals during the class-3 busy period (equation (16)). Before elaborating on the computation of $f_{3;i_2,i_1}$, we proceed to derive an equation for $p(0, q_2, q_1)$ by considering excursions to class-2 levels higher than $q_2$ that start at class-2 level $q_2$ or lower, and end at first return to class-2 level $q_2$ in state $(0, q_2, q_1)$. The number of excursions per time unit that end in state $(0, q_2, q_1)$ is equal to $p(0, q_2 + 1, q_1)\mu_2$. This number is also equal to the number of excursions starting from class-2 level $q_2$ or lower per time unit that end in state $(0, q_2, q_1)$. A fraction $g_{2;i_1}$ of the excursions starting in $(0, q_2, q_1 - i_1)$ by a class-2 arrival end in $(0, q_2, q_1)$. Excursions to class-2 levels higher than $q_2$ starting in state $(0, q_2 - i_2, q_1 - i_1)$ by a class-3 arrival reach, with probability $f_{3;i_2,i_1} - g_{3;i_2,i_1}$, class-2 level $q_2$ in state $(0, q_2, q_1)$ at first return to class-2 level $q_2$. Note that $g_{3;i_2,i_1}$ needs to be subtracted, since with probability $g_{3;i_2,i_1}$ class-2 level $q_2$ is reached but not yet exceeded. Hence, $$p(0, q_2 + 1, q_1)\,\mu_2 = \sum_{i_1=0}^{q_1} p(0, q_2, q_1 - i_1)\,\lambda_2\, g_{2;i_1} + \sum_{i_2=0}^{q_2}\sum_{i_1=0}^{q_1} p(0, q_2 - i_2, q_1 - i_1)\,\lambda_3\,\bigl(f_{3;i_2,i_1} - g_{3;i_2,i_1}\bigr),$$ from which $p(0, q_2, q_1)$ can be recursively calculated, once the boundary probabilities $p(0, 0, q_1)$ are known. To determine $p(0, 0, q_1)$ we consider the Markov process embedded on the axis $q_3 = q_2 = 0$, the transition rate diagram of which is depicted in Fig. 3b, with class-1 levels $q_1$ defined as the set of states with $q_1$ class-1 customers (and no class-2 or class-3 customers). To finally formulate the equations for $p(0, 0, q_1)$ we define $f_{k;i_1}$ as the probability that, when starting in state $(0, 1, q_1)$ if k = 2 and in state $(1, 0, q_1)$ if k = 3, the first passage to class-1 levels less than or equal to $q_1 + i_1$ happens in state $(0, 0, q_1 + i_1)$. Similar to the two-class system, this first-passage probability is equal to the probability that at least $i_1$ class-1 customers arrive during a busy period of class-2,3 customers initiated by a class-k customer.
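The coupled boundary system (12) can also be solved numerically by successive substitution, iterating the $i_1 = 0$ case of (13) from a starting point of zero. The sketch below is ours, written under the reconstruction above rather than taken from the paper.

```python
def g_zero(lam1, lam2, lam3, mu2, mu3, tol=1e-14):
    """Minimal non-negative solution (g2_0, g3_0) of the i1 = 0 case of (13):
    (lam + mu_k) g_k = mu_k + (lam2*g2 + lam3*g3) * g_k,  for k = 2, 3.
    Successive substitution from (0, 0) converges upward to it."""
    lam = lam1 + lam2 + lam3
    g2, g3 = 0.0, 0.0
    while True:
        g2_new = mu2 / (lam + mu2 - lam2 * g2 - lam3 * g3)
        g3_new = mu3 / (lam + mu3 - lam2 * g2 - lam3 * g3)
        if abs(g2_new - g2) + abs(g3_new - g3) < tol:
            return g2_new, g3_new
        g2, g3 = g2_new, g3_new
```

Probabilistically, each iterate is the probability of the corresponding event restricted to busy periods of bounded branching depth, so the sequence increases monotonically to the minimal non-negative solution.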
So, for k = 2, 3, $$f_{k;i_1} = \sum_{j_1=i_1}^{\infty} g_{k;j_1}.$$ Then, similar to (7), we have $$p(0, 0, q_1 + 1)\,\mu_1 = p(0, 0, q_1)\,\lambda_1 + \sum_{k=2}^{3} \sum_{i_1=0}^{q_1} p(0, 0, q_1 - i_1)\,\lambda_k\,\bigl(f_{k;i_1} - g_{k;i_1}\bigr).$$ This equation can be used to recursively calculate $p(0, 0, q_1)$, with initially $p(0, 0, 0) = 1 - \rho$. We now turn to the calculation of the first-passage probabilities $f_{3;i_2,i_1}$. To avoid evaluation of the infinite sums in (16), we again employ one-step analysis, yielding for $i_2 > 0$ and $i_1 \ge 0$, $$f_{3;i_2,i_1} = \frac{\lambda_1}{\lambda + \mu_3}\, f_{3;i_2,i_1-1} + \frac{\lambda_2}{\lambda + \mu_3}\, f_{3;i_2-1,i_1} + \frac{\lambda_3}{\lambda + \mu_3} \sum_{j_2=0}^{i_2-1}\sum_{j_1=0}^{i_1} g_{3;j_2,j_1}\, f_{3;i_2-j_2,i_1-j_1} + \frac{\lambda_3}{\lambda + \mu_3} \sum_{j_1=0}^{i_1} f_{3;i_2,j_1}\, g_{3;i_1-j_1}, \qquad (20)$$ where, by convention, $f_{3;i_2,i_1} = 0$ if $i_1 < 0$. The first-passage probabilities $f_{3;i_2,i_1}$ can be recursively calculated using the equations (20), starting with $f_{3;0,i_1} = g_{3;i_1}$. The last two terms in (20) need some explanation: together they give the probability of first passage to class-2 levels less than or equal to $q_2 + i_2$ in state $(0, q_2 + i_2, q_1 + i_1)$ when starting an excursion in state $(2, q_2, q_1)$, so with two instead of one class-3 customer. Now imagine that the second class-3 customer enters service when the busy period generated by the first class-3 customer finishes. The first of these terms corresponds to the event that the number of class-2 arrivals during the busy period generated by the first class-3 customer is $j_2 < i_2$, so that the number of class-2 arrivals during the second busy period should be at least $i_2 - j_2$. The second term corresponds to the event that the number of class-2 arrivals during the first busy period is $j_2 \ge i_2$. The surplus number $j_2 - i_2$ of class-2 customers should be served after the busy period generated by the second class-3 customer. The duration of the excursion is not altered if these class-2 customers enter service (as well as any higher priority customer arriving during their service) before the second class-3 customer. Then $f_{3;i_2,j_1}$ is the probability that the number of class-1 arrivals is $j_1$ when the last surplus class-2 customer completes service, and thus the number of class-1 arrivals during the busy period generated by the second class-3 customer should be exactly equal to $i_1 - j_1$. Note that the busy period generated by this second class-3 customer includes class-3 and class-2 customers, since each arriving class-2 customer is surplus. So the probability of exactly $i_1 - j_1$ class-1 arrivals is $g_{3;i_1-j_1}$. N-class system We now extend the approach for obtaining the stationary distribution of the three-class system to an N-class system. Since we are dealing with N classes, we need some accommodating notation. We introduce $i^{(n)} = (i_n, i_{n-1}, \ldots, i_1)$ and $q^{(n)} = (q_n, q_{n-1}, \ldots, q_1)$; $j^{(n)}$ is a vector index of length n, $0^{(n)}$ is the zero vector of length n, and $e^{(n)}_k$ denotes a vector of zeros of length n with a 1 at position $n + 1 - k$. Class-n level $q_n$ denotes the set of states with $q_n$ class-n customers and no customers of higher classes. We first describe how to obtain the first-passage probabilities, followed by the computation of the equilibrium probabilities. Note that $f_{k;0,i^{(n-1)}} = g_{k;i^{(n-1)}}$. By one-step analysis we get, for $n = N-1, N-2, \ldots, 1$ and $k \ge n+1$, $$g_{k;i^{(n)}} = \frac{\mu_k}{\lambda + \mu_k}\,\mathbf{1}\{i^{(n)} = 0^{(n)}\} + \sum_{m=1}^{n} \frac{\lambda_m}{\lambda + \mu_k}\, g_{k;i^{(n)} - e^{(n)}_m} + \sum_{m=n+1}^{N} \frac{\lambda_m}{\lambda + \mu_k} \sum_{0 \le j^{(n)} \le i^{(n)}} g_{m;j^{(n)}}\, g_{k;i^{(n)} - j^{(n)}}, \qquad (22)$$ where $g_{k;i^{(n)}} = 0$ if any component of $i^{(n)}$ is negative. From (22), all $g_{k;i^{(n)}}$ with $k \ge n+1$ and n fixed can be calculated, with $g_{k;0^{(n)}}$ computed as $$g_{k;0^{(n)}} = \widetilde{BP}_k\bigl(\lambda_1 + \cdots + \lambda_n\bigr), \qquad \widetilde{BP}_k(s) = \widetilde{B}_k\Bigl(s + \sum_{m=n+1}^{N} \lambda_m\bigl(1 - \widetilde{BP}_{n+1,\ldots,N}(s)\bigr)\Bigr),$$ where $\widetilde{BP}_{n+1,\ldots,N}(s)$ is the LST of a high-priority (class-n+1 and higher) busy period, which is equal to the LST of the busy period in an $M/H_{N-n}/1$ queue with class-$n+1, \ldots, N$ customers, $$\widetilde{BP}_{n+1,\ldots,N}(s) = \sum_{m=n+1}^{N} \frac{\lambda_m}{\lambda_{n+1} + \cdots + \lambda_N}\,\widetilde{BP}_m(s).$$ The first-passage probabilities $f_{k;i^{(n)}}$ with $n = N-1, \ldots$
, 2 and $k \ge n+1$ follow from one-step analysis similar to (20), with $i_n > 0$ (equation (25)). The last two terms in (25) describe the probability of first passage to class-n levels less than or equal to $q_n + i_n$ in state $(0^{(N-n)}, q^{(n)} + i^{(n)})$ when starting an excursion in state $(0^{(N-n)}, q^{(n)}) + e^{(N)}_m$, so with one class-k and one class-m customer. Note that we act as if the class-k customer enters service when the high-priority busy period generated by the class-m customer finishes. This is feasible, since the order in which the customers are served does not alter the duration of a high-priority busy period, cf. (20). The remaining first-passage probabilities for the case n = 1 are computed as, for $k \ge 2$, $$f_{k;i_1} = \sum_{j_1=i_1}^{\infty} g_{k;j_1} = f_{k;i_1-1} - g_{k;i_1-1}, \qquad f_{k;0} = 1. \qquad (26)$$ The equilibrium probabilities of the N-class system follow again by counting excursions, as done for the two- and three-class systems. The number of excursions per time unit that end in state $(0^{(N-n)}, q^{(n)})$ is equal to $p(0^{(N-n)}, q_n + 1, q^{(n-1)})\mu_n$. This number is also equal to the number of excursions starting from class-n level $q_n$ or lower per time unit that end in state $(0^{(N-n)}, q^{(n)})$. A fraction $g_{n;i^{(n-1)}}$ of the excursions starting in $(0^{(N-n)}, q_n, q^{(n-1)} - i^{(n-1)})$ by a class-n arrival end in $(0^{(N-n)}, q^{(n)})$. Excursions to class-n levels higher than $q_n$ starting in state $(0^{(N-n)}, q^{(n)} - i^{(n)})$ by a class-m arrival, $m = n+1, \ldots, N$, reach, with probability $f_{m;i^{(n)}} - g_{m;i^{(n)}}$, level $q_n$ in state $(0^{(N-n)}, q^{(n)})$ at first return to class-n level $q_n$. We have, for $n = 1, 2, \ldots, N$, $$p(0^{(N-n)}, q_n + 1, q^{(n-1)})\,\mu_n = \sum_{m=n+1}^{N} \lambda_m \sum_{0 \le i^{(n)} \le q^{(n)}} p(0^{(N-n)}, q^{(n)} - i^{(n)})\,\bigl(f_{m;i^{(n)}} - g_{m;i^{(n)}}\bigr) + \lambda_n \sum_{0 \le i^{(n-1)} \le q^{(n-1)}} p(0^{(N-n)}, q_n, q^{(n-1)} - i^{(n-1)})\, g_{n;i^{(n-1)}}, \qquad (27)$$ which can be solved recursively, starting from $p(0^{(N)}) = 1 - \rho$. Note that for n = 1, the second term on the right-hand side of (27) becomes $p(0^{(N-1)}, q_1)\lambda_1$, and for n = N, the first term on the right-hand side reduces to 0. Remark 1 The above algorithm to determine the equilibrium probabilities involves subtractions in some equations, see, for example, (26), which may possibly lead to loss of significant digits and instability. However, in all experiments we observed numerically stable results. Application in spare parts logistics Our interest in the joint queue length distribution arose from a spare parts supply problem for repairable parts sharing the same repair shop. For this problem, we apply our method, based on the matrix-analytic approach, to demonstrate the influence of assigning repair priorities on the performance of the system. There are M identical machines and each machine contains three different subsystems, numbered 1, 2, 3. Each subsystem n consists of $Z_n$ identical parts in parallel. We refer to the parts of subsystem n as parts of Stock-Keeping Unit n (SKU n). For each subsystem, $k_n < Z_n$ parts have to function, i.e., we have redundancy, and the redundant parts are in "cold standby." This is called a "$k_n$-out-of-$Z_n$" setup. We have $k_n$ functioning parts per subsystem and only these parts are subject to failure. Other typical systems with this structure can be found in [24]. A machine is only working when all three subsystems are working. When one of the functioning parts fails, a redundant part takes over its function and a service engineer takes a new part from a stock of parts and replaces the failed one. The failed part is then sent to a single-server repair facility. Part and repair requests are served on a first-come-first-served basis. The repair time for a part of SKU n is exponentially distributed with rate $\mu_n$; the delivery and replacement times are small and can be neglected.
We assume that failures of parts of SKU n occur according to a Poisson process with rate $\lambda_n$. This approximation, which is the only one needed, is valid when M, the total number of machines in the system, is large and when the fraction of working machines is high. After repair the broken parts are assumed to be as good as new, and they are put back into stock. The stock of SKU n at time instant t = 0 is denoted by $S_n$. We call the amount $S_n$ the basestock level for SKU n parts. The system is shown in Fig. 4. Let us define the system availability as the average fraction of working machines, $A = \mathbb{E}[W]/M$, where W denotes the number of working machines. The number of backorders of SKU n parts is given by $(q_n - S_n)^+$, where $(x)^+ = \max(0, x)$ and $q_n$ is the number of SKU n parts in repair. Define $E_n$ as the number of "empty" spots in a given subsystem n of any of the M machines. Then, by conditioning on the number of parts of SKU n in repair, we obtain, with $s \le Z_n$, $$P(E_n = s \mid q_n) = \binom{Z_n}{s}\binom{M Z_n - Z_n}{q_n - S_n - s} \Big/ \binom{M Z_n}{q_n - S_n}, \qquad q_n - S_n \ge s, \qquad (29)$$ i.e., the backorders are spread over the part positions of the M machines according to a hypergeometric distribution. In terms of the joint queue length distribution, the system availability can then be written as a weighted sum of these conditional probabilities over all states (expression (30)). The expression (30) determines the system availability much better than other approximations proposed in the literature; for example, the system availability defined in [24] only uses information on the mean number of backorders. The matrix-analytic method makes it possible to use the detailed distribution of the number of parts in repair. To demonstrate the approach we execute a set of experiments with the following parameters: M = 100, and $Z_n = 4$, $k_n = 2$, $\lambda_n = n/300$, and $\mu_n = (4-n)/\beta$ for n = 1, 2, 3, where $\beta$ is chosen such that $\sum_{n=1}^{3} \lambda_n/\mu_n = \rho$. We wish to compute the joint queue length distribution such that $\sum_{q_3,q_2,q_1} p(q_3, q_2, q_1) > 1 - \epsilon$, with $\epsilon$ a small positive number. We do this by computing the equilibrium probabilities of the states in a discrete three-dimensional cuboid C with states $\{0, \ldots, c_3\} \times \{0, \ldots, c_2\} \times \{0, \ldots, c_1\}$. For the sake of clarity, we briefly introduce the marginal queue length distribution of class-n customers as $p_n(\cdot)$. We specify the construction of C in more detail. The bound $c_3$ is computed from the M/M/1 system with only class-3 customers, such that $\sum_{q_3=0}^{c_3} p_3(q_3) > 1 - \epsilon$, which leads to $c_3 = \lceil \log \epsilon / \log(\lambda_3/\mu_3) \rceil - 1$. The bound $c_2$ is obtained through a priority system with class-3,2 customers such that the sum of the marginal probabilities for class-2 customers is very close to 1, that is, $\sum_{q_2=0}^{c_2} p_2(q_2) > 1 - \epsilon$. Conveniently, the marginal queue length distribution $p_2(\cdot)$ can be derived directly from the joint equilibrium probabilities $p(0, \cdot)$ of the priority queueing system with class-3,2 customers via the relation $\lambda_2 p_2(q_2 - 1) = \mu_2 p(0, q_2)$. Thus, this allows us to estimate the bound $c_2$ without having to compute all joint equilibrium probabilities $p(q_3, q_2)$. The final bound $c_1$ can be found iteratively, until $\sum_{q_3,q_2,q_1} p(q_3, q_2, q_1) > 1 - \epsilon$, or using the same method as for the bound $c_2$. Naturally, this method of constructing C extends to an arbitrary number of classes. In Table 1 we list the system availability according to (30) for different utilization rates of the repair shop and different priority assignments. The basestock levels $S_n$ depend on the mean queue lengths, i.e., we set $S_n = \lceil \mathbb{E}[Q_n] \rceil$, n = 1, 2, 3, where $Q_n$ is the queue length of SKU n parts. The algorithm for the 3-class system was executed using Java 8.0 on a PC with an Intel Core i7-3770 CPU and 16 GB RAM.
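Two of the computational ingredients just described, the truncation bound $c_3$ and the empty-spot distribution (29), are small enough to sketch directly. The following is an illustration under the reconstruction above, with our own function names; scipy's hypergeom is used for (29).

```python
import math
from scipy.stats import hypergeom

def bound_c3(lam3, mu3, eps=1e-6):
    """Smallest c3 with sum_{q=0}^{c3} (1-r) r^q > 1 - eps, r = lam3/mu3,
    i.e. c3 = ceil(log eps / log r) - 1 for the class-3-only M/M/1 queue."""
    r = lam3 / mu3
    return math.ceil(math.log(eps) / math.log(r)) - 1

def empty_spot_pmf(s, q_n, S_n, Z_n, M):
    """P(E_n = s | q_n) from (29): (q_n - S_n)+ backorders spread at random
    over the M*Z_n part positions; a given subsystem holds Z_n of them."""
    backorders = max(q_n - S_n, 0)
    # hypergeom(population, success states, draws).pmf(k)
    return hypergeom(M * Z_n, Z_n, backorders).pmf(s)
```

With the experiment's parameters ($\lambda_3 = 3/300$ and $\mu_3 = 1/\beta$), bound_c3 returns the smallest class-3 level whose geometric tail mass is below $\epsilon$, which is exactly how the cuboid C is trimmed in the highest dimension.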
The computation times mentioned in Table 1 depend on the number of states with significant probability mass, i.e., on the load of the system, the priority assignment and, naturally, the value of the parameter $\epsilon$. For these experiments, we have selected $\epsilon = 10^{-6}$. Table 1 shows that we have a fast numerical method to compute the availability for different priority assignments and particular choices of the basestock levels. This method can easily be exploited in a procedure to optimize the priority assignment and basestock levels, for example, in order to maximize system availability under a given budget for spare parts (cf. [1], which considers a slightly different setting with equal repair rates for all SKUs). (Table 1 caption: We use $\epsilon = 10^{-6}$. The variable $r_n$ indicates the priority of SKU n parts, either high (H), medium (M), or low (L).) Conclusion and extensions We have developed a method for the exact determination of the joint equilibrium queue length distribution of the M/M/1 preemptive priority system with N customer classes and class-dependent service rates. This method is based on the matrix-analytic method, as the embedded Markov processes are of the M/G/1 type. Key to this approach are first-passage probabilities, computed by one-step analysis. We applied the exact solution method to a spare parts logistics problem where repairable parts share the same repair shop, and showed that this method produces accurate results in the order of seconds. We next sketch how the method can be extended to an M/M/1 non-preemptive priority system. In the non-preemptive case one identifies the customer currently in service by adding another variable to the state description. For the two-class system, the state description becomes $(q_2, q_1, s)$, where $s \in \{1, 2\}$ indicates the class of the customer in service and $s = 0$ indicates no customer in service. By defining class-2 level $q_2$ as the set of states with $q_2$ class-2 customers, one can again count the number of excursions per time unit that start from class-2 level $q_2$ and reach levels higher than $q_2$, to finally end at state $(q_2, q_1, 2)$. The states with a class-1 customer in service can only be reached from the states $(q_2, q_1, 1)$ or $(0, 0, 0)$, and thus the equilibrium probabilities of these states can be recursively determined for $q_2 > 0$ immediately from the boundary probabilities of class-2 level 0, see Fig. 5. One finds the equilibrium probabilities of class-2 level 0, starting from $p(0, 0, 0) = 1 - \rho$, by embedding the Markov process on class-2 level 0 and again counting excursions. Notice that the approach is very similar to the one for the preemptive case and only requires the computation of the equilibrium probabilities of the states $(q_2, q_1, 1)$ as an additional step.
Selenium and Other Trace Element Mobility in Waste Products and Weathered Sediments at Parys Mountain Copper Mine, Anglesey, UK: The Parys Mountain copper mining district (Anglesey, North Wales) hosts exposed pyritic bedrock, solid mine waste spoil heaps, and acid drainage (ochre sediment) deposits. Both natural and waste deposits show elevated trace element concentrations, including selenium (Se), at abundances of both economic and environmental significance. Elevated concentrations of semi-metals such as Se in waste smelts highlight the potential for economic reserves in this and similar base metal mining sites. Selenium is sourced from the pyritic bedrock and concentrations are retained in red weathering smelt soils, but lost in bedrock-weathered soils and clays. Selenium correlates with Te, Au, Bi, Cd, Hg, Pb, S, and Sb across bedrock and weathered deposits. Man-made mine waste deposits show enrichment of As, Bi, Cu, Sb, and Te, with Fe oxide-rich smelt materials containing high Pb (up to 1.5 wt %) and Au (up to 1.2 ppm). The trace elements As, Co, Cu, and Pb are retained from bedrock to all sediments, including high Cu content in Fe oxide-rich ochre sediments. The high abundance and mobility of trace elements in sediments and waters should be considered both as potential pollutants to the area and as a source of economic reserves of previously extracted and new strategic commodities. Introduction Historic mining sites and associated tailings can be a significant source of polymetallic contamination, resulting in potentially toxic levels of metals posing a risk to surface and groundwater systems [1]. Sulphide-mineral oxidation within current and former mine sites is a significant environmental threat faced by the mining industry worldwide [2]. Elements such as Se, Te, As, Cd, Hg, and Pb are notable pollutants in waste sites due to their widespread and intense occurrence, and can be damaging to ecological and human health, affecting livestock, soils, and groundwater systems [3][4][5][6][7][8][9][10][11]. Conversely, the need for new low-carbon energies and technologies has led to an increase in demand for many critical raw materials such as Se [12][13][14][15][16]. Historic mining sites previously focused upon base or precious raw materials may now contain economic concentrations of these other trace elements. Therefore, trace element enrichment at historic mining sites poses both a potential environmental threat and an economic opportunity. Selenium is typically recovered in association with Cu mines as a by-product of anode slimes [15,16], and Cu mining districts may show Se enrichment. The objective of this study is to quantify the distribution of Se and other trace elements, and to identify the geochemical controls on their distribution. This includes (1) a geochemical characterisation of the bedrock, smelt materials, weathering soils, and acid mine drainage (ochre) sediments, (2) identification of trace element sources and of the controls on element liberation and fixation in the Parys Mountain Cu mining district, and (3) determination of any resource potential and environmental threats posed by trace elements at Parys Mountain. Due to its increasing economic and environmental importance [10,11,14-16], Se will be the focus of this study, as well as elements with chemical affinities to Se (e.g., Au, Cu, Pb, As, Te).
Understanding the distribution of Se and the geochemical processes that control trace element enrichment in natural and waste products of mine tailings is important for predicting and managing soil and water quality in such extensive areas. Study Area Parys Mountain (Mynydd Parys; National Grid Reference: SH441904) is a historic Cu mining site on the island of Anglesey, north-west Wales, UK (Figures 1-3; [17][18][19][20][21][22]). The site has a long history of mining, as far back as the early Bronze Age [17,18]. The western part of the mountain is currently (2017) under operation licence by Anglesey Mining Plc. Two prominent historic Cu mines, the Mona and Parys mines, are centred on Parys Mountain (Figures 2 and 3). Evidence suggests that Parys Mountain was subjected to fire-setting techniques during the Bronze Age in order to extract Cu for crude tools [17][18][19]. It has also been suggested that Romans mined Parys Mountain for Cu and Pb [19]. The site became one of the world's largest copper mines in the 1780s, with ore recovered from a number of open pits and underground workings up to 150 m below the surface. Between 1768 and 1904, an estimated 3.5 million tonnes of ore were removed to yield 130,000 tonnes of Cu metal [20]. From 1810, significant underground workings were opened, and by 1900, all significant mining activity had ceased [20]. Drilling programs took place from 1961 to 1990. An overall geological reserve of 4,114,000 remaining tonnes has been proposed, with estimated grades of 1.43% Cu, 1.2% Pb, 2.4% Zn, 20 g/t Ag, and 0.3 g/t Au [20]. The disposition of the main geological units (Figure 1) resulted from the folding of the initial sequence of mudstone overlain by rhyolite and shales into a large anticline-syncline structure with an east-west strike and a dip to the north, with the north limb overturned to the south. Mineralisation at Parys Mountain comprises Ordovician volcanogenic massive sulphide (VMS) deposits of Kuroko type [21][22][23][24][25]. The country rock is made up of folded Ordovician-Silurian sandstones and mudstones, rhyolitic lavas and pyroclastic deposits, mafic and felsic intrusions, and minor basaltic lavas [21,25]. Quartz veins host abundant pyrite (Figure 4), associated with Ordovician-Silurian sea floor hydrothermal activity [21][22][23][24]. Veins rich in Cu were remobilised during subsequent tectonic activity [23][24][25]. As a consequence of the extensive mining history, the area hosts a number of natural and solid waste deposits enriched in trace elements. Natural deposits include exposed bedrock, soils, and streams, while waste deposits include smelt soils and muds, mine drainage systems, and associated Fe(III) (oxy)hydroxide (ochre) stream sediment deposits [23,24,26-31]. Previous studies at Parys Mountain have generally focused on the uptake, retention, and environmental impact associated with toxic metal contamination on bioaccumulation and plants [31][32][33][34][35], and on environmental contamination of muds and stream waters for a select suite of major and trace element concentrations [26-28,36]. Weathering of waste spoil heaps and bedrock, and subsequent oxidation of sulphides, has led to the development of acidic waters that drain into rivers, smaller connected streams, standing water bodies, and eventually Dulas Bay (Figure 1). Ochre sediment precipitation occurs in these streams and run-off systems (Figure 2), principally containing Fe(III) (oxy)hydroxides and enriched trace elements [37].
(Figure 4 caption: Element maps (LA-ICP-MS) of pyrite in bedrock (red box representing ~1 mm length and width), rich in associated major and trace elements, including Se, Fe, Au (all commonly in the core), and Hg (in the later rim). Enrichment depicted by colour scale, low blue to high red; semi-quantitative data in ppm.) Sampling Whole rock, sediment, and water samples were collected from Parys Mountain under the guidance and supervision of the volunteers of the Parys Underground Group. Targeted samples included: • Natural deposits: sulphidic bedrock (n = 4) and weathering soil (n = 8). Weathering soil formed from the natural weathering of bedrock deposits. • Waste deposits: red smelt muds and soils (n = 7) and ochre sediments (n = 8). Red muds and soils formed from weathering of waste material from sintering, refining, and beneficiation activities, while ochre sediments formed from precipitation of Fe(III) (oxy)hydroxides from mine drainage systems. • Underground pool water (n = 1) and surface stream water samples from mine drainage systems (n = 3) were also collected.
Scanning Electron Microscopy Whole rock and sediment samples were examined using an ISI ABT-55 scanning electron microscope (SEM) (Kevex, Thermo Fisher Scientific, Waltham, MA, USA) with a Link Analytical 10/55S EDAX (EDS) facility (Link An, High Wycombe, UK) for mineralogical determination. Laser Ablation Trace element analysis of polished blocks was performed using a New Wave UP213 nm laser ablation system (New Wave Research, Fremont, CA, USA) coupled to an Agilent 7900 inductively coupled plasma-mass spectrometer (ICP-MS) (Agilent Technologies, Tokyo, Japan). The laser beam was fired with a spot size of 100 µm moving in a straight line, a 10 Hz repetition rate, and a 50 µm·s−1 ablation speed with 1 J·cm−2 energy. Before ablation, a warm-up of 15 s was applied, with a 15 s delay between each ablation. Settings were optimised daily using NIST Glass 612 (NIST, Gaithersburg, MD, USA) to obtain the maximum sensitivity and to ensure low oxide formation. In order to remove possible interferences which could affect Se measurement, a reaction cell was used with hydrogen gas (optimised between 3.0 and 3.5 mL/min to decrease the Se background). MASS-1 Synthetic Polymetal Sulfide (USGS, Reston, VA, USA) was used to provide a semi-quantification by calculating the ratio of concentration (µg·g−1) to counts per second, and multiplying this ratio by the sample counts. Certified and informative values are available on request through the US Geological Survey website. Whole Rock and Sediment Geochemistry Whole rock and sediment samples were analysed by both inductively coupled plasma atomic emission spectroscopy (ICP-AES) and solution ICP-MS. Elements of interest include As, Au, Bi, Cd, Co, Cr, Cu, Fe, Hg, Mo, Ni, Pb, S, Sb, Se, and Te. Samples of ~30 g were individually milled and homogenised, and 0.5 g was digested with aqua regia in a graphite heating block. The residue was diluted with deionised water (18 MΩ cm), mixed, and analysed using a Varian 725 instrument at ALS Minerals (Loughrea; method ID: ME-MS41). Results were corrected for spectral inter-element interferences from the sample matrix, solvent medium, and plasma gas. The limits of detection/resolution for elements of interest are shown in the Supplementary Materials. Errors for whole rock ICP-MS analyses were calculated based on certified and achieved values for certified reference materials, shown in Table S3. Geological Certified Reference Materials (CRMs) utilised included MRGeo08 (mid-range multi-element CRM), GBM908-10 (base metal CRM), OGGeo08 (ore grade multi-element CRM), and GEOMS-03 (multi-element CRM). Results for CRM analysis were within the anticipated target range for each metal and standard. Duplicate analyses produced reported values within the acceptable range for laboratory duplicates, with an average relative percent difference of 4%.
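As an illustration of the MASS-1 single-point semi-quantification described in the Laser Ablation subsection above, the sketch below applies the stated ratio rule. The function name and the example numbers are ours, not values from the study.

```python
def semi_quantify(sample_cps, ref_conc_ug_per_g, ref_cps):
    """Single-point semi-quantification against a reference material:
    sensitivity = reference concentration / reference count rate;
    sample concentration = sensitivity * sample count rate."""
    return (ref_conc_ug_per_g / ref_cps) * sample_cps

# Hypothetical numbers: if MASS-1 gave 67 ug/g Se at 15,000 cps and a sample
# read 3,400 cps, the semi-quantified Se content would be ~15.2 ug/g.
print(semi_quantify(3400, 67.0, 15000))
```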
Water Chemistry Four water samples (three acid mine drainage streams and one underground mine pool) were collected for water chemistry analysis. Sample collection was performed using acid-cleaned plastic bottles, and samples were stored at 4 °C in the dark before analyses. The samples were centrifuged for 5 min at 4500 rpm. A 10 mL supernatant was collected and spiked with 100 µL HNO3 (70%, analytical reagent grade, Fisher Scientific, Hampton, NH, USA) and 100 µL H2O2 (32%, analytical reagent grade, Fisher Scientific) to reach a 1 wt % final concentration of H2O2 and HNO3 in the solution. All the samples were analysed in triplicate. Analysis was performed by ICP-MS (7900, Agilent Technologies, Santa Clara, CA, USA). Procedure parameters are presented in Table S4. Lens parameters were optimised daily using a solution of 1 µg·L−1 of gallium, yttrium, thallium, and cerium to ensure the best detection limit. Hydrogen (3.5 mL·min−1) or helium (4.3 mL·min−1) was used in the reaction cell to reduce the background and to remove possible interferences. Standard solutions were prepared at concentrations of 0, 0.05, 0.1, 1, 10, 50, and 100 µg·L−1 using multi-element standard solutions (VWR) and deionised water. Standards were used as external calibration for quantification. Germanium (20 µg·L−1) was added online as an internal standard during the entire analysis to correct for possible instrument drift or fluctuations in the plasma. The analysis for sample Ochre 4 was repeated with a dilution factor of two to ensure that the cobalt, cadmium, and nickel measurements fell within the calibration curve. As a quality control, the samples were spiked with a standard multi-element solution to reach an addition of 10 µg·L−1, and recoveries of between 90% and 110% were calculated. Water samples were also measured for acidity (pH) using an electrode pH meter. Sample Descriptions Massive sulphide bedrock deposits provide a source for Se, ore materials, and other trace elements in the region (Figure 4). In the ground immediately surrounding Parys Mountain, two waste materials were identified, widespread to a depth of 0.3 m: a smelt, identified as a red mud layer, and a weathered smelt, identified as a coarser red soil. The red mud and soil represent a processing residue from roasted ores, i.e., a smelted material (Figure 5). Partially lithified red mud smelts (RMS) represent solid raw waste material from sintering, refining, and beneficiation activities. The RMS is a heterogeneous fine-fraction residue and contains quartz, Ti- and Al-oxides, pyrite, hematite, goethite, and minor elemental S, baryte, galena, and sphalerite, identified by SEM. Deposits of RMS are typical of Al refineries (e.g., bauxite red muds) but have been known to show affinities for As, Cu, and S mining wastes [38][39][40][41]. Coarser red weathering smelt soils (RWSS) are also typical in Cu mining wasteland areas [39,42,43], produced by weathering of smelt and bedrock. Parys Mountain RWSS contain more Fe oxides and pyrite than the RMS. Quartz and sandy components are retained in both RMS and RWSS, often with an altered appearance and numerous Fe oxide and pyrite inclusions. Iron oxides and elemental S are produced from the smelting of pyrite. Pyrite often shows an altered, weathered texture, and/or a rim comprising S, Si, Fe, and Al (identified using SEM-EDS). Other minor components of bedrock, soils, and smelts include gypsum, limonite, jarosite, monazite, sericite, and chlorite.
Standing pools of mine water (produced during extraction activity) are evident at the surface of Parys Mountain and also underground. Yellow soils and clays are thicker and more widespread, formed from natural weathering of the bedrock, unrelated to sintering, refining, and beneficiation activities. Yellow soils and clays contain quartz, pyrite, gypsum, limonite, jarosite, and monazite (SEM-EDS analysis). Mine drainage from the former workings is deposited in nearby natural stream systems, with precipitation of Fe(III) (oxy)hydroxides in drainage channels, resulting in yellow-orange ochre sediment deposits. Four ochre-bearing streams were sampled on Parys Mountain and the surrounding areas for ochre sediment and stream water (where present): a dried ochre deposit (Ochre 1; ochre sediment only), two ochre-bearing streams on the slopes to the north of the former workings (Ochre 2 and Ochre 3), and an ochre-bearing stream on top of the former workings (Ochre 4) (Figure 2). Whole Rock and Sediment Geochemistry (ICP-AES and ICP-MS) All results are shown in Tables S1 and S2. Both natural and smelt deposits are enriched compared to world average values [44][45][46][47] (Table S1; Figure 6), typically by 2-3 orders of magnitude. Overall, there is a general decrease in average concentrations of Co, S, and Te from the bedrock source to the final weathered products (Figure 7). Concentrations of Cu, Fe, Mo, Ni, and TOC increase from source to ochre sediments, while As and Cd vary across samples (Figure 7). There are high average concentrations of As, Bi, Hg, Mo, Pb, Sb, Se, and Te in weathered smelt materials. Excluding TOC, which is inherently enriched in weathered products due to vegetation and biological processes, the highest enrichment factor for any element is Pb in weathered smelt, which is enriched by a factor of 82.1 compared to the bedrock concentrations. Lead is enriched in all weathered products compared to the bedrock. Mercury and Sb are also highly enriched in weathered smelt (by factors of 67.7 and 32.6, respectively) compared to the bedrock. Sulphidic bedrock and RWSS show high Te levels (0.7-1.6 ppm), significantly enriched compared to the world average continental crust and soils. Both Se and Te are depleted by the stage where materials are weathered to soils and ochre sediments, but a notable portion remains throughout weathering. In RWSS, Se is higher than in the bedrock, while much of the high Te concentration is also retained from the bedrock but is relatively depleted in the weathering products. A comparison of Se and Te (Figure 8a) indicates a positive correlation, with some exceptions (RMS). In ochre sediments, Se is more variable, but Te is generally low.
Sulphidic bedrock contains 0.3 ppm Au. Across all samples, Au shows a minor correlation with Se, Bi, Cd, and Hg (Figure 9). Correlation coefficients for all trace elements and the statistical significance of the Au correlation coefficients are shown in Table S5. There is also a strong negative correlation between S and Au (r = −0.82). Smelt samples show high Au contents (up to 1.2 ppm), as do natural yellow clays (0.4-0.5 ppm Au). Ochre sediments and yellow soils do not host Au above the level of detection. Given the presence of Au in whole rock, waste, and weathered samples, it is possible to identify potential Au pathfinder elements across Parys Mountain samples, as previous studies have shown that Se and Te can act as Au pathfinder elements [48][49][50][51][52][53]. Figure 9 shows that Se, Bi, Cd, and Hg have a positive correlation with Au, and can act as pathfinder elements for Au in natural and waste soils and clays at Parys Mountain. Highly sulphidic (up to 18.6%) and Fe-rich (14.7%) bedrock deposits are enriched in As (495 ppm), Bi (35 ppm), Cu (543 ppm), Pb (149 ppm), Se (60 ppm), and Te (1.5 ppm). Sulphur is higher in natural clays (2.5-2.8%), sediment from Ochre 4 (2.3-2.4%), and smelts (1.7-1.8%), while other samples are low in S (<0.8%). Waste deposits show enrichment of As, Bi, Cu, Sb, and Se compared to natural deposits and bedrock (Figure 6).
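To make the enrichment-factor and pathfinder-correlation calculations reproducible in outline, here is a small sketch. The concentration series are placeholders, not the study's data; the real values are in Tables S1, S2, and S5.

```python
import numpy as np

def enrichment_factor(weathered_ppm, bedrock_ppm):
    """Enrichment factor relative to the bedrock source, e.g. Pb in
    weathered smelt divided by Pb in bedrock (reported as 82.1 here)."""
    return weathered_ppm / bedrock_ppm

# Placeholder Au and Se series across hypothetical samples (ppm).
au = np.array([0.3, 1.2, 0.5, 0.4, 0.05])
se = np.array([60.0, 173.0, 90.0, 75.0, 10.0])

# Pearson r, as used for the pathfinder screening reported in Table S5.
r = np.corrcoef(au, se)[0, 1]
print(f"Pb enrichment: {enrichment_factor(12250, 149):.1f}")  # ~82.2 with these inputs
print(f"Au-Se correlation r = {r:.2f}")
```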
Ochre sediments and RWSS show As concentrations of up to 1590 ppm and 1040 ppm, respectively. The RWSS samples show the highest Bi content of all deposits (up to 230 ppm), the highest Hg content (70 ppm), Pb (up to 1.5%), Sb (351 ppm), and Se (173 ppm). The trace elements Te, S, and Co are depleted from the bedrock to both natural and waste deposits. In ochre sediments, Fe, S, As, Bi, Hg, Mo, Sb, and Se decrease away from the bedrock, while Cu and Cr show a more sporadic spatial occurrence across ochre sediments, including higher concentrations away from the bedrock (Table S2). Conversely, Co is concentrated in the ochre sediments further away from the bedrock and former workings. Ochre sediments show the highest Cu content of all deposits, including concentrations higher than bedrock and smelts (up to 7140 ppm in Ochre 3). Water Chemistry Underground mine pool water shows an acidic pH of 2.3, with elevated trace element concentrations compared to the world average stream water trace element content [54] (Figure 10). Mine pool water shows an As content of 1113 µg·L−1, significantly enriched compared to an average stream water content of 2 µg·L−1 (Figures 10 and 11). Mine pool water also hosts enriched Cr (247 µg·L−1), Se (55.6 µg·L−1), and Ni (151 µg·L−1), and also retains a minor Te content (0.19 µg·L−1).
Ochre 4 run-off waters show an acidic pH (2.5), while Ochres 2 and 3 show a more neutral pH (6.6 and 6.0, respectively). Trace element concentrations in these ochre-bearing stream water samples decrease away from Parys Mountain (Figure 11), with the exception of Pb, which is higher in Ochre 3 (14.6 µg·L−1). Trace Element Liberation and Fixation Pyrite oxidation is considered the main source of polymetallic contamination and trace element enrichment in weathered sediments at Parys Mountain. Minor mineral phases in bedrock samples such as limonite, jarosite, and sericite may also be a source of Se, as well as of Fe, S, Au, and Pb. Minor mineral phases may also be a source of trace elements, including gypsum (Cd, Cr), monazite (Pb, Au), and chlorite (Fe, Mo, Cu, Pb). However, due to their low abundance in the bedrock with respect to the pyrite content, minor phases contribute smaller amounts of trace elements to natural and man-made deposits. Oxidation of exposed pyrite can liberate trace elements [55][56][57][58][59] from the rapidly exhumed bedrock ore. The initial liberation and enrichment of all waste and natural deposits resulted from active pyrite oxidation, leaching, and release of trace elements, sulphates, and other heavy metals [60][61][62]. Notably high As, Bi, Hg, Mo, Pb, Sb, and Se contents in weathered smelt materials indicate that these represent a mine-related geochemical signature, directly related to the pyritic bedrock. Elements such as As, Cr, Se, Ni, Cd, and Pb are enriched in mine pool water and stream water samples, indicating that these elements are present in the form of water-soluble compounds and are readily liberated from the source. Acidic pH levels also promoted Cu, Pb, As, and Ni mobility [5][6][7][8], resulting in higher concentrations in weathered soils and ochre sediments at Parys Mountain. Elements which show low enrichment or an overall decrease in concentration from the bedrock source to weathered sediments (e.g., S, Co, and Te) were likely non-reactive and not liberated during oxidation and any processes of dissolution. More alkaline conditions (pH 7 or above) may increase Se concentrations in water, which contributes to the notable absence of Se in ochre sediments and run-off deposits compared to high-Se smelts. Selenate (Se(VI)), the most mobile form of Se, which can be leached to groundwaters, is unlikely to migrate to deeper groundwaters underlying acidic soils [63], which may also explain the concentrated distribution of Se at Parys Mountain. High concentrations of trace elements in RWSS may be related to the close proximity to the bedrock source (i.e., elements with a low mobility), with more mobile elements accumulating in more distal ochre sediments. High concentrations of a number of trace elements in RWSS (As, Bi, Cd, Hg, Mo, Pb, Sb, Se, and Te) and ochre sediments (As, Cu, Fe, and Mo) may also be related to changes in the mineralogy of the deposit types, with adsorption and fixation of trace elements related to Fe-oxide minerals. The RWSS show the highest abundance of Fe-oxide mineral phases, while ochre sediments contain abundant orange-brown Fe(III) (oxy)hydroxides, indicating the oxidising conditions of both environments.
Sorption of trace elements onto Fe(III) (oxy)hydroxides is a potentially important control on the transport and attenuation of (hydr)oxyanion-forming elements, and can act as a sink for elements such as Se, As, Ni, and Sb [64][65][66]. Trace elements can be retained through immobilisation, being strongly adsorbed onto Fe oxides; such retention has been previously identified for goethite over the entire pH range and for magnetite in the neutral-to-alkaline range [67][68][69]. Ochre-bearing streams at Parys Mountain exhibit pH ranges from acidic to neutral, and both goethite and magnetite have been identified in weathered soils and clays. Therefore, weathered materials containing abundant Fe oxides act as a sink for trace elements, resulting in trace element enrichment compared to the source and to other sediments containing lower abundances of Fe oxides. As ochre deposits are made up almost entirely of Fe(III) (oxy)hydroxide precipitate phases, the whole-rock results for ochre sediments give an indication of the magnitude of adsorption and fixation of trace elements by Fe oxides. As well as Fe and Se, the results in Figure 7 and Table S2 also suggest a strong fixation of As, Cd, Cu, Mo, Ni, Pb, and Sb in Fe oxides.

Economic Considerations

Parys Mountain is still an actively licensed mining site, and several surveys have been made on the tonnage potential and resource grade of the bedrock in the area since the 1980s. Anglesey Mining estimated a geological resource of 6.5 million tonnes, with 2.15 million tonnes containing 0.36% Cu, 2.11% Pb, and 0.42 g/t Au [19,20,70]. Results from this study suggest that there may be potentially economic resources in the weathered products from the bedrock, both in the RWSS and the ochre sediments. The area in the immediate vicinity of the previously worked Parys Mountain Great Opencast and Mona mines shows expansive areas of RWSS, streams, and ochre sediments. These weathered deposits contain concentrations of trace elements above the lowest cut-off grades utilised by recent and ongoing mining operations worldwide [71][72][73][74][75][76][77][78]. Adopting the lowest cut-off grades from recent ore mining projects [71][72][73][74][75][76][77], for a workable area of 0.556 km² surrounding the former mines, with a depth of 0.3 m and a topsoil density of 1.2 tonnes/m³, some conservative estimates of the resource potential of Se and other resources in RWSS have been calculated; a worked example of this calculation is given below. Cobalt, Sb, and Te resource estimates are not given for Parys Mountain because their abundances are below cut-off levels. These calculations indicate that there is a potential 45 tonnes of Se in the RWSS. However, the average Se content of 158.6 ppm in these deposits is below the lowest Se cut-off grade of 400 ppm [78]. While this suggests that extraction of Se at Parys Mountain is not feasible, the elevated concentrations of Se in smelts may highlight the potential for economic reserves in similar base metal mining sites around the world, particularly VMS-hosted deposits and weathered sulphidic orebodies. It is also vital to record these estimates for future consideration, as the demand for these elements is expected to increase [10][11][12][13][14][15][16] and cut-off grades may alter accordingly. Parys Mountain hosts potentially economic Au deposits. There are several examples of recent and ongoing Au mining projects that operate assuming a cut-off grade of 0.2 to 0.4 g/t [19,20,70].
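As a sanity check on the scale of these estimates, the tonnage arithmetic can be written out explicitly. The sketch below uses only figures quoted in the text (area, depth, density, and average grades); it is illustrative, and differences from the published totals reflect workable-area and rounding assumptions not reproduced here (for Au, the maximum quoted RWSS concentration is used).

```python
# Conservative in-situ resource estimate for a surficial deposit:
# tonnage = area * depth * bulk density; contained metal = tonnage * grade.

AREA_KM2 = 0.556        # workable area around the former mines (from text)
DEPTH_M = 0.3           # assumed workable depth (from text)
DENSITY = 1.2           # topsoil bulk density, tonnes per cubic metre (from text)

tonnage = AREA_KM2 * 1e6 * DEPTH_M * DENSITY   # tonnes of RWSS material

grades_ppm = {"Se": 158.6, "Au": 1.2}          # ppm = g/tonne
grades_pct = {"Pb": 1.18}                      # percent

for metal, ppm in grades_ppm.items():
    print(f"{metal}: {tonnage * ppm / 1e6:,.2f} t contained in {tonnage:,.0f} t")
for metal, pct in grades_pct.items():
    print(f"{metal}: {tonnage * pct / 100:,.0f} t contained in {tonnage:,.0f} t")
```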
Previously reported grades of Au at Parys Mountain include the aforementioned estimates of 0.3 to 0.42 g/t [19,20]. The sulphidic bedrock samples measured here agree with these estimations of Au content (0.3 ppm), while natural yellow soils, RMS, and ochre sediments do not show Au levels above the detection limit. However, natural yellow clays show Au content up to 0.5 ppm, and RWSS show Au concentrations of up to 1.2 ppm. These results indicate that Au content at Parys Mountain may be higher than previously thought and that, in RWSS, there could be a potentially untapped source and target. Selenium can act as a pathfinder element in Au exploration worldwide in selenide-containing epithermal Au-Ag and massive sulphide deposits [49,51,52,79]. Tellurium can also fix Au in telluride-bearing Au ore deposits in a range of settings [51,53,79]. Both Se and Te correlate with Au at Parys Mountain, as do other elements typically associated with Au, such as Pb, Ni, Sb, Hg, Bi, and Cd, indicating that these trace elements can act as Au pathfinders at Parys Mountain. The Au concentrations of 0.3 to 1.2 ppm indicate a potentially higher Au grade in other similar areas of Parys Mountain, which are yet to be investigated. Resource calculations here show that Parys Mountain RWSS contain a potential 0.26 tonnes of Au. Other elements that show concentrations higher than typical cut-off grades include Pb in RWSS, with an average of 1.18% in Parys Mountain RWSS compared to a cut-off grade of 1% [77], and Cu and Fe in ochre sediments (Cu = 0.29% average concentration in ochre sediments compared to a cut-off value of 0.15% [76], and Fe = 33.32% average concentration in ochre sediments compared to a cut-off value of 27.78% [74]). The resource estimate for Pb in RWSS is 3346 tonnes. However, estimates of workable ochre sediment tonnage are difficult to define. Very modest estimates, based on stream area calculations in close proximity to the Parys Mountain site (Figure 2 streams only; not including standing water bodies) and a potential sediment depth of up to 0.1 m, give 1.83 tonnes of Cu or, more significantly, 211 tonnes of Fe in stream ochre sediments. Hyper-concentrated mine pools and ochre-bearing streams at Parys Mountain could also be used in sewage treatment as flocculating agents, similar to projects at the Buxton colliery, Derbyshire, UK, and the Falun Cu mine complex, Sweden [62], and the high-Fe ochre sediment could potentially be harvested from the contaminated stream sediments and mine waters. This has been achieved at the Marchand Mine project, Pennsylvania, where Fe sludge has been removed and recovered from the contaminated Sewickley Creek and the Youghiogheny River for use as a crude pigment [80].

Environmental Implications

All the discussed trace elements, with the exception of Cr and Ni, show concentrations above the world averages in continental crust, soils, stream sediments, and stream waters. In some cases, concentrations are 2–3 orders of magnitude higher than world averages. This is to be anticipated in an area of such extensive historic former workings, and one that is essentially still a working mining site. However, the high and seemingly mobile Se and other trace element contents in sediment and water should be monitored, as the area has numerous interconnected streams, rivers, and water bodies, connecting to more extensive water systems on Anglesey leading to Amlwch and the sea, as well as to local rural communities and agricultural land.
Results and observations indicate that the main source of pollution at Parys Mountain is the subaerial mine-waste tips, with their high content of sulphide minerals. A well-known concern at former and operational mining sites worldwide is contamination of soils and groundwaters by Se, particularly at U mining sites [11,81,82]. The release of Se during extraction can have environmental ramifications, affecting livestock [3,4,7,10][83][84][85]. Concentrations of water-soluble trace elements such as Cd, Co, Cu, and Pb at Parys Mountain are also typically higher than maximum permitted levels previously determined by various studies and governing bodies [86,87]. Compositionally similar and extensively extracted base metal sulphide ore deposits in Norway have been classified as "significantly" or "highly" polluted as a result of mining activities releasing Cu, Zn, Fe, Pb, and Cd [62,88,89]. The soil guideline value [90] for inorganic As is 640 ppm for commercial land use, and the levels for residential and allotment land use are lower. Parys Mountain deposits show consistently higher concentrations of As than the guideline values. Similarly, Parys Mountain shows higher concentrations of Hg than the soil guideline level of 26 ppm. In former mine workings across Europe, including sites associated with sulphide mineralisation, technosols have been developed and utilised in order to reduce the threat of contamination posed by tailings [1,90,91]. Technosols are man-made soils, often made up of stock materials such as carbonates, biochars, and crop residues, and have been shown to reduce acidity, immobilise metals, and improve soil quality, fertility, and structure. This can result in increased microbial biomass activity and the development of vegetation. These factors have helped to decrease the hazards to human health and the environment in the affected areas. Due to the high abundance of Cu-Pb across Parys Mountain deposits, these two elements should be monitored, along with other enriched and potentially harmful trace elements such as Se, Hg, Cd, and Mo.

Conclusions

Natural deposits and former site workings at Parys Mountain host enriched concentrations of critical trace elements such as Se. A number of trace elements are enriched and retained across samples, from natural sources and weathered soil or clay products to smelts, ochre sediments, and water bodies. Iron oxide-rich waste deposits are more enriched in Se and other trace elements than natural deposits. Selenium and other elements have been mobilised from the pyrite in the bedrock source and retained in high concentrations in Fe oxide-rich RWSS, weathered soils, and ochre sediments. Selenium correlates with the enrichment of other trace elements, including Te, Au, Bi, Cd, Hg, Pb, S, and Sb. Parys Mountain also hosts potentially economic Au deposits, and Se (as well as Te, Pb, Ni, Sb, Hg, Bi, and Cd) can be used as a geochemical pathfinder element for Au enrichment in the area. There are also potentially economic Cu and Pb reserves in weathered materials. Potentially toxic amounts of As, Se, Pb, Cd, and Cu have been identified in some Parys Mountain deposits, and these should continue to be monitored as contaminants in local soil and groundwater systems.

Supplementary Materials: The following are available online at www.mdpi.com/2075-163X/7/11/229/s1, Table S1: Trace element content of natural bedrock, soils, and water run-off at Parys Mountain mining district.
World average soil, crust, stream sediment, and stream water compositions are also shown (Salminen et al. [46] and references therein); Table S2: Trace element concentrations of mine waste smelt deposits, ochre sediments, and mine water associated with the Parys Mountain mining district; Table S3: Calculated error for whole rock (rock, sediments, soils) for inductively coupled plasma-mass spectrometry (ICP-MS), based on certified and achieved values for certified reference materials; Table S4: ICP-MS instrument settings used for water sample analysis; Table S5: Correlation coefficients for all trace elements, and significance of the correlation coefficient (determined by the t-test confidence level of linear relationships in Microsoft Excel) between Au vs. Bi, Cd, Hg, Ni, Pb, S, Se, and Te. Statistically significant correlation coefficients: p < 0.05.
Level Set Segmentation of Bovine Corpora Lutea in Ex Situ Ovarian Ultrasound Images

Background: The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology.

Background

The ovaries of all mammalian species, including humans and cattle, contain follicles and corpora lutea (CL). The bovine model for studying human ovarian function is well established [1].

Objective and motivation

The objective of this research is to study the viability of level set image segmentation techniques [2] for automatic segmentation of the CL (delineation of its boundary) in two-dimensional (2D) ultrasound images. Our study represents the first investigation of semi-automated segmentation of corpora lutea from ultrasound images. The level set method lends itself to image segmentation tasks because it requires minimal user input, accommodates arbitrary changes in region topology, and offers a straightforward extension to higher-dimensional data [3]. Figure 1(a) shows an image of a CL, and Figure 1(b) shows the desired segmentation result as drawn by a human expert in ovarian ultrasound interpretation. The presence (or absence) of a CL in the ovaries, its size, and its morphology provide information regarding the current state of the individual's reproductive cycle [4,5]. The most common method of visualizing the CL in vivo is ultrasonography. Monitoring the development of ovarian follicles and CL over time is crucial to the understanding of human and bovine reproductive biology, fertility and timing of fertility therapy, the effects of contraception, and the diagnosis of ovarian diseases and dysfunction, such as ovarian cysts, cancers, and polycystic ovarian syndrome.

Corpus luteum segmentation goal and contour initialization

In practice, CL segmentation is performed manually. Manual delineation is time consuming and subject to variance in human interpretation of images. The domain of image processing offers the potential for detailed analysis of CL size and appearance, which will facilitate study of the aforementioned processes and diseases. Successful automation of CL segmentation would further the automation of existing analyses, such as correlating the value of image pixels within the CL region with various physiological attributes [6] and automatically determining CL diameter for use in higher-level classifiers [5], as well as facilitate new investigations.

Literature review

Research on prostate segmentation from 2D ultrasound images was reviewed for insight into CL segmentation due to the similarities of the two problems. The prostate has an echotexture appearance similar to that of corpora lutea in ultrasound images and a similar level of contrast with the rest of the image. Moreover, there is only one region of interest in both problems. However, there are differences which make the CL segmentation problem more challenging. Due to standardized imaging procedures, the prostate's position in an image is approximately known a priori, and strong assumptions may also be made about the prostate's shape in the image plane [7][8][9][10]. This is not possible when imaging corpora lutea, since a CL can be present at different locations within the ovary and is more variable in shape and size [11,12].
CLs are assumed only to be approximately elliptical in the image plane, with low to moderate eccentricity. Segmentation of the prostate from ultrasound images has been well studied [7]. A number of prostate segmentation methods used deformable contour models. Deformable contour models were first introduced as tools for image segmentation by Kass et al. [13]. Their active contour models or "snakes" formulated image segmentation as an energy minimization problem. An active contour was initially placed on an image. Solving energy equations caused the curve to move or "evolve" until it minimized the energy function. The energy function was chosen so that the curve tended to follow edges in the image. Snakes were found to handle topological changes in contours, such as merging or splitting, poorly, but the difficulties can be overcome with care, albeit with considerable computational overhead. The success of prostate segmentation using deformable models such as snakes and so-called "discrete dynamic contours" was largely dependent on careful initialization of the contour in a position near the desired boundary [14][15][16]. Badiei et al. developed an algorithm in which a small number of marker points were placed on the prostate boundary, which was then warped, based on an a priori shape model, so that the prostate was elliptical in shape [8]. The marker points were then used to find the ellipse that best fit the warped prostate. The inverse of the warping transformation was applied to the ellipse to obtain the final segmentation. The success of the approach was dependent on the correctness of the assumption of elliptical shape and the careful placement of the initial marker points. Level set methods have been proven effective both in general [17][18][19][20] and for ultrasound image segmentation [21,22]. Recent applications of the level set method to prostate ultrasound image segmentation [3,7,23] show that this technique is accurate and flexible (it can handle contours of varying shape, size, and concavity). A level set method was chosen for the current study because of the prior success of level set methods in prostate segmentation and their ability to easily handle arbitrary changes in contour topology. Fan et al. [3] used a level set method to perform three-dimensional (3D) surface detection from ultrasound images of the prostate. A fast discrimination method was used to roughly extract the prostate region. The prostate region information was incorporated into the level set method instead of the spatial image gradient. This addressed the issue of "boundary leaking", which occurs when the contour evolves across a very weak edge that is part of the desired boundary. The segmentations produced appeared qualitatively to be good, but a quantitative assessment of segmentation accuracy was lacking. Herein, a new methodology was created to segment corpora lutea from 2D ultrasound images using this work as a starting point. Our algorithm consists of a multi-step preprocessing stage followed by segmentation using a level set method. It is semi-automated in that it requires an initial user-specified closed contour that is assumed to be completely contained within the CL, and to completely contain the CL's central cavity if one exists. It was hypothesized that this level-set contour evolution method could be used to locate the boundaries of CLs with an average error of 1–2 mm. The CLs used in this study ranged from 14.3 mm to 21.5 mm in diameter.
Image data

The work herein is motivated by the need to study human reproduction, but images of bovine ovaries were used in this feasibility study, as they are well established as a vehicle for studying human ovaries due to similarities in physiology and morphology. The images used in this study were obtained during a previous study by Singh et al. [6]. Left and right ovaries of heifers were surgically removed at defined stages of the estrous cycle and imaged ex situ in parallel planes at 0.5 mm increments using a broad-band (5–9 MHz), convex-array ultrasound transducer interfaced with an ATL Ultra Mark 9 HDI ultrasound machine (Advanced Technology Laboratories, Bothell, WA). At the time of ovariectomy, the number of days from ovulation was known. From this data set, ovaries with CLs (n = 8) were selected for the current study. From the set of parallel images of each ovary, the image which contained the CL at maximum diameter was selected. All images were 640 × 480 pixel 8-bit grayscale, but the pixel size varied across images, ranging from 0.057 mm × 0.057 mm to 0.087 mm × 0.087 mm, as determined from the distance gradations on the right-hand side of each image. An experienced gynecologic ultrasonographer manually segmented the images, which provided a "ground truth" for validation purposes. The diameters of the CLs in the selected images ranged from 14.2 mm to 21.5 mm. Diameter was estimated by averaging the length and width of the smallest bounding box, aligned to the image coordinate axes, of the expertly segmented CL region. A sample image is shown in Figure 1(c) and is used as a running example throughout this paper.

Curve evolution with level sets

In this section, the level set method is presented in a similar fashion to that in [3], which is, in turn, derived from [2]. The level set method evolves an initial 2D contour according to an energy function derived from image pixel data by embedding the 2D contour in a 3D surface. An initial 2D contour C(t = 0) is represented by a 3D function ψ that evolves over time. The value of ψ at a point p at time t = 0 is defined as the distance d from p to the nearest point on C at time t:

ψ(p, t = 0) = ±d.    (1)

The sign of d is negative if the point lies in the interior of C and is positive otherwise. The function ψ(p, t = 0) is called a signed distance function. Figure 1(c) illustrates a circular initial contour that was used to segment the image on which the contour was superimposed. The corresponding signed distance function, ψ(p, t = 0), is in Figure 1(d); the 2D contour is drawn beneath the surface. The level set method causes the contour C(t = 0) to move towards the desired boundary by evolving the surface ψ over time. At an arbitrary time t, the contour C(t) is represented by the set of points for which ψ(p, t) = 0, also called the zero-level set of ψ:

C(t) = {p : ψ(p, t) = 0}.    (2)

The evolution of ψ is described by the partial differential equation

∂ψ/∂t + F|∇ψ| = 0.    (3)

In this equation, the initial condition is ψ(p, t = 0), ∇ψ denotes the gradient of ψ, and F is the speed function. The speed function describes the rate at which the contour will move in the outward normal direction. The contour is encouraged to evolve towards the desired image boundary by designing an appropriate speed function F.
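As a concrete illustration of the embedding just described, the following minimal sketch builds a signed distance function for a circular initial contour on a pixel grid and recovers its zero-level set; the grid size and helper names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def circle_signed_distance(shape, center, radius):
    """psi(p, t=0): distance to the circle, negative inside, positive outside."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.hypot(yy - center[0], xx - center[1]) - radius

psi = circle_signed_distance((480, 640), center=(240, 320), radius=40)

# The contour C(t) is the zero-level set {p : psi(p, t) = 0}; on a discrete
# grid it is approximated by pixels where psi changes sign.
inside = psi < 0
boundary = (inside & ~np.roll(inside, 1, axis=0)) | (inside & ~np.roll(inside, 1, axis=1))
print(f"{boundary.sum()} pixels approximate the zero-level set")
```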
In general, the speed function F depends on the curvature K of the evolving front and is typically separated into a constant term F₀ and the remainder F₁(K):

F = F₀ + F₁(K).    (4)

The constant term F₀ provides a constant expansion or contraction force depending on its sign. The curvature-dependent component F₁ controls the smoothness of the deforming shape. A common choice of F is given in the following equation:

F = 1 − εK.    (5)

Equation 5 describes an outward unit normal force that is reduced by a factor proportional to the local curvature of the contour. Intuitively, this causes the non-smooth segments of the contour (which have high curvature) to move slowly while nearby portions of the contour "catch up", resulting in a smoother curve. The constant ε regulates the smoothness of the curve and must be greater than zero [3]. Larger values of ε result in smoother contours. To cause the evolving contour to stop at the desired image boundary, F is typically multiplied with an image-dependent quantity k_I:

k_I = 1 / (1 + |∇(G_σ * I)|^p).    (6)

The term ∇G_σ denotes the gradient of a two-dimensional Gaussian function whose standard deviation is σ, I is the image function, and * denotes convolution. The convolution operation is the same as that described for sticks filters (below); only the kernel differs. Since the gradient operator can be applied after the convolution operation without changing the result, the second term of the denominator's sum can be viewed as the magnitude of the gradient of an image which has been convolved with a Gaussian filter kernel, raised to the power p. A Gaussian filter has the effect of computing, for each pixel, a center-weighted average of the intensities in its local neighborhood, which smoothes or blurs the image. A smoothed image is desirable, since it reduces the magnitude of small, unimportant local edges so that they have less effect on the speed function. Pixels with large image gradient (corresponding to pixels that have a high probability of being edges of the corpus luteum) will cause the value of k_I (Equation 6) to be close to zero. When k_I is multiplied with F, the speed at which the contour embedded in ψ propagates is reduced to nearly zero when it is near the desired boundary. The exponent p controls the severity of the penalty that gradient magnitude applies to the speed function.
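A minimal sketch of the boundary-stopping weight in Equation 6 follows, assuming the image is a grayscale NumPy array; the values of σ and p here are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def stopping_weight(image, sigma=2.0, p=1.0):
    """k_I = 1 / (1 + |grad(G_sigma * I)|**p): near 0 on strong edges."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    grad_mag = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    return 1.0 / (1.0 + grad_mag ** p)

rng = np.random.default_rng(0)
image = rng.random((64, 64))
image[20:44, 20:44] += 2.0          # a bright square with strong edges
k = stopping_weight(image)
print(k.min(), k.max())             # small on the square's edges, close to 1 elsewhere
```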
Speed function for CL segmentation

The low intensity contrast between CL regions and other regions of the ovary and the noisy nature of ultrasound images required a more sophisticated construction of the speed function, rather than direct application of the process described by Equation 6. The speed function was constructed by filtering the image with a sticks filter [24,25]. The filtered image was then processed with Sobel filters [26] to obtain a gradient magnitude, which was then contrast enhanced. The mean image intensity was then subtracted from each pixel, and the resulting image was contrast enhanced a second time and inserted into Equation 6 to obtain the weighting term k_I for the speed function F, above. These steps are discussed in greater detail below.

Sticks filter

Sticks filtering [24,25] is a process for reducing image speckle which may obfuscate boundaries of structures in the image. A sticks filter is a bank of linear filters which evaluate the likelihood of a linear feature of length N in one of 2N−2 possible orientations passing through each pixel. Each filter mask is an N × N matrix where each entry is either 0 or 1/N. Entries with value 1/N represent lines through the center of the matrix. A bank of filters for N = 5 is illustrated graphically in Figure 3. Linear filtering [26] (convolution) works by positioning the filter mask's center over each image pixel and computing the sum of products of each entry in the filter mask with the intensity of the underlying pixel. Each filter mask or "stick" will thus produce the greatest response (sum of products) when positioned over a bright linear feature of a specific orientation. A pixel's output intensity is the response of the filter from the sticks filter bank which responds maximally at that point; the magnitude of the maximal response corresponds to the mean intensity along the line segment orientation to which that filter is most sensitive (the filter response is the sum of the intensities along the stick, each weighted by 1/N). The first step in our algorithm was to filter the images with a sticks filter bank of size 17. This stick length was chosen because it is small enough that it correlates well with CL boundary segments of the same length but is large enough that it smoothes small-scale features and reduces the magnitude of edges with high curvature which are not of interest. Reducing the magnitude of high-curvature edges is important since the level set method relies heavily on edge information and segmentations are undesirably influenced by small-scale edges such as those caused by speckle.

Sobel magnitude filter

Sobel convolution filters are edge detectors which approximate partial first derivatives of the image. The horizontal (S_h) and vertical (S_v) Sobel filter kernels are, in their standard 3 × 3 form,

S_h = [−1 0 1; −2 0 2; −1 0 1],  S_v = [−1 −2 −1; 0 0 0; 1 2 1].

These filters compute center-weighted average finite differences over a distance of 3 pixels in a 3-pixel-wide band in the horizontal and vertical directions, respectively. An edge magnitude image M was computed by applying horizontal and vertical Sobel edge detectors to the sticks-filtered image and combining the horizontal and vertical responses (respectively R_h and R_v) in the usual way:

M = sqrt(R_h² + R_v²).

The Sobel filters may be generalized to larger sizes. Larger Sobel filters compute the finite differences over longer distances and bands and are much less sensitive to very local changes. The effect is that larger filters respond to larger-scale changes in the image function, and small-scale changes are not captured. The use of the Sobel filter at this point in the algorithm was to identify large-scale step edges in the image which are characteristic of the CL–stroma boundary. A Sobel filter size of 17 × 17 was chosen empirically by testing the filters on sample images, where preference was given to images that exhibited strong edges along the true boundary of the CL. A size of 17 × 17 enhanced only the major, dominant edges, which are more likely to be true boundaries of the CL. Figure 5 shows the results of processing Figure 4(b) with a 17 × 17 Sobel magnitude filter.

Intensity normalization

In ultrasonographic images, even the strongest edges may have fairly weak magnitude. In order to improve the contrast and further emphasize major edges, the pixel intensities of the edge magnitude image M were normalized to obtain the normalized edge magnitude image M̄. Normalization is a linear remapping of pixel intensities. If the pixel intensities in an image are within the interval [I_min, I_max], then a normalizing linear remapping causes intensity I_min to be mapped to intensity 0, and I_max to be mapped to the maximum possible intensity, while intensities between I_min and I_max are linearly interpolated.
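The preprocessing chain up to this point can be sketched compactly. The stick length below follows the paper (17), but the bank construction, the filter calls (scipy's 3 × 3 Sobel rather than the 17 × 17 variant), and the normalization helper are an illustrative reimplementation, not the authors' code; rounding when rasterizing the sticks makes each mask only approximately sum to one.

```python
import numpy as np
from scipy.ndimage import convolve, sobel

def sticks_bank(n):
    """2n-2 line masks of size n x n, entries 0 or 1/n along a segment
    through the center (endpoints paired across the mask border)."""
    endpoints = [((0, j), (n - 1, n - 1 - j)) for j in range(n)]
    endpoints += [((i, 0), (n - 1 - i, n - 1)) for i in range(1, n - 1)]
    masks = []
    for (r0, c0), (r1, c1) in endpoints:
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        mask = np.zeros((n, n))
        mask[rows, cols] = 1.0 / n
        masks.append(mask)
    return masks

def sticks_filter(image, n=17):
    """Each output pixel is the maximal response over all stick orientations."""
    responses = [convolve(image.astype(float), m) for m in sticks_bank(n)]
    return np.max(responses, axis=0)

def edge_magnitude(image):
    """Combine horizontal and vertical Sobel responses: sqrt(Rh**2 + Rv**2)."""
    return np.hypot(sobel(image, axis=1), sobel(image, axis=0))

def normalize(image):
    """Linear remapping of intensities onto [0, 1]."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)
```

A note on the design: taking the maximum over orientations, rather than the mean, is what lets sticks filtering preserve genuine linear boundary segments while averaging down isotropic speckle.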
After normalization, images were converted from 8-bit grayscale (integer pixel intensities ranging from 0 to 255) to real-valued images with intensities ranging from 0.0 (black) to 1.0 (white) by dividing each pixel's intensity by 255.

Weak edge suppression

The normalized edge magnitude image obtained in the previous step exhibits weak edges within the interior of the CL region (Figure 6(a)), caused by the fine texture present there. These edges interfere with the level set segmentation process. Weak edges were suppressed by subtracting from each pixel in the normalized edge magnitude image the mean intensity of that image and clipping all resulting negative intensities to zero. Formally, let μ be the mean intensity of the normalized edge magnitude image M̄. The mean-subtracted edge magnitude image N is defined as

N(x, y) = max(M̄(x, y) − μ, 0).

The resulting image N was re-normalized using the procedure described above to obtain the normalized mean-subtracted edge magnitude image N̄. Figure 6(b) illustrates the effect of mean subtraction on the normalized edge magnitude image in Figure 6(a).

Level set segmentation of corpora lutea

Placement of the initial contour

Placement of the initial 2D contour was done manually. All initial contours were circular. If a fluid cavity was present, the initial contour was chosen to enclose the fluid cavity. Kastelic et al. showed that 79% of bovine CLs exhibit central cavities [27]. If a fluid cavity was not present, the contour was placed near the center of the CL. The remainder of the algorithm is automatic. The initial 2D contour was embedded as the zero level set of a 3D signed distance function ψ(p, t = 0). This function was evolved using Baris Sumengen's Matlab level set toolbox [28] according to Equation 3.

Curve evolution parameters

The weighting term k_I for the speed function F was computed from the preprocessed image N̄ of the previous section using a slightly altered version of Equation 6:

k_I = 1 / (1 + αN̄^p).    (11)

In order to achieve finer control over the rate at which k_I approaches zero as the edge magnitude increases, a coefficient α was added to the second term of the denominator. Values of α = 100 and p = 1 were chosen empirically. A surface plot of 1 − k_I (inverted for ease of interpretation), computed using the image in Figure 6(b) as N̄, is shown in Figure 7; areas close to the desired contour of the corpus luteum boundary appear red, indicating that the evolution of ψ will slow and eventually stop near the true contour. The curvature weighting constant ε was experimentally chosen to be 0.375. A non-zero curvature-based force was used to discourage the "leaking" of the evolving contour through sections of the true CL boundary where the magnitude of the edge was weak. The Matlab toolbox [28] which was used to evolve the boundary also required some additional parameters. The first-order accurate finite difference option was chosen for approximation of derivatives. The surface being evolved must be discretized on a grid; a conservative grid size of 1 × 1 was chosen. The time interval between steps of the evolution, which may range from 0.5 to 0.9, was chosen to be a conservative 0.5. A more aggressive selection of a larger timestep would improve the speed of the curve evolution algorithm, but could also cause numeric instability leading to poor or nonsensical results. The surface in which the 2D contour is embedded will almost certainly lose the property of being a signed distance function as it evolves, since portions of the surface will move at different velocities. It is necessary to periodically re-initialize the surface to a signed distance function, for example, after every m time steps. This is achieved by extracting the zero-level set C(t) (defined in Equation 2) and constructing from it a new signed distance function ψ_new. The evolution process then continues using ψ_new for another m time steps. This is repeated until a stopping condition is met (see below). A value of m = 1 was selected, which caused the surface to be reinitialized as a signed distance function after every time step. This ensured that the surface was always a signed distance function and facilitated the test of the stopping condition.
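The evolve-and-reinitialize loop can be sketched as follows (the paper uses Sumengen's Matlab toolbox; this NumPy version is an illustrative stand-in, not the authors' code). Reinitialization rebuilds the signed distance from the sign of ψ with a Euclidean distance transform, and the central-difference curvature estimate is a simplification of the toolbox's upwind schemes.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def reinitialize(psi):
    """Rebuild a signed distance function from the sign of psi."""
    inside = psi < 0
    d_out = distance_transform_edt(~inside)   # distance to the region, outside it
    d_in = distance_transform_edt(inside)     # distance to the background, inside
    return d_out - d_in

def evolve(psi, k_I, eps=0.375, dt=0.5, steps=500, reinit_every=1):
    """Sketch of psi_t + k_I*(1 - eps*K)*|grad psi| = 0 with periodic reinit."""
    for step in range(steps):
        gy, gx = np.gradient(psi)
        grad_mag = np.hypot(gx, gy) + 1e-12
        # Curvature K = div(grad psi / |grad psi|).
        dny_dy, _ = np.gradient(gy / grad_mag)
        _, dnx_dx = np.gradient(gx / grad_mag)
        curvature = dnx_dx + dny_dy
        speed = k_I * (1.0 - eps * curvature)
        psi = psi - dt * speed * grad_mag     # psi < 0 inside; F > 0 expands
        if (step + 1) % reinit_every == 0:
            psi = reinitialize(psi)
    return psi
```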
Stopping condition

The stopping condition determined when to cease evolution of ψ. Since the zero level set contour will presumably stop when it has settled onto the desired boundary, a test is needed to determine when the motion of C(t) becomes sufficiently small. The criterion used was motivated by the work of [29]; values N₀ = 250, γ = 250, and Δm = 15 were used to evaluate the stopping condition. Figure 8 shows the contour detected by the proposed method using the lower CL image from Figure 1(a) as input to the level set method. The green contour is the initial contour, the purple contour is the final (automatic) contour, and the yellow contour is the contour determined by an experienced human interpreter. Results of the method on the entire data set are discussed in the Results section, below.

Smoothing with Fourier descriptors

During development of the implementation of the above algorithm, it was observed that the region boundaries determined by the algorithm quite often meandered back and forth across the expertly traced boundary. It was hypothesized that if the boundaries were smoothed, they might conform more closely to the ground truth boundary. Boundaries were smoothed by truncating the Fourier descriptors of the boundary [26]. This section reviews the basics of smoothing using Fourier descriptors. The original boundary may be recovered via the inverse Fourier transform. However, S (and, in turn, P) may be approximated by using only the first Z ≤ B Fourier descriptors in the computation of the inverse transform. This acts as a "lowpass filter" on the contour and reduces the magnitude of high-frequency variations, which results in a smoother contour. The fewer descriptors that are retained, the greater the smoothing effect, as illustrated in Figure 9. The effects of smoothing on contour accuracy are discussed in the Results section.
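A minimal sketch of descriptor truncation, following the standard formulation in [26]: the B boundary points are treated as a complex sequence, transformed with the FFT, and reconstructed from only the lowest-frequency descriptors. The retained fraction below is an illustrative parameter, and positive and negative frequencies are kept symmetrically since both carry shape information.

```python
import numpy as np

def smooth_contour(points, keep_fraction=0.05):
    """Low-pass a closed contour by truncating its Fourier descriptors.

    points: (B, 2) array of (x, y) samples, ordered along the contour.
    """
    s = points[:, 0] + 1j * points[:, 1]          # complex boundary signal
    descriptors = np.fft.fft(s)
    b = len(s)
    z = max(1, int(keep_fraction * b))            # descriptors retained
    keep = np.argsort(np.abs(np.fft.fftfreq(b)))[:z]
    mask = np.zeros(b, dtype=bool)
    mask[keep] = True
    smoothed = np.fft.ifft(np.where(mask, descriptors, 0))
    return np.column_stack([smoothed.real, smoothed.imag])

# Example: a jittery circle becomes visibly smoother after truncation.
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
radius = 10 + np.random.default_rng(1).normal(0, 0.5, 256)
noisy = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
smooth = smooth_contour(noisy, keep_fraction=0.05)
```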
Validation

The manual segmentations obtained for this study were used as the basis for comparison when validating the semi-automatic segmentations. The metrics [30,31] mean absolute distance (MAD), root mean squared distance (RMSD), Hausdorff distance (HD), sensitivity, and specificity were used to evaluate segmentation accuracy. These metrics are formally defined below. The minimum distance d_min between a point p and a contour C is defined as

d_min(p, C) = min_{c ∈ C} ‖p − c‖,

which is the smallest Euclidean distance between p and any point on the contour C. Let C_M and C_A denote the sets of points on the manually segmented and semi-automatically segmented contours, respectively. The mean absolute distance between C_M and C_A is

MAD(C_M, C_A) = (1/|C_M|) Σ_{p ∈ C_M} d_min(p, C_A),

where |C_M| denotes the number of points in the set C_M. A smaller MAD indicates a better segmentation. The root mean squared distance between C_M and C_A is

RMSD(C_M, C_A) = sqrt( (1/|C_M|) Σ_{p ∈ C_M} d_min(p, C_A)² ).

This metric is similar to MAD, except that pixels in C_A which are further away from C_M contribute a greater penalty to the metric. Hausdorff distance is the greatest minimum distance between two contours. It characterizes the maximum deviation of one contour from another. Formally,

HD(C_M, C_A) = max{ max_{p ∈ C_M} d_min(p, C_A), max_{p ∈ C_A} d_min(p, C_M) }.

A smaller HD is more desirable. Let TP denote the set of true positive pixels, that is, those pixels that were correctly identified as belonging to a CL region. Let TN be the set of true negative pixels correctly identified as not belonging to a CL region. Similarly define FP and FN to be the sets of false positive and false negative pixels. The sensitivity of a segmentation is thus defined as

sensitivity = |TP| / (|TP| + |FN|),

and represents the percentage of pixels that truly belong to the CL region which were correctly identified as such. The specificity of a segmentation is the ratio

specificity = |TN| / (|TN| + |FP|).

Specificity represents the percentage of pixels which are not part of the CL region which were correctly identified as such. A perfect segmentation with respect to the ground truth will have both a sensitivity and a specificity of 1.0.
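The five metrics follow directly from these definitions. In the sketch below, contours are point arrays and regions are boolean masks; note that MAD and RMSD are computed one-directionally from C_M to C_A, matching the |C_M| normalization above, which is an assumption where some authors use a symmetric variant.

```python
import numpy as np

def _min_dists(a, b):
    """For each point of contour a, the distance to the nearest point of b."""
    diff = a[:, None, :] - b[None, :, :]          # shape (|a|, |b|, 2)
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

def mad(c_m, c_a):
    return _min_dists(c_m, c_a).mean()

def rmsd(c_m, c_a):
    return np.sqrt((_min_dists(c_m, c_a) ** 2).mean())

def hausdorff(c_m, c_a):
    return max(_min_dists(c_m, c_a).max(), _min_dists(c_a, c_m).max())

def sensitivity(truth, auto):
    tp = (truth & auto).sum()
    fn = (truth & ~auto).sum()
    return tp / (tp + fn)

def specificity(truth, auto):
    tn = (~truth & ~auto).sum()
    fp = (~truth & auto).sum()
    return tn / (tn + fp)
```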
Results

The eight images in our testing set were segmented with the algorithm described above. The manual segmentations performed by a human expert were compared with the automated segmentations by computing the previously described metrics for each image. Smoothing with Fourier descriptors was performed on the automatically segmented contours. Table 2 shows the mean MAD, mean RMSD, mean HD, specificity, and sensitivity over all eight test images while varying the percentage of descriptors used to construct the smoothed contour. Table 2 demonstrates that as the percentage of descriptors was decreased, all validation metrics improved. However, the improvements were negligible; the greatest improvement was in the RMSD metric using 1% of the Fourier descriptors, where the mean RMSD decreased by 0.2 mm. Table 1 and Figures 10 and 11 demonstrate quite clearly that the proposed algorithm typically under-segments the CL; specificity is generally excellent, while sensitivity varies from very good (0.938) to quite poor (0.457). The high specificity occurred because the contours were initialized inside the CL and grew outward. The sensitivity was generally lower because of the noisy nature of the images. The gradient information captured in k_I (Equation 11) did not perfectly match the true contour of the CL, which caused the curve evolution to stop too soon in many instances. There are no other CL segmentation algorithms with which this work could be compared. Given the mean Hausdorff distance of only 3.4 mm (σ = 2.0 mm) and the images in Figures 10 and 11, it is clear that the regions are being under-segmented, but the degree of under-segmentation is fairly uniform about the true contour.

Discussion

Smoothing with Fourier descriptors resulted in a minor improvement, but it is concluded that the additional computation is not worth the degree of improvement. The advantages of the algorithm are that it can handle arbitrary contours and requires no user intervention beyond placement of the initial contour. Moreover, no serious boundary leaking occurred, resulting in very high specificity in the segmentations. Though similar to the work of Gooding et al. [32], our work differs in that a substantial amount of preprocessing is needed to enhance the gradient formed by CL boundaries. Gooding et al. use a similar gradient-based level set contour evolution method to segment ovarian follicles from 3D images. Follicles are easier to distinguish from their environment with gradient-based level set methods without aggressive preprocessing because the gradient at follicle boundaries is usually quite strong. There are several aspects of this algorithm that could be improved with future work. A combination of texture and intensity properties might be able to locate a small area of the CL with high confidence in which the initial contour could be automatically placed. Recent work on distinguishing CL echotexture from that of ovarian stroma has shown promise [33], and this work will be continued by examining wavelet-based texture features. A challenge in automatic contour placement will be to distinguish CL texture not only from that of stroma, but from that of a corpus albicans, which is non-functional and comprised of more dense tissue. A more rigorous exploration of the algorithm's parameter space (which is of high dimension) could yield a set of parameters that produces superior results. The speed of the contour evolution is clearly an issue. Speedup by a factor of 2–3 is likely possible by re-implementing the methods herein in a programming language such as C or C++. Real-time speeds are not likely achievable using level-set-based contour evolution; thus, more computationally efficient segmentation algorithms will be investigated to see whether they may be suitable for CL segmentation. The algorithm must be tested on CLs imaged in vivo. It is expected that the algorithm will give similar performance on such images. Since the algorithm was successful in locating boundaries between luteal tissue and stroma, we expect it to perform similarly at boundaries where the protruding CL is closely interfaced with organs and tissues surrounding the ovary. Moreover, our preprocessing causes significant blurring in order to "plug" holes in the CL boundary through which leaks into the background may occur, which affects the potential accuracy of boundary location. Images acquired in vivo may permit less aggressive preprocessing, since the area of the image surrounding the ovary will be textured and will not have a near-zero gradient through which the contour can leak. In turn, less blurring would cause less damage to the shape of the boundary, permitting a higher accuracy in boundary location. The evidence that CL morphology is related to function is conflicting, particularly between different laboratories and species. A study by Tom et al. supported the hypothesis that quantitative changes in luteal echotexture in bovine corpora lutea are indicative of changes in its physiologic status and capacity to elaborate progesterone [34]. In mares, luteal area was positively correlated with circulating progesterone levels; however, the presence of a cystic cavity within the CL did not affect the luteal gland's ability to produce progesterone [27,35].
In humans, the mean luteal area of human CLs was shown to be positively correlated with serum progesterone concentrations (r = +0.88) and serum estradiol concentrations (r = +0.62) [36]. The human study is particularly encouraging with regard to the prospect of non-invasive automated interpretation of physiologic information from images, which could obviate blood tests in future generations. In this context, the current study and others of its kind will be crucial to automated, standardized analysis of CL images. The work presented here is also useful for automated recognition of CL in studies of CL morphology and has the potential to be used in different imaging modalities such as histology, ultrasonography, and magnetic resonance imaging [37,38]. Indeed, any study which would benefit from automatic measurements of CL diameter, such as that of Maldonado-Castillo et al., could make use of this approach.

Authors' contributions

The first author is largely responsible for the final draft of the manuscript. RAP participated in writing the introduction, discussion, and conclusion sections of the manuscript and provided "ground truth" for the CL segmentations to be evaluated. GPA and JS provided the image data for the study, advised on the design of the study, and participated in proofreading the manuscript. All authors have read and approved the final manuscript.

Figure 11: Segmentation results for test images 5–8. The green contours are the initial contours, the purple contours are the final (automatic) contours, and the yellow contours are the contours determined by a human expert. The image numbering corresponds to "Image ID" in Table 1.
MODELLING POPULATION GROWTH WITH DELAYED NONLOCAL REACTION IN 2-DIMENSIONS

In this paper, we consider the population growth of a single species living in a two-dimensional spatial domain. New reaction-diffusion equation models with delayed nonlocal reaction are developed in two-dimensional bounded domains combining different boundary conditions. The important feature of the models is the reflection of the joint effect of the diffusion dynamics and the nonlocal maturation delayed effect. We consider and analyze numerical solutions of the mature population dynamics with some well-known birth functions. In particular, we observe and study the occurrences of asymptotically stable steady state solutions and periodic waves for the two-dimensional problems with nonlocal delayed reaction. We also investigate numerically the effects of various parameters on the period, the peak and the shape of the periodic wave as well as the shape of the asymptotically stable steady state solution.

1. Introduction. Mathematical modelling of population dynamics is a fast-growing field, which has been playing more and more important roles in discovering the relation between species and their environment and in understanding the dynamics involved in the corresponding biological and physical processes. A well-known logistic equation with time delay (see [9]) is given by

du(t)/dt = p u(t) [1 − u(t − r)/K],    (1)

where u(t) is the total population of the species at time t ≥ 0, p > 0 is the birth rate coefficient, K > 0 is the carrying capacity of the environment, and r ≥ 0 is the delay parameter reflecting the fact that the current growth rate is governed by the relative size of the population at time r ago, in comparison with the carrying capacity. Then, by simply introducing a diffusion term and incorporating a discrete delay in the birth term, a widely used reaction-diffusion equation with delay and local effect on a two-dimensional bounded domain (see [1], [3], [4], and [9]) is described as

∂u/∂t = DΔu + p u(t, x, y) [1 − u(t − r, x, y)/K],    (2)

where u(t, x, y) is the density of the population of the species at time t ≥ 0 and location (x, y), and D is the diffusion coefficient. In recent years, new mathematical models incorporating delayed effects have been studied. Smith in [10] and Smith and Thieme in [11] derived a scalar delayed differential equation for the population with immature and mature age classes. The maturation period was regarded as the time delay. Using the same idea, a system of delayed differential equations for the mature population in a patchy environment was proposed by So, Wu and Zou in [12]. Furthermore, in [13], they derived a non-local reaction-diffusion equation with time delay in a continuous unbounded one-dimensional spatial domain. Existence of travelling wavefronts for this model was also studied in [13]. Moreover, Liang and Wu [6] considered a species living in a spatially transporting one-dimensional field and derived a reaction advection diffusion equation model with an advection term accounting for the spatial transport and a spatial translation in the delayed nonlocal effect term. Travelling wavefronts for the unbounded one-dimensional domain were studied both theoretically and numerically in [6]. However, there is particular interest in studying the species population with nonlocal delayed effect living in a high-dimensional bounded spatial field.
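To see the delayed dynamics of equation (1) concretely, the scalar equation can be integrated with a simple history buffer. The forward-Euler scheme and the parameter values below are illustrative choices, not taken from the paper; the constant history plays the role of the initial function on [−r, 0].

```python
import numpy as np

def delayed_logistic(p=1.0, K=1.0, r=2.0, u0=0.5, dt=0.01, t_end=100.0):
    """Forward-Euler integration of u'(t) = p*u(t)*(1 - u(t - r)/K)."""
    k_r = int(round(r / dt))              # delay measured in time steps
    n = int(round(t_end / dt))
    u = np.empty(n + 1)
    u[0] = u0
    for step in range(n):
        u_delayed = u0 if step < k_r else u[step - k_r]   # u(t - r)
        u[step + 1] = u[step] + dt * p * u[step] * (1.0 - u_delayed / K)
    return u

# For p*r > pi/2 the positive equilibrium u = K loses stability and sustained
# oscillations appear; with p = 1 and r = 2, p*r = 2 exceeds pi/2 ~ 1.571.
trajectory = delayed_logistic()
print(trajectory[-5:])
```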
It is very important and difficult to investigate the asymptotically stable steady state solutions and the periodic wave solutions for the high-dimensional problems with nonlocal delayed effects. In this paper, we consider the population growth of a single species living in a two-dimensional spatial domain. Only two age classes, that is, immature and mature populations, are assumed for the species, and the fixed maturation period is regarded as the time delay. Both the death rate and the diffusion rate of the mature population are further supposed to be age independent. New reaction-diffusion equation (RDE) models with delayed nonlocal reaction are developed in two-dimensional bounded domains. The important feature of the models is that they reflect the joint effect of the diffusion dynamics and the nonlocal maturation delayed effect in the bounded two-dimensional domain. We focus on the numerical computation and analysis of the mature population dynamics on two-dimensional bounded domains with some well-known birth functions, combined with Neumann and Dirichlet boundary conditions. In particular, we investigate numerically the occurrences of the asymptotically stable steady state solutions and the periodic waves for certain ranges of birth rate and death rate parameters. In addition, the effects of various parameters on the periodic waves and the asymptotically stable steady state solutions are further investigated. Moreover, the initial condition is considered as a function of time and space, and its effect on the mature population dynamics is also studied. The paper is organized as follows. In the next section, we derive the new reaction-diffusion equation models with delayed nonlocal reactions in two-dimensional bounded domains. In Section 3, we introduce numerical methods for simulating the models in two-dimensional bounded domains. We report our numerical results and analyze in detail the dynamical behaviours of the processes in Section 4. Finally, we draw some conclusions in Section 5.

2. RDE Models in 2-D. Starting from the age-structured population dynamics, we consider the population growth of a single species in a two-dimensional bounded domain. The reaction-diffusion equation models with delayed nonlocal reactions will be derived for the mature population in 2-D. Let Ω = [0, L_x] × [0, L_y] be the two-dimensional spatial domain in which the species lives, and let u(t, a, x, y) denote the density of the population of the species at time t ≥ 0, age a ≥ 0, and spatial location (x, y) ∈ Ω. Let D(a) and d(a) denote the diffusion rate and death rate at age a, respectively. Then, the population density function u(t, a, x, y) satisfies

∂u/∂t + ∂u/∂a = D(a)Δu − d(a)u, t ≥ 0, a ≥ 0, (x, y) ∈ Ω.    (3)

At first, let us consider the Neumann boundary condition

∂u/∂x = 0 at x = 0 and x = L_x,  ∂u/∂y = 0 at y = 0 and y = L_y,

for t ≥ 0 and a ≥ 0. Assume that the population has only two age stages, mature and immature. Let r ≥ 0 denote the fixed maturation time for the species and a_l > 0 be the life limit of an individual of the species. Therefore, u(t, a_l, x, y) = 0 at any time t > 0 and any (x, y) ∈ Ω. The total mature population is denoted by w(t, x, y), with

w(t, x, y) = ∫_r^{a_l} u(t, a, x, y) da.

Since only the mature population can reproduce, we have

u(t, 0, x, y) = b(w(t, x, y)),

where b is the birth function. Suppose D_m and d_m are the age-independent diffusion rate and death rate for the mature population, respectively; that is, D(a) = D_m and d(a) = d_m for a ∈ [r, a_l]. Then integrating (3) over a ∈ [r, a_l] leads to

∂w/∂t = D_m Δw − d_m w + u(t, r, x, y),    (7)

using u(t, a_l, x, y) = 0. Further, we can eliminate u(t, r, x, y) from (7), which can be achieved as follows. Let us fix s ≥ 0 and define V^s(t, x, y) = u(t, t − s, x, y) for s ≤ t ≤ s + r.
Then, from (3), it follows, for s ≤ t ≤ s + r, that

∂V^s/∂t = D(t − s)ΔV^s − d(t − s)V^s,    (8)

with the initial condition V^s(s, x, y) = u(s, 0, x, y) = b(w(s, x, y)) and the corresponding boundary conditions. Note that (8) is a linear reaction-diffusion equation in 2-D, so we can solve (8)–(11) by the method of separation of variables. Let V^s(t, x, y) = Ψ(t)Φ(x, y); substituting into (8) leads to an eigenvalue problem for Φ and an ordinary differential equation for Ψ, from which we obtain a series solution for (8)–(11). With the use of the relation u(t, r, x, y) = V^{t−r}(t, x, y), we finally obtain a reaction-diffusion equation model in 2-D with delayed nonlocal reaction and the Neumann boundary condition, in which w₀(t, x, y) is an initial function to be specified. The homogeneous Neumann boundary condition indicates an isolating boundary: no species can go through the boundary. In the same way, we can consider the problem with the Dirichlet boundary condition, the mixed boundary condition, and the periodic boundary condition. Similar 2-D reaction-diffusion equation models are obtained, but with different delayed nonlocal reaction terms. In all of these models, ε reflects the impact of the death rate of the immature population, and α represents the effect of the dispersal rate of the immature population on the growth rate of the mature population. F(x, y, w(t − r, ·)) represents the nonlocal spatial effect with time delay. When α → 0 and ε → 1, that is, if all of the immature population lives to maturity without death and dispersal, then the model equation becomes

∂w/∂t = D_m Δw − d_m w + b(w(t − r, x, y)),

which is the local time delay problem on a bounded domain. Such local delay problems have been widely studied in many papers, for example [16], [7], [14] and [15] for the finite domain case. The problems with delayed nonlocal effects in a one-dimensional bounded domain have recently been studied in [5] by Liang, So, Zhang and Zou. In the following sections, we will focus on the numerical computation and numerical analysis of 2-D reaction-diffusion equation models with delayed nonlocal reaction combined with Neumann and Dirichlet boundary conditions. In particular, we will observe numerically the dynamical behaviours of the population processes.

3. Numerical Schemes on 2-D Domains. In order to investigate numerically the above RDE models in 2-D, we will introduce numerical methods in this section. Let us consider the 2-D model (38) with the Dirichlet boundary condition. Take a uniform spatial grid for the domain Ω = [0, L_x] × [0, L_y] with nodes (x_i, y_j), i = 0, 1, 2, · · · , m_x; j = 0, 1, 2, · · · , m_y, where m_x and m_y are positive integers. Denote the spatial step sizes Δx = L_x/m_x and Δy = L_y/m_y; then x_i = x_0 + iΔx for i = 0, 1, 2, · · · , m_x and y_j = y_0 + jΔy for j = 0, 1, 2, · · · , m_y. Similarly, let T be the final time; a uniform partition of the time interval is defined by t_n = nΔt for n = 0, 1, 2, · · · , k, where k is a positive integer and the time step size is Δt = T/k. Further, denote by W^n_{i,j} the approximate value of w(t_n, x_i, y_j). By using the backward-difference method, the differential operators in (38) can be approximated by

∂w/∂t(t_n, x_i, y_j) ≈ (W^n_{i,j} − W^{n−1}_{i,j})/Δt,

Δw(t_n, x_i, y_j) ≈ (W^n_{i+1,j} − 2W^n_{i,j} + W^n_{i−1,j})/(Δx)² + (W^n_{i,j+1} − 2W^n_{i,j} + W^n_{i,j−1})/(Δy)²,

for n = 1, 2, · · · , k, i = 1, 2, · · · , m_x and j = 1, 2, · · · , m_y. Additionally, in order to obtain a numerical scheme for equation (38) on the spatial and time nodes, we need to deal with the delayed nonlocal effect term F(x_i, y_j, w(t_n − r, ·)).
High-order interpolations can be defined from multilevel values to approximate W^{n−k(r)}_{i,j}, where k(r) = r/Δt is the delay measured in time steps, in order to increase the accuracy. Furthermore, let S_{N,M}(x, y, z_x, z_y) be the approximation to the infinite series kernel in the term F(x, y, w(t − r, ·)), obtained by truncating the series, where N and M are large positive integers; this truncated kernel is used to approximate the delayed nonlocal effect term. Quadrature techniques can be applied to this formula (47). The composite Simpson's method (see [2]) is used to evaluate the delayed nonlocal effect terms F(x, y, W(t − r, ·)) in our computations, and the truncation error is O((Δx)⁴ + (Δy)⁴). Finally, we obtain the finite difference scheme for the 2-D RDE model (38) with delayed nonlocal reaction:

(W^n_{i,j} − W^{n−1}_{i,j})/Δt = D_m[(W^n_{i+1,j} − 2W^n_{i,j} + W^n_{i−1,j})/(Δx)² + (W^n_{i,j+1} − 2W^n_{i,j} + W^n_{i,j−1})/(Δy)²] − d_m W^n_{i,j} + εF^{n−k(r)}_{i,j}.    (48)

Many solution techniques can be applied to solve the system above, such as the Jacobi iterative method, the Gauss-Seidel iterative method, and the SOR iterative method (see [2]). Let us give a short description of these iterative methods. Consider the solution of a general system of equations Ax = b. Let x_i refer to the ith element of the vector x, let k ≥ 1 represent the iteration number, and let a_{ij}, b_i, i, j = 1, 2, · · · , n, be the components of the n × n matrix A and the vector b, respectively. Assume a_{ii} ≠ 0 for i = 1, 2, · · · , n. Then, the Jacobi iterative method can be expressed as

x_i^(k) = (b_i − Σ_{j≠i} a_{ij} x_j^(k−1)) / a_{ii}

for i = 1, 2, · · · , n, and the Gauss-Seidel iterative method has the form

x_i^(k) = (b_i − Σ_{j<i} a_{ij} x_j^(k) − Σ_{j>i} a_{ij} x_j^(k−1)) / a_{ii}

for i = 1, 2, · · · , n. Furthermore, the SOR iterative method is described as

x_i^(k) = (1 − ω) x_i^(k−1) + (ω/a_{ii}) (b_i − Σ_{j<i} a_{ij} x_j^(k) − Σ_{j>i} a_{ij} x_j^(k−1))

for i = 1, 2, · · · , n, where ω > 0 is called the relaxation factor. When ω = 1, this method becomes the Gauss-Seidel iterative method. Normally, the SOR method has the fastest convergence rate if 1 < ω < 2, and the Gauss-Seidel iterative method converges faster than the Jacobi iterative method. The Gauss-Seidel iterative method is applied to solve the system of equations (48) in our numerical computations.
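A sketch of one backward-Euler time step of scheme (48), solved with Gauss-Seidel sweeps, follows. The delayed nonlocal term is passed in as a precomputed array (in the paper it comes from Simpson quadrature of the truncated series kernel); the grid handling, homogeneous Dirichlet treatment, sweep count, and tolerance are illustrative choices, not the authors' code.

```python
import numpy as np

def backward_euler_step(w_prev, f_delayed, Dm, dm, dx, dy, dt,
                        sweeps=200, tol=1e-10):
    """Solve (W - w_prev)/dt = Dm*Lap(W) - dm*W + f_delayed by Gauss-Seidel.

    w_prev, f_delayed: (my+1, mx+1) arrays; W = 0 on the boundary
    (homogeneous Dirichlet) is assumed for simplicity.
    """
    w = w_prev.copy()
    cx, cy = Dm * dt / dx**2, Dm * dt / dy**2
    diag = 1.0 + dt * dm + 2.0 * (cx + cy)
    for _ in range(sweeps):
        max_change = 0.0
        for j in range(1, w.shape[0] - 1):          # interior nodes only
            for i in range(1, w.shape[1] - 1):
                rhs = (w_prev[j, i] + dt * f_delayed[j, i]
                       + cx * (w[j, i + 1] + w[j, i - 1])
                       + cy * (w[j + 1, i] + w[j - 1, i]))
                new = rhs / diag
                max_change = max(max_change, abs(new - w[j, i]))
                w[j, i] = new                        # latest values reused at once
        if max_change < tol:
            break
    return w
```

Replacing the in-place update with a sweep over a copy of w would turn this into the Jacobi iteration; adding the ω-weighted blend described above would give SOR.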
4. Numerical Analyses of 2-D Models. We now study numerically the solutions of the two-dimensional reaction-diffusion equation models with delayed nonlocal reaction derived in Section 2. In our computations, two birth functions widely used in studies of Nicholson's blowflies equation (see, for example, [5], [6], [9], and [13]) are considered: b_1(w) = p w e^{−a w^q} with p > 0, q > 0, and a > 0, and b_2(w) with p > 0, q > 0, and K_c > 0. The value q = 1 has normally been used in the literature; here we use q as a parameter to reflect the intensity of competition for limited resources, which accounts for the crowding effect. Additionally, the initial condition w_0(t, x, y) is given as a space-time function on the domain Ω × [−r, 0]. By using the finite difference method coupled with the iterative technique described in Section 3, we obtain the numerical solutions of the two-dimensional reaction-diffusion equation with delayed nonlocal reaction. Our numerical simulations show that positive stable steady state solutions exist over a large range of the biological parameters. In addition, a positive periodic wave appears when the ratio of the birth parameter to the death parameter exceeds a certain value, and many other parameters also affect the periodic solution of the dynamical system. Furthermore, the initial condition has some impact on the mature population dynamics. Problems with b_1(w). First, we consider the Neumann problem with delayed nonlocal reaction and the birth function b_1(w) = p w e^{−a w^q}. This birth function with q = 1 has been widely used in the well-studied Nicholson's blowflies equation. It increases monotonically up to its peak and then decays almost exponentially to zero. 2-D Neumann problems. Let the spatial domain be Ω = [0, π] × [0, π]. The homogeneous Neumann boundary condition for the total mature population w(t, x, y) is given as ∂w/∂x = 0 at x = 0 and x = π, and ∂w/∂y = 0 at y = 0 and y = π, for t ≥ 0. The initial function w_0(t, x, y) on Ω × [−r, 0] is given as w_0(t, x, y) = w_c + cos(n_x x) cos(n_y y) cos(n_t π t), where n_x, n_y and n_t are positive integers and w_c ≥ 0. The graph of this initial function is a periodic wave, consisting of cosine waves with period lengths 2/n_t along the t-direction, 2π/n_x along the x-direction, and 2π/n_y along the y-direction in the domain [0, π] × [0, π] × [−r, 0], respectively. The value of w_c represents the central value of the initial periodic wave. We take a uniform time grid with step size Δt along the t-direction and a uniform spatial grid with step sizes Δx, Δy along the x-direction and the y-direction, respectively. Example 1. Let the diffusion coefficient D_m = 1 and the death rate d_m = 1 for the mature population, α = 1 and ε = 1 for the immature population, and the maturation age (the time delay) r = 1. Let q = 1 and a = 1 in the birth function b_1(w) = p w e^{−a w^q}. Choose w_c = 2, n_x = 4, n_y = 4, and n_t = 4, so that w_0(t, x, y) = 2 + cos 4x cos 4y cos 4πt. Take Δx = Δy = π/16 ≈ 0.196 and Δt = 0.01. We then observe the solutions numerically for the birth rates p = 5, p = 15, and p = 50. The numerical solutions are shown in Figures 1-3. It is clear that positive solutions exist for these parameters. Figure 1 shows the numerical solutions at the central point (π/2, π/2) of the domain. The horizontal axis indicates the t-direction, with [−1, 0] referring to the initial time interval, and the vertical direction represents the value of the total mature population. In Figure 1, if p is below a certain threshold, such as p = 5, the solution converges to a steady value less than w_c = 2.0. Increasing p into a certain range, for example p = 15, the solution still converges to a steady value, but one bigger than w_c, with oscillation for some time at the beginning. Increasing p further, for instance to p = 50, a periodic solution occurs. Despite using the same periodic initial function, these solution graphs (p = 5, 15, 50) show different properties. In addition, in the periodic-solution case (p = 50), both the period length and the peak value of the solution wave of the population process increase greatly compared with those of the initial periodic wave on t ∈ [−1, 0]. The numerical solutions of the case p = 5 along the x-direction at y = π/2 and at t = −0.6, 0.01, 0.1, 0.5, 3, and 6 are shown in Figure 2. We can see clearly that the graph of the periodic initial wave is a portion of a cosine wave spanning two periods along the x-direction over x ∈ [0, π] when t ∈ [−1, 0], for example at t = −0.6. However, as time increases, the solution graphs remain periodic waves at the beginning, but the amplitude decreases continuously, for instance at t = 0.01 and t = 0.1. A short time later, the solution graphs become straight lines along the x-direction, at t = 0.5 and t = 3. Finally, the straight lines overlap completely (see t = 3 and t = 6), which means that the solution has reached a single steady value. The steady solution of the case with p = 5 and the Neumann boundary condition is a constant. With increasing birth rate p, we obtain a periodic wave when p = 50.
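As a quick sanity check, the birth function and the initial wave of Example 1 can be written down directly; this is our own illustrative snippet, not the authors' code. For q = 1, b_1 peaks at w = 1/a (since b_1'(w) = p e^{−aw}(1 − aw)) and then decays, matching the shape described above.

```python
import numpy as np

# Parameters of Example 1 (from the text): q = a = 1, w_c = 2, n_x = n_y = n_t = 4.
a, q, p = 1.0, 1.0, 5.0

def b1(w):
    """Birth function b_1(w) = p * w * exp(-a * w**q)."""
    return p * w * np.exp(-a * w**q)

def w0(t, x, y, w_c=2.0, nx=4, ny=4, nt=4):
    """Initial periodic wave w_0 = w_c + cos(nx*x) cos(ny*y) cos(nt*pi*t)."""
    return w_c + np.cos(nx * x) * np.cos(ny * y) * np.cos(nt * np.pi * t)

w = np.linspace(0.0, 10.0, 1001)
print("b1 peak near w =", w[np.argmax(b1(w))])                 # ~1.0 = 1/a for q = 1
print("w0 at the center (pi/2, pi/2), t = 0:", w0(0.0, np.pi / 2, np.pi / 2))
```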
The three-dimensional surface of the wave at y = π/2 is shown in Figure 3. In this figure, x ∈ [0, π] and t ∈ [−1, 20], with [−1, 0] representing the initial time interval. It is clearly seen that, after a short time, the solution becomes periodic, and the values along the x-direction at every fixed time t are correspondingly constant, since the Neumann boundary condition is imposed. When the time delay r is increased, both the period length and the peak value increase significantly. Moreover, when r is decreased (see r = 0.5), the solution does not show the periodic property but finally converges to a steady value. It is clear that a large delay leads to the occurrence of the periodic solution and affects both the period length and the peak value. The numerical solutions at the central point (π/2, π/2) of the domain are shown in Figure 5. For n_t = 4, the initial wave contains two periods in the initial time interval [−1, 0], while for n_t = 8 it contains four periods. We note that for cases (i) and (ii) with the same w_c, although the periodic initial curves contain different numbers of periods in the initial time interval, the final solution graphs (the solid line and the dashed line) nearly coincide. Problems with b_2(w). Next, we consider the birth function b_2(w) with p > 0, q > 0, and K_c > 0. For this birth function, we show the effects of the birth rate parameter p as well as the parameter q on the solution of the mature population. The numerical solutions at the central point (π/2, π/2) of the domain are shown in Figure 6. In this figure, when p is smaller, such as p = 1.0, the solution converges monotonically and decreasingly to a steady value, which is less than the central value w_c = 2 of the periodic initial wave. However, when p is increased to p = 2.0, the solution also converges to a steady value, but one bigger than w_c = 2. Furthermore, when p goes beyond a certain range (see the graph for p = 5.0 in Figure 6), the solution wave shows periodic properties with a slight waveform distortion. Moreover, we calculate the case with a constant initial condition. Fix r = 1.0, D_m = 1, d_m = 1, α = 1, ε = 1, K_c = 2.0, and q = 2.0, and then consider two cases: (i) p = 1.25 with the constant initial condition w_0 = 1.0, and (ii) p = 2.0 with the periodic initial condition w_0(t, x, y) = 1.5 + 0.1 cos 4x cos 4y cos 4πt. The three-dimensional surfaces of the solution waves of these two cases at y = π/2 are shown in Figure 7. It is clear that, despite the different initial conditions, the solutions of the two cases converge to respective steady values (constants). Furthermore, these steady values (constants) are less than the average values of the corresponding initial conditions when p varies in a small range, such as p = 1.25 and p = 2.0. This is consistent with the results in Figure 6.

Figure 10. The three-dimensional surface of the periodic wave for the homogeneous Dirichlet boundary condition and the birth function b_1(w) at y = π/2, with p = 800, q = 1, a = 1, r = 1, D_m = 1, d_m = 1, α = 1, ε = 1, and the initial condition w_0(t, x, y) = sin x sin y cos t.

With the other parameters kept the same, the solutions are not periodic waves but converge to respective steady values. The three-dimensional surface of the periodic wave for the homogeneous Dirichlet boundary condition with the parameters above and D_m = 1.0 is shown in Figure 10. It can be seen that, after the initial time interval, the solution quickly shows the periodic feature along the t-direction. Moreover, the graphs of the solutions along the x-direction appear as arc curves with zero values at x = 0 and x = π.
We then show a case with a smaller birth rate p. Let r = 1.0, D_m = 1, d_m = 1, α = 1, ε = 1, and q = 1.0. Take p = 100, and specify the initial condition on [0, π] × [0, π] × [−1, 0] as w_0(t, x, y) = 2 + sin 5x sin y cos 4πt, which does not satisfy the homogeneous Dirichlet boundary condition in the initial time interval. Figure 11 shows the numerical solutions along the x-direction at y = π/2 at different time levels. In this figure, the horizontal direction represents the x-direction of the domain and the vertical direction represents the values of the total mature population. From Figure 11 we can see that the graph at t = −0.3, which belongs to the initial time interval [−1, 0], is a segment of the periodic wave spanning two and a half periods. However, a short time later (for example, at t = 0.05 and t = 0.1), the solution graph becomes a symmetric curve with zero values at x = 0 and x = π. Gradually, as time t increases, the solution curve becomes a smooth arc at t = 0.6. Finally, it tends to a single smooth arc: the solutions overlap completely at t = 1.0 and t = 3.0 in Figure 11. This implies that the solution of this case converges to a steady solution within a certain time. Furthermore, the curve at t = 0.1 lies above the steady curve, while the one at t = 0.6 lies below it, which means that the solution curve oscillates at the beginning. Moreover, although the initial condition does not match the homogeneous boundary condition, the final solution keeps the same properties as in the case with a compatible initial condition. Figure 12 shows the three-dimensional surface of the asymptotically stable steady solution. It can be seen that, after the initial time interval, the solution converges quickly to a stable steady solution as time t increases. The shape of the solution along the x-direction settles into a steady arc curve with zero values at x = 0 and x = π. Furthermore, we observe numerically the effect of a large diffusion rate D_m of the mature population on the asymptotically stable steady state solutions for the homogeneous Dirichlet boundary condition. The solutions are shown in Figure 13. It is clear that the shapes of the asymptotically stable steady state solutions change as D_m increases.

Figure 12. The asymptotically stable steady solution for the homogeneous Dirichlet boundary condition and the birth function b_1(w) at y = π/2, with p = 100, q = 1, a = 1, r = 1, D_m = 1, d_m = 1, α = 1, ε = 1, and the initial condition w_0(t, x, y) = 2 + sin 5x sin y cos 4πt.

Example 7. Finally, we consider the Dirichlet problems with the birth function b_2(w) and illustrate the effects of the time delay r on the mature population, presenting the numerical results. Let the death rate d_m = 1, α = 1, ε = 1, q = 2, K_c = 2, Δx = Δy = π/16 ≈ 0.196, and Δt = 0.01. The initial condition is w_0(t, x, y) = sin x sin y cos t. Consider case (i): vary r from 1.0 to 2.0 with D_m = 1.0 and p = 50 fixed; and case (ii): fix r = 1.0 and D_m = 10.0, and vary p through p = 250, p = 500, and p = 1000. The numerical solutions at the central point (π/2, π/2) of the domain for cases (i) and (ii) are shown in Figures 14 and 15. It is clear that, for this Dirichlet problem with the birth function b_2(w), the value of the time delay affects the periodic solutions (see Figure 14) not only through the period sizes and the peak values of the periodic waves, but also through the wave shapes. As the time delay increases to r = 2.0, the solution graph oscillates intensely and persistently at the beginning.
Moreover, in Figure 15, when the diffusion rate is fixed at D_m = 10.0 and the birth rate p is increased over a large range from p = 250 through p = 500 to p = 1000, the solution curves become much more sensitive. Remarkably, when p = 250 the solution converges to a steady solution, while for p = 500 and p = 1000 the solution waves show periodic characteristics, with flat roofs in the graph for p = 500 and shape distortion in the graph for p = 1000.

Figure 15. The distributions of the mature population with the birth function b_2(w) and the homogeneous Dirichlet boundary condition when the birth rate p is changed. The data are r = 1.0, K_c = 2, q = 2, d_m = 1, α = 1, ε = 1, and w_0(t, x, y) = sin x sin y cos t; the diffusion rate is D_m = 10.0 and the birth rate p takes the values 250, 500, and 1000.

5. Conclusions. In this paper, we developed new reaction-diffusion equation (RDE) models with delayed nonlocal reaction for the growth dynamics of a single-species population living in a two-dimensional bounded domain. The models reflect the joint effect of the diffusion dynamics and the nonlocal delayed maturation effect. The mature population dynamics with two widely used birth functions are investigated numerically in Section 4. We observe that when the ratio of the birth rate parameter p to the death parameter d_m lies in a certain range, the solution of the mature population is positive and converges to a stable steady solution along the t-direction. Outside of this range, positive periodic wave solutions occur. Additionally, the numerical results show that the period, the peak, and the shape of the periodic wave can be affected by other parameters, for example the value of the time delay r, the diffusion rate D_m, and even the birth function parameters K_c and q. Meanwhile, the shape of the asymptotically stable steady solution is also affected by these parameters. The mature population under the homogeneous Dirichlet boundary condition tends to extinction if the diffusion rate D_m of the mature population becomes extremely large. Furthermore, the numerical computations also show that the initial condition has some effect on the mature population dynamics without essential changes. The theoretical analysis of the properties of these models, especially the relations between the parameters and the existence of the periodic waves, will be the next step of this work in the near future.
Effect of Aging on the Microstructures and Mechanical Properties of AZ80 and ZK60 Wrought Magnesium Alloys In this paper, the effects of solution and aging treatments on the microstructures and mechanical properties of AZ80 and ZK60 wrought magnesium alloys are investigated by optical microscopy, scanning electron microscopy, and mechanical testing. The results show that both the tensile strength and the elongation of the AZ80 alloy first increase and then decrease with increasing aging temperature, with peak values at an aging temperature of 170°C. The hardness of the ZK60 alloy first increases and then decreases with increasing aging temperature, reaching its peak value at 170°C; the impact toughness of the alloy behaves in just the opposite way. Moreover, the ZK60 alloy performs well in both impact toughness and the other properties at aging temperatures from 140 to 200°C. Introduction Magnesium, because of its high specific strength, high shock-absorbing capacity, good electromagnetic shielding performance, and good recyclability [1], is widely used in the aerospace, automobile, and military industries. At present, the most widely used magnesium alloys are AZ91 and AZ31; however, these alloys have comparatively low strength. AZ80 and ZK60, because of their high strength, have attracted a great deal of interest [2][3]. Many researchers have investigated the effects of texturing processes and alloying elements on structure and properties. For example, Toshiji Mukai attempted to improve the ductility of magnesium alloy by controlling the alloy's grain structure [4]; Hidetoshi Somekawa studied the mechanical properties of AZ31 magnesium alloy using equal channel angular extrusion (ECAE) [5]; Song Pei-wei et al. studied the microstructure and mechanical properties of magnesium alloy produced by reciprocating extrusion of magnesium-alloy ingot [6]; and N. Balasubramani et al. studied the aging precipitation behavior, microstructure, and mechanical properties of magnesium alloys modified by alloying additions [7][8][9]. Other researchers have attempted to improve the properties of magnesium alloys through heat treatment. Ma Yan-long studied the effect of heat treatment on the microstructure of ZK60 magnesium alloy [10]; Wang Hui-min studied the aging process of Mg-Al alloys [11]; D. Duly et al. studied the solution process of Mg-Al alloys [12]; and Porter and Xiao Xiao-ling et al. studied the precipitation modes of the Mg17Al12 phase in Mg-Al alloys [13,14]. However, research on the effects of solution and aging on the microstructures and mechanical properties of AZ80 and ZK60 wrought magnesium alloys is rather limited. Therefore, the objective of the present work was to investigate these effects so that the laws of structure formation could be understood, providing guidance for the alteration and control of the structure of the product phases. Experimental procedures Mg-8.9wt%Al-0.53wt%Zn and Mg-(5.0-6.0)wt%Zn-(0.3-0.9)wt%Zr magnesium-alloy ingots were used for this experiment. First, the alloys were machined into cylinders 15 mm in diameter along the axial direction. Second, the annealing treatment: the slabs were heated at 400°C for 12 h in a furnace with a heating rate of 15°C/min and then air cooled to room temperature.
Third, the solution treatment: after annealing, the AZ80 magnesium alloy was heated at 420°C for 5 h and then air cooled to room temperature, and the ZK60 magnesium alloy was heated at 500°C for 2 h and then air cooled to room temperature. Finally, the aging treatment: the AZ80 and ZK60 magnesium alloys were heated at 110°C, 140°C, 170°C, 200°C, and 230°C separately for 10 h and then air cooled to room temperature. After heat treatment, tensile tests were carried out on a WEW-E100D electronic universal testing machine (at a crosshead speed of 3 mm/min; the specimen shape is shown in Fig. 1). The impact strength (specimen shape shown in Fig. 2) was measured on an XJJ-50 strut-beam impact machine at room temperature, and the hardness was measured with an HB-3000 Brinell hardness tester (three groups were tested, and the reported data are the averaged values). The microstructure was observed with an S2530 scanning electron microscope and a ZEISS digital microscope. Fig. 3 shows the effect of the aging temperature on the impact toughness. It can be seen that the impact toughness of the AZ80 magnesium alloy decreases with increasing aging temperature. However, the impact toughness of the ZK60 magnesium alloy first decreases and then increases with increasing aging temperature, reaching its minimum value at 170°C. Fig. 4 shows the effect of the aging temperature on the ultimate tensile strength. It can be seen that the aging temperature has a similar effect on the ultimate tensile strengths of the AZ80 and ZK60 magnesium alloys: they first increase and then decrease with increasing aging temperature, and the ultimate tensile strength curve of the ZK60 alloy varies more gently than that of the AZ80 alloy. At 170°C and 200°C, respectively, the ultimate tensile strengths of the AZ80 and ZK60 magnesium alloys reach their peak values; the peak ultimate tensile strengths of the ZK60 and AZ80 alloys are 263 MPa and 258 MPa, respectively. Generally speaking, the ZK60 magnesium alloy has a higher ultimate tensile strength than the AZ80 alloy. This may be attributed to the β-Mg17Al12 phase that precipitates from the supersaturated α-Mg solid solution during the aging process: the amount of β-Mg17Al12 phase increases and its morphology changes greatly with increasing aging temperature, which changes the alloy's ultimate tensile strength.

Fig. 3 The effect of the aging temperature on the impact toughness of AZ80 and ZK60 alloys. Fig. 4 The effect of the aging temperature on the ultimate tensile strengths of AZ80 and ZK60 alloys.

Fig. 5 shows the effect of the aging temperature on the elongation. It can be seen that the aging temperature has a similar effect on the elongations of the AZ80 and ZK60 magnesium alloys: they first increase and then decrease with increasing aging temperature, and the elongation curve of the ZK60 alloy varies more gently than that of the AZ80 alloy. In addition, the elongation of the ZK60 magnesium alloy is greater than that of the AZ80 magnesium alloy in both the low-temperature interval (between 110 and 140°C) and the high-temperature interval (between 200 and 230°C).
The decrease in elongation may be explained as follows: the resistance to dislocation motion gradually increases as the β-Mg17Al12 phase precipitates with increasing aging temperature, which results in dislocation pile-up and increased stress, so the elongation gradually decreases. Fig. 6 shows the effect of the aging temperature on the hardness. It can be seen that the hardness of the AZ80 magnesium alloy increases with increasing aging temperature, whereas the hardness of the ZK60 alloy first increases and then decreases, reaching its peak value at 170°C. The hardness of the AZ80 magnesium alloy is greater than that of the ZK60 alloy in both the low-temperature interval (between 110 and 140°C) and the high-temperature interval (between 200 and 230°C). The increase in the hardness of the AZ80 alloy may be explained as follows: with increasing aging temperature, the β-Mg17Al12 phase precipitating from the supersaturated α-Mg solid solution gradually hardens the alloy, which also decreases its toughness; when the temperature reaches 230°C, nucleation becomes very easy and the grains are refined, so the hardness increases. The change in the hardness of the ZK60 alloy may be explained as follows: the amount of the precipitated phase gradually increases with increasing aging temperature, which increases the hardness; however, as the temperature increases further, the grains of the precipitated phase grow, which results in over-aging, so the hardness decreases. It can be seen from Fig. 7(a) that the microstructure of the as-cast magnesium alloy (α-Mg solid solution and β-Mg17Al12 precipitated phase) is characterized by uneven network structures with serious segregation; the structure becomes uniform and the segregation is eliminated after homogenization treatment (Fig. 7(b)); the majority of the eutectics dissolve to form a homogeneous solid solution after solution treatment (Fig. 7(c)); during low-temperature aging (Fig. 7(d-e)), a portion of the secondary phase precipitates along the grain boundaries, and the structures are characterized by irregular bone-shaped or irregular petal-shaped distributions with aging twins. As the aging temperature increases, the amount of the secondary phase increases and the aging twins multiply; when the temperature reaches 170°C (Fig. 7(f)), the amount of the secondary phase reaches its peak value and the structures are characterized by a uniform, tiny sheet-like distribution; when the aging temperature increases further, growth of the secondary-phase grains is observed (Fig. 7(g)); and when the temperature reaches 230°C, the structure turns into a network distribution (Fig. 7(h)). It can be seen that the microstructure is α-Mg solid solution in the as-cast condition (Fig. 8(a)). A portion of the secondary phase precipitates along the grain boundaries during low-temperature aging treatment (Fig. 8(d) and Fig. 8(e)), and with increasing aging temperature the amount of the secondary phase increases. When the temperature reaches 200°C, as shown in Fig. 8(g), the amount of the secondary phase reaches its peak value. When the temperature increases further, the precipitated phase begins to decrease.
Comparing AZ80 with ZK60, it can be seen that the structural evolution of the ZK60 magnesium alloy is similar to that of AZ80: the microstructures of the as-cast ZK60 magnesium alloy are characterized by uneven network structures with serious segregation (Fig. 8(a)), but the grain structure of ZK60 is finer than that of AZ80 at the same aging temperature. Conclusions (1) For the AZ80 magnesium alloy, the hardness increases and the plasticity decreases with increasing aging temperature. The tensile strength and elongation first increase and then decrease with increasing aging temperature. (2) For the ZK60 magnesium alloy, the hardness first increases and then decreases with increasing aging temperature. In contrast, the impact toughness first decreases and then increases with increasing aging temperature. At an aging temperature of 170°C, the alloy has its maximum hardness and minimum impact toughness. The tensile strength and elongation of the alloy first increase and then decrease with increasing aging temperature. (3) The optimum aging temperature of the AZ80 magnesium alloy is 170°C, and that of the ZK60 magnesium alloy is 200°C. At the same aging temperature, the ZK60 alloy performs better than the AZ80 alloy.
Stable Actinide π Complexes of a Neutral 1,4-Diborabenzene Abstract The π coordination of arene and anionic heteroarene ligands is a ubiquitous bonding motif in the organometallic chemistry of d-block and f-block elements. By contrast, related π interactions of neutral heteroarenes, including neutral bora-π-aromatics, are less prevalent, particularly for the f-block, due to less effective metal-to-ligand backbonding. In fact, π complexes with neutral heteroarene ligands are essentially unknown for the actinides. We have now overcome these limitations by exploiting the exceptionally strong π donor capabilities of a neutral 1,4-diborabenzene. A series of remarkably robust, π-coordinated thorium(IV) and uranium(IV) half-sandwich complexes were synthesized by simply combining the bora-π-aromatic with ThCl4(dme)2 or UCl4, representing the first examples of actinide complexes with a neutral boracycle as a sandwich-type ligand. Experimental and computational studies showed that the strong actinide–heteroarene interactions are predominately electrostatic in nature, with distinct ligand-to-metal π donation and without significant π/δ backbonding contributions. Introduction The π complexation of aromatic carbocycles by d-block and f-block metal centers takes a unique position in the history of organometallic chemistry, with landmark moments such as the discoveries of ferrocene, [1] bis(benzene)chromium [2] and uranocene. [3] In fact, this concept was one of the first to be successfully transferred from transition metal to actinide chemistry, [3b, 4] and such species have always been of high value for studying f-element–ligand bonding and determining critical parameters such as the extent of 5f-orbital participation and metal–ligand covalency. Nowadays, most prototypic aromatic carbocycles have been incorporated as unsupported sandwich-type ligands into numerous actinide π complexes, [5] including anionic C4-C8 [6] and neutral C6 rings, [7] as well as anionic fused aromatics such as naphthalene [8] or pentalene. [9] When it comes to related heteroarene complexes, the diversity becomes significantly smaller, and a strong imbalance in favor of the d-block transition metals is encountered. Thus, π-ligated heteroarene complexes of the d-elements have been realized for a large number of aromatic heterocycles from all across the periodic table, including B-based systems (BNC2, B2N2, BC4, BNC3, BC5, B2C4, BNC4, BC6) [10] and benzene analogs EC5 (E = B-Ga, Si-Sn, N-Sb), [11] to name only a few. For the f-elements, π complexation of anionic BNC3, [12] BC5, [13] AlC5, [14] NC4, [5h,7i,15] N2C3, [16] C2P2, [6a] PC4, [17] P3C2, [18] PN2C2, [19] P4/As4, [20] and P5 [21] has been verified for selected lanthanide and actinide molecules. By contrast, f-block complexes bearing neutral heteroarenes as sandwich-type ligands are exceedingly rare, being limited to [(tBu3-C5H2P)2Ho] (I). [22] Pyridine(diamine) uranium species of type II formally also contain a neutral nitrogen heterocycle; however, π coordination to uranium involves reduced anionic pyridine rings (Figure 1). [23] We note that π complexation of boracycles is still unknown for the actinides in general.
At this point, we wondered what requirements had to be met by the actinide metal center and a neutral heteroaromatic ligand to create more stable π interactions. In general, the strength of metal–arene π bonding is dictated by two factors: (i) electrostatics, which explains the preference of "hard" actinide cations for π complexation of "hard" anionic (hetero)arene ligands; and (ii) metal-to-ligand backdonation, which is the dominant part of the bonding interactions in π complexes of neutral (hetero)arenes. [24] We reasoned that the electrostatic term is maximized by employing high-oxidation-state metal precursors (ThIV, UIV-VI), which, at the same time, will limit ligand reduction processes, thus allowing the generation of species with truly neutral heteroaromatic ligands. This, however, will significantly lower the backdonation capabilities of the actinide metal center, so electron-rich heteroarenes with very strong π donor strengths will be required as an antidote. Recent studies in our group have highlighted the exceptional π donor strength of the bora-π-aromatic 1,4-bis(cAAC)2-1,4-diborabenzene [1; dbb; cAAC = cyclic (alkyl)(amino)carbene] [25] in remarkably stable Group 6 half-sandwich complexes [(dbb)M(CO)3] (M = Cr, Mo, W). [10s] We were thus confident that the dbb ligand might be a suitable choice for generating the first stable actinide π complexes with neutral heteroarene ligands. Results and Discussion When ThCl4(dme)2 and UCl4 were allowed to react with 1.1 equivalents of the neutral bora-π-aromatic 1 in a donor solvent (thf, MeCN) under refluxing conditions (12 h), either purple suspensions (Th) or deep-red solutions (U) formed, from which the π complexes [(dbb)(L)AnCl4] (2a: An = Th, L = thf; 2b: An = Th, L = MeCN; 3a: An = U, L = thf; 3b: An = U, L = MeCN) were isolated as red solids in moderate to good yields (Scheme 1). Compounds 2a/b and 3a/b are thermally robust, even in the presence of an excess of the respective donor solvent, which strongly contrasts with the labile π coordination of benzene and its methylated analogs in related species such as [(η6-C6HnMe6−n)UX3] (X = BH4, AlCl4) [7a,d] and [(η6-C6Me6)2U2Cl7][AlCl4]. [7b] However, when dissolved in thf, the acetonitrile ligand of 2b and 3b is readily and quantitatively displaced to afford the thf complexes 2a and 3a. No changes were observed upon dissolving 2a and 3a in MeCN. This reactivity is not surprising given the better σ donor properties of thf and the oxophilicity of the actinides. The fact that ligand displacement reactions preferably occur at the Lewis base site of [(dbb)(L)AnCl4] without affecting the π coordination of the dbb ligand is remarkable, and clearly emphasizes the unique strength of these π interactions. By contrast, complexes 2a/b and 3a/b proved highly sensitive under redox conditions. In our hands, chemical oxidation or reduction consistently led to decomposition of 2a/b and 3a/b to afford free dbb 1 and unknown actinide species. It should be noted that 2a/b and 3a/b were also formed when the reactions were carried out in chlorinated (CH2Cl2) or aromatic solvents (benzene/toluene) in the presence of 1 equivalent of thf/MeCN, although yields were lower in these cases. In the absence of donor solvents, however, no reaction occurred for UCl4 (presumably because of its low solubility), and ThCl4(dme)2 was partly converted to the dme-bridged dimer [{(dbb)ThCl4}2-κ-dme] (4) (optimized conditions: fluorobenzene, ΔT, 20 h, 10% isolated yield; Figure S23).
Thus, our initial experiments indicated that actinide π complexes of the neutral diborabenzene 1 are readily accessible simply by combining the ligand with standard actinide reactants. We note that the simplicity of this approach is very uncommon in the condensed phase, keeping in mind that the π complexation process usually requires preactivation of the metal center under ligand-abstracting conditions such as reduction, oxidation, photolysis, or halide abstraction. We next turned our attention to the electronic structure of the actinide metal centers of 2a/b and 3a/b. Formally, oxidation states of +IV are required to exclude the occurrence of ligand reduction processes upon dbb coordination and to ascertain the neutral nature of the dbb π ligand. For 2a/b, their chemical composition and solution NMR spectra in the normal diamagnetic range strongly indicate an oxidation state of +IV for the thorium centers, even though a coupled biradical character due to non-innocence of the dbb ligand cannot be ruled out completely. The 1H NMR spectra of 2a/b confirm the presence of a 1:1 ratio of coordinated dbb and Lewis base with their expected signal patterns. Noteworthy are the chemical shifts of the aromatic dbb ring protons (2a: δH = 7.18; 2b: δH = 7.78), which remain almost unaltered from that of the free ligand 1 (δH = 7.31). Similarly, the 11B NMR resonances of the boron nuclei (2a: δB = 27.5; 2b: δB = 27.8) are only slightly shifted to higher frequencies upon complexation (1: δB = 24.8). [25] By contrast, the related Group 6 half-sandwich complexes [(dbb)M(CO)3] (M = Cr, Mo, W) exhibited a significant high-field shift of both the 1H (δH = 4.74-4.97) and 11B NMR (δB = 6.0-7.0) resonances of the diborabenzene ligand. [10s] This behavior was interpreted in terms of strong metal-to-ligand backbonding contributions from the electron-rich Group 6 metal centers into the empty dbb ligand orbitals, creating highly covalent bonding interactions. Consequently, the present findings indicate a fundamentally different bonding picture for 2a/b, with larger electrostatic and rather small metal-to-ligand backbonding contributions, in line with the higher oxidation state of ThIV and its lack of f electrons. For 3a/b, magnetic susceptibility measurements also support an oxidation state of +IV for the uranium centers, thus verifying the presence of neutral dbb π ligands in 3a/b as well. In solution, 3a/b show paramagnetic behavior at room temperature with paramagnetically shifted and broadened resonances.

Scheme 1. Reactivity of dbb 1 with ThCl4(dme)2 and UCl4 to afford actinide half-sandwich complexes 2 and 3.

The exact nature of the An–dbb π interaction in complexes 2a/b and 3a/b was assessed by DFT calculations. To this end, we studied the electronic structures of 1, 2a, 3a, the hypothetical benzene analogues [(η6-C6H6)(thf)AnCl4] (An = Th, U), and some literature-known [(η6-C6HnMe6−n)UX3] (X = BH4, AlCl4) species, applying 5f0d0, 5f2d0 and 5f3d0 electron configurations for the ThIV, UIV, and UIII centers, respectively. The computed structural and spectroscopic parameters of 2a and 3a agree very well with the experimentally determined values (Supporting Information).
The calculations suggest that the An–dbb interactions in 2a and 3a should be viewed as largely electrostatic in nature with small but distinct orbital contributions, which coincides with the only marginal changes in NMR shifts upon complexation of dbb by ThIV. Thus, delocalization indices (QTAIM DIs), which serve as a measure of the bond covalency for a given pair of atoms, [27] show rather small values for the An–C bonds, and the shapes of the relevant occupied orbitals resemble those of the frontier molecular orbitals HOMO and HOMO−1 of free dbb 1. [25] While the HOMO of 2a illustrates the π donor interaction of the delocalized aromatic π system of 1 (HOMO) with thorium's vacant 6d orbitals, HOMO−1 and HOMO−16 reflect ligand-to-metal π bonding emanating from C=C-centered ligand π orbitals (HOMO−1) into empty 5f and 6d orbitals of thorium. It should be emphasized here that MOs associated with metal-to-ligand π/δ backbonding could not be located by our calculations. Similar interactions were also derived for the uranium complex 3a (Figure S27), although the presence of two f electrons in principle allows for metal-to-ligand backbonding interactions. Spin-density calculations, however, have shown that the two unpaired f electrons predominately reside at the UIV center, with small negative spin densities at the chlorine atoms (Figure 4), making such metal-to-ligand π/δ backbonding contributions rather weak in nature. The isolation of crystalline 2a/b and 3a/b allowed us to elucidate their solid-state structures by X-ray diffraction analyses (Figures 5a/b, S18, S22). All complexes exhibit pseudo-octahedral geometries with four chloride ligands in equatorial positions and one molecule each of Lewis base L (thf, MeCN) and dbb mutually trans in the axial positions. Unexpectedly, the dbb heteroarene is not perfectly planar; instead, complexation results in minor deviations of the dbb ligand from planarity in all cases, that is, the two boron atoms are slightly bent out of the ring plane (by 6.1 to 8.8°) away from the metal center. Thus, the hapticity of the An–dbb π coordination seems to be best described as η4 with close An–Cdbb contacts. However, theoretical evidence of weak covalent An–B interactions suggests that the bonding picture is not that simple and that η6-type contributions have to be considered as well. Hence, the true bonding situation most likely lies within the η4-η6 continuum, but definitely on the η4 side. Notwithstanding its hapticity, theoretical and experimental considerations clearly show that the diborabenzene ligand is bound to the actinide centers of 2a/b and 3a/b via its fully conjugated π system, and not via interaction of the actinide metal centers with two isolated C=C double bonds of the heteroarene, as might be reasoned from the strong η4 contributions. First of all, our computations emphasize the significance of ligand-type orbitals for An–dbb bonding, which mainly involve HOMO and HOMO−1 of free dbb 1, orbitals of π symmetry spanning the whole B2C4 heterocyclic backbone (resonance structure 1, Figure 5c). More importantly, the type of π coordination active in molecules 2a/b and 3a/b is expected to directly affect their spectroscopic and structural properties. Hence, interaction of the actinide centers with two isolated C=C double bonds would require the unfavorable breakup of aromatic π conjugation within dbb, resulting in unfavorable biradical or charge-separated resonance structures 1′ and 1′′ (Figure 5c). In our hands, the presence of such resonance structures can be excluded for 2a/b and 3a/b.
While any biradical character (1′) can be ruled out on the basis of EPR spectroscopic studies, charge separation (1′′) appears very unlikely upon close inspection of the solid-state structures of 2a/b and 3a/b. Thus, π coordination of dbb via resonance structures 1′ and 1′′ would most likely cause significant elongation of the endocyclic B–Cdbb bonds of the dbb ligand, while, as a consequence, the exocyclic B–CcAAC and CcAAC–NcAAC bonds would become shorter and longer, respectively. For 2a/b and 3a/b, however, the opposite is true: the B–Cdbb (1.507(7)-1.526(3) Å) and CcAAC–NcAAC distances (1.313(6)-1.322(5) Å) are shorter than in 1, while the Cdbb=Cdbb (1.395(4)-1.403(6) Å) and B–CcAAC distances (1.584(6)-1.597(5) Å) are longer. [25] Consequently, the X-ray diffraction data clearly support our theoretical finding that An–dbb π coordination involves the whole aromatic B2C4 framework. IR spectroscopic studies on 1, 2a/b and 3a/b in the solid state also support this π bonding picture (Figures S12-S17). Here, the IR bands associated with the endocyclic C=C bonds are shifted to lower energies upon complexation of dbb, that is, from 1412-1472 cm−1 in 1 to 1365-1423 cm−1 in 2a/b and 3a/b. At the same time, the strong IR absorption of the CcAAC–NcAAC bond is shifted to higher wavenumbers (cf. 1: 1423 cm−1; 2a/b, 3a/b: 1454-1458 cm−1), which is consistent with stronger CcAAC–NcAAC bonds in the π complexes 2a/b and 3a/b (the assignment of IR bands is supported by frequency calculations). The An–C bond lengths were determined to be in the range of 2.831(2) to 2.948(4) Å (2a: Th–dbbcent 2.586 Å; 2b: Th–dbbcent 2.556 Å; 3a: U–dbbcent 2.585 Å; 3b: U–dbbcent 2.490 Å). We note that these contacts are quite short, which illustrates the strong actinide–heteroarene interaction in 2a/b and 3a/b. For 2a/b, a CSD search on Th complexes featuring neutral π-arene ligands provided considerably longer Th–Ccent distances (2.706-2.950 Å). [7g,h,l,m] The U–C distances of 3a/b, however, strongly resemble those in [(η6-C6Me6)UX3] (X = BH4, AlCl4; av. U–C 2.92 Å), [7a,d] while the An–B separations are in agreement with theory and with rather weak An–B bonding interactions. Overall, the experimental and theoretical data suggest that neutral dbb is tightly bound to ThIV and UIV in an η4-type coordination mode via π interactions involving the whole aromatic π system (mediated primarily by electrostatics in combination with distinct covalent bonding contributions; ligand-to-metal donation; no notable π/δ backbonding). Finally, we set out to overcome the well-known tendency of actinide ions to preferentially bind "hard" donor ligands and tried to incorporate the "soft" Lewis base PMe3 into dbb complexes of the type [(dbb)(L)AnCl4]. Thus, the reactions of ThCl4(dme)2 and UCl4 with 1.1 equivalents of dbb 1 in the presence of PMe3 in benzene under refluxing conditions resulted in the generation of the PMe3-substituted species [(dbb)(PMe3)AnCl4] in solution. Due to the labile An–PMe3 bonds, however, only [(dbb)(PMe3)ThCl4] (2c) exhibited sufficient stability to allow isolation (in a low yield of 11%) (Scheme 1, Supporting Information), while its uranium analog eluded isolation and could only be observed spectroscopically. Red crystalline 2c represents the most sensitive and least stable species in the [(dbb)(L)AnCl4] series, readily reacting in polar/coordinating solvents and decomposing under vacuum, which makes its purification extremely difficult. Nevertheless, its identity was clearly verified by NMR spectroscopy and X-ray diffraction studies (Figure 6).
In solution, diamagnetic 2c shows a 11B NMR resonance at δB = 27.7 for the π-ligated diborabenzene ligand (cf. 2a: δB = 27.5; 2b: δB = 27.8). The ThIV–PMe3 interaction of 2c is characterized by a 31P NMR signal with a chemical shift of δP = −30.6 in solution, and by a Th–P bond distance of 3.053(1) Å in its solid-state structure. The other structural parameters are roughly the same as those of compounds 2a and 2b. We were surprised to find that dative Th–P interactions are still rare; only one paper has reported related stable Th–P dative bonding interactions involving non-chelating tertiary phosphine ligands, namely for [(BH4)4Th(PR3)2] (R = Me, Et). [29] In addition, only a few species containing the bidentate 1,2-bis(dimethylphosphino)ethane ligand (dmpe) are known that are suitable for comparison. [30] Here, the 31P NMR chemical shifts range from δP = −33.3 to −4.5 (cf. [(BH4)4Th(PMe3)2]: δP = −22.2), and the Th–P bond lengths range from 3.096(3) Å in [(BH4)4Th(PEt3)2] to 3.237(2) Å in [Cp2(CH2Ph)4Th(dmpe)2]. Nevertheless, the Th–P interaction of 2c must still be considered rather weak, and the PMe3 ligand is prone to dissociation in the presence of "hard" Lewis bases. When 2c is dissolved in either thf or MeCN, the "soft" PMe3 ligand is replaced instantaneously, and 2c converts quantitatively into its analogs 2a and 2b, respectively, consistent with the preferred coordination of "hard" donor ligands to thorium. Conclusion In summary, we have succeeded in realizing the first actinide-based molecules with an aromatic boracycle as a sandwich-type π ligand, [(dbb)(L)AnCl4]. Complexes 2a-c and 3a/b are remarkably stable even in the presence of coordinating solvents, which contrasts with the labile π coordination often observed for related species with unsupported benzene ligands. Thus, ligand displacement reactions proceed at the Lewis basic site trans to the dbb ligand without affecting the actinide–heteroarene bonding. A combination of experimental and theoretical techniques was used to verify the neutral nature of the diborabenzene ligand and its π-type coordination to the ThIV and UIV metal centers. The unique strength of the actinide–heteroarene interaction is closely related to the outstanding π donor capabilities of the aromatic dbb heterocycle, enabling (i) strong electrostatic interactions with the electron-poor actinide centers, and (ii) distinct covalent orbital interactions, primarily via ligand-to-metal electron donation and without notable backbonding contributions.
Emerging technology has a brilliant future: the CRISPR-Cas system for senescence, inflammation, and cartilage repair in osteoarthritis Osteoarthritis (OA), one of the most common types of aseptic inflammation of the musculoskeletal system, is characterized by chronic pain and whole-joint lesions. With cellular and molecular changes including senescence, inflammatory alterations, and subsequent cartilage defects, OA eventually leads to a series of adverse outcomes such as pain and disability. CRISPR-Cas-related technology has been proposed and explored as a gene therapy, offering potential gene-editing tools that are in the spotlight. Considering the genetic and multigene regulatory mechanisms of OA, we systematically review current studies on CRISPR-Cas technology for improving OA in terms of senescence, inflammation, and cartilage damage, and we summarize various strategies for delivering CRISPR products, hoping to provide a new perspective on the treatment of OA by taking advantage of CRISPR technology. Introduction Osteoarthritis (OA), one of the most common types of aseptic inflammation of the musculoskeletal system, is characterized by defects of hyaline cartilage, synovial inflammation, subchondral bone loss, and tissue hypertrophy [1]. Its main clinical symptoms are chronic pain and whole-joint lesions, and eventually disability [2,3]. The prevalence of OA has increased steadily because of obesity, trauma, and the aging population [4]. Despite its high prevalence, there are no drugs that can completely inhibit the progression and eliminate the symptoms of OA, and the medications recommended by guidelines usually have dose-dependent toxicity [5,6]. Considering that OA has a high genetic component, estimated at 40-60%, gene therapy may provide more valuable ideas for the treatment of OA [7].
Currently, molecular biology, genetics, and genomics are facing a historic opportunity. Since clustered regularly interspaced short palindromic repeats (CRISPR) were discovered in the 1980s, CRISPR and the CRISPR-associated system (Cas) have rapidly developed into a third generation of gene-editing tools. Essentially, CRISPR is a defensive sequence within the prokaryotic genome, and Cas represents the genes located near the CRISPR locus [8]. In a broad sense, the core components of the CRISPR-Cas system are the CRISPR locus, the related Cas genes, and the RNA-guided adaptive immune system encoded by the related genes [9,10]. As a type of RNA sequence, the CRISPR locus contains spacers originating from bacteriophages and extrachromosomal elements, separated by sequences that are short, repeated, and able to encode small nonmessenger RNA [11]. Generally, it can be divided into a leader region, a repeat region, and a spacer region. CRISPR RNA (crRNA) derives from the precursor CRISPR transcript through processing by nucleic acid endonucleases; it can pair with complementary target sequences via the spacer at the 5′ end and trigger specific disruption of an invading sequence by Cas nucleases encoded by the Cas genes [12]. Thus, the decisive characteristic of the CRISPR-Cas system is its effectors, composed of crRNAs and Cas proteins, with the ability to recognize and disturb targeted sequences [13,14]. Compared with conventional tools such as zinc finger nucleases, recombinases, transcription activator-like effector nucleases, and restriction enzymes, the CRISPR-Cas system offers more advantages for use in OA therapy [15]. It has a more powerful ability to regulate gene expression and genome sequences, allows more precise insertion, knockout, and editing of targeted genes, and can induce more phenotypic protein expression [16]. Improved CRISPR-Cas systems can produce specific sequences rapidly and be used easily, promoting their application in gene therapy [11]. However, the application of the CRISPR-Cas system requires clarification of the molecular biology and genomic mechanisms to identify optimal editing sites. Although OA is a complex, multigenetic, and multitissue degenerative disease, researchers have explored its pathogenesis and structural degeneration comprehensively [17]. Senescence, inflammatory alterations, and the corresponding regulation of genes, proteins, and signaling pathways are key factors that induce the development of OA [1,18,19]. Once pathological signaling pathways are activated, changes such as excessive apoptosis [20], autophagy [21], pyroptosis [22], hypertrophy [23], metabolic disturbance [24], and abnormal differentiation [25] occur in chondrocytes. Combined with the influence of inflammatory mediators (e.g., proinflammatory cytokines), the processes of subchondral bone sclerosis, degeneration of the extracellular matrix, production of reactive oxygen species, and destruction of collagen are initiated [1,[26][27][28], and OA develops and progresses continuously, causing cartilage defects. Thus, OA is regulated by multiple signaling pathways and results from the deterioration of cell fate and the interaction of tissues such as cartilage and synovium. The signaling pathways and corresponding molecular products involved in these processes offer potential targets for the treatment of OA, making gene-editing therapies, especially the CRISPR-Cas system, potential tools for OA treatment.
In this review, we summarize the structure, mechanism, and function of the CRISPR-Cas system. Besides, we provide recent insights into OA gene therapy from the aspects of cellular senescence, inflammation, and cartilage repair, highlighting up-to-date research to summarize and predict potential developments. We also present reviews of and insights into tools for delivering the CRISPR-Cas system. Overview of current therapeutic strategies for osteoarthritis Both primary OA (caused by the degeneration of bone and cartilage tissue) and secondary OA (caused by trauma, inflammation, fracture, etc.) have a similar pathological mechanism: changes in molecules and the extracellular matrix (ECM) increase the levels of inflammatory cytokines and enzymes, which destroy cartilage structure and disturb the process of cartilage repair. Thus, the cartilage disappears, and the resulting direct friction between bones causes pain and even disability [29]. This dictates that the treatment of OA ultimately comes down to the control of inflammation and the repair of damaged cartilage. Until now, the conventional strategies for preventing exacerbation of OA have been primary therapies such as weight control, exercise control, and trauma prevention [30]. Other conventional therapies aim to relieve the symptoms. For example, nonsteroidal antiinflammatory drugs (NSAIDs) are often used to reduce patients' pain [31]. Besides NSAIDs, chondroitin sulfate is generally recognized as an effective nutritional factor that benefits cartilage. In addition to oral medications, intraarticular injections of lubricating agents, such as sodium hyaluronate, can reduce the increased interbone friction that occurs after injuries to articular cartilage, thereby relieving symptoms [29,32]. For patients with severe OA, surgery is the last choice of treatment [33]. Effective strategies include arthroscopic debridement, osteotomy, and ultimately arthroplasty. However, they carry the risks of iatrogenic injury, periprosthetic infection, and eventual joint revision [34][35][36].
To strengthen the effect of nonsurgical treatment and avoid the side effects and trauma of surgical treatment, as well as to fundamentally address the cartilage defects and other problems brought about by OA, cell therapies and gene therapies (sometimes combined) have been proposed. Culturing autologous chondrocytes in vitro and injecting them into joints in the form of articular cavity injections for cartilage repair has become a widely recognized option in recent years [37][38][39]. Meanwhile, owing to their multilineage differentiation, immunomodulatory function, low immunogenicity, and self-renewal ability, mesenchymal stem cells (MSCs) are becoming an emerging therapy of interest: they avoid passaging-induced chondrocyte dedifferentiation while taking full advantage of their important roles in tissue regeneration and repair in response to the cartilage deficits caused by OA [40,41]. Additionally, extracellular vesicles (EVs) secreted by MSCs have also been shown to promote ECM synthesis and cartilage repair [42]. Their therapeutic function is mainly achieved by effectively regulating the expression levels of inflammatory, catabolic, and synthetic genes, and by immunomodulation of cells and the microenvironment within the OA joint [43][44][45]. However, all such explorations must confront the question of whether chondrocytes and MSCs can effectively colonize, proliferate, and form mature cartilage tissue in a hostile OA environment. Furthermore, the cost of cell therapy, the risk of the additional surgery required to extract the cells, and the safety of clinical translation are all issues that must be balanced. Gene therapies are designed to regulate the expression of damaged genes (alone, or in combination with cellular therapies) with the goal of outperforming cellular therapies or conventional therapeutic molecules. As knowledge of OA continues to grow, gene therapy is advancing with it. The most accepted gene-related therapeutic regimen is the intraarticular delivery of various gene enhancers or inhibitors. For example, targeting IL-1β, which is involved in the pathological mechanism of OA, lowering its expression level, or blocking its receptor are considered effective therapeutic options. On this basis, IL-1 receptor antagonists are one of the most promising gene therapies; they can inhibit multiple signal transduction steps in the corresponding signaling pathway and effectively reverse disease progression in OA models [29]. Another idea is to highly express genes that promote cartilage synthesis in vivo. It has been shown that the use of insulin-like growth factor to promote proteoglycan synthesis in rabbit knee joints was effective for stimulating matrix synthesis in OA joints [46], and related studies targeting SOX9, FGF-2, and hyaluronan synthase 2 have shown therapeutic effects on OA [47][48][49]. Currently, theories based on various types of RNA dysregulation leading to OA have greatly facilitated the development of RNA-related gene therapies [29]. Several studies have reported that intraarticular injections of nonviral or viral vector-loaded miRNAs ameliorate pathological changes in OA [50][51][52], and using small interfering RNAs to specifically inhibit the expression of MMP13, which plays a major role in OA progression, has also been shown to be an effective gene therapy option [53]. It should be noted that miRNAs are susceptible to off-target effects, whereas siRNAs are more susceptible to degradation, making their effects relatively unstable. In addition, the effects of
utilizing RNAs are largely dependent on their effectiveness and specificity. These characteristics limit the application of noncoding RNA-based gene therapy [54]. In contrast, CRISPR-based approaches have shown greater potential owing to their high efficiency, weaker off-target effects, and versatility, which points to a new direction for gene therapy [7,16]. Structure, mechanism, and function of the CRISPR-Cas system According to the currently known CRISPR-Cas loci and mechanisms, existing CRISPR-Cas systems can be divided into two classes [55]. Class I includes type I and III systems, which are composed of heteromeric multiprotein effectors and carry out their biological function through a large multi-Cas-protein complex [14,56]. Conversely, type II, V, and VI systems belong to class II and are frequently used because they form a single multidomain effector [57,58]. CRISPR-Cas9, which recognizes and cleaves double-strand DNA (dsDNA) using a single DNA endonuclease, is the most widely utilized tool, benefiting from the specificity and programmability of RNA [59]. It is composed of a guide RNA (gRNA) and a Cas9 protein with nucleic acid endonuclease function; the gRNA guides the Cas protein to the target site, where the double-strand DNA is cleaved by the Cas protein and then repaired by endogenous pathways [60][61][62][63][64]. The realization of this process relies on a GC-rich protospacer adjacent motif (PAM, a short noncoding sequence adjacent to the protospacer on the target DNA), trans-activating RNA (tracrRNA), crRNA, and Cas9. The gRNA is formed by the combination of crRNA and tracrRNA, where the former identifies the targeted DNA sequence and the latter binds the Cas9 protein [57]. Cas9 has a recognition lobe (REC) containing a bridge helix and three helical domains, and a nuclease lobe (NUC) with a Topo domain, an HNH domain, a C-terminal domain (CTD), and a split RuvC domain. The RuvC domain is activated to cleave the DNA strand opposite the complementary strand (i.e., the nontargeted DNA), and the HNH domain is activated to cleave the DNA strand complementary to the crRNA (i.e., the targeted DNA) [65]. Subsequently, Doudna and Charpentier fused crRNA and tracrRNA into a single RNA and named it single-guide RNA (sgRNA) [66]. The improved CRISPR-Cas9 system provided revolutionary progress for gene therapy (Fig. 1 shows the timeline of the progress of the CRISPR-Cas system). The mechanism of action of the CRISPR-Cas9 system can be summarized as follows: Cas9 cuts the sequences on the targeted DNA under the guidance of the sgRNA, producing a double-strand break (DSB). The DNA is then repaired by the cell itself via processes involving nonhomologous end joining (NHEJ) and homologous recombination (HR) [12]. NHEJ directly shortens the distance between the ends of the broken strands and then rejoins them with the help of DNA ligase, whereas HR relies on DNA exchange between homologous chromosome regions [67]. NHEJ and HR have different characteristics, each with its own advantages and disadvantages; a specific comparison between NHEJ and HR is presented in Table 1 [68][69][70][71][72]. Besides Cas9, researchers have explored many new Cas proteins to develop favorable class II CRISPR-Cas systems.
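As a toy illustration of the targeting rule just described, the sketch below scans a DNA string for candidate protospacers. It is our own snippet, not from the review; the specifics (an NGG PAM and a 20-nt spacer, as used by the common SpCas9) are background knowledge rather than statements made above, and the sequence and names are hypothetical.

```python
import re

def find_spacer_candidates(dna: str, spacer_len: int = 20):
    """Scan the forward strand for SpCas9-style targets: a spacer of
    `spacer_len` nt immediately followed by an NGG PAM. Returns
    (start, spacer, pam) tuples. Real guide design would also check the
    reverse strand, GC content, and off-target scores."""
    dna = dna.upper()
    hits = []
    # Zero-width lookahead so overlapping NGG motifs are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= spacer_len:
            spacer = dna[pam_start - spacer_len:pam_start]
            hits.append((pam_start - spacer_len, spacer, m.group(1)))
    return hits

# Hypothetical sequence for illustration only.
seq = "ATGCGTACCGTTAGCTAGGCTTACGGATCCGGAATTCTGGACGTACGTAGCTAGCAGG"
for start, spacer, pam in find_spacer_candidates(seq):
    print(start, spacer, pam)
```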
Besides Cas9, researchers have explored many new Cas proteins to develop favorable class II CRISPR-Cas systems. For instance, Qi et al. introduced dead Cas9 (dCas9) in 2013 [73]. Mutations in the RuvC and HNH domains of dCas9 leave the protein with only its targeting function and abolish its nuclease activity. dCas9 thus acts as a tool for precise targeting and can form fusion proteins with other effectors [73,74], allowing the CRISPR-dCas9 system to target and regulate gene expression without causing DNA damage. Another explored approach is the CRISPR-Cas12 system, with 11 subtypes labeled a to k [75]. Cas12a, 12b, and 12f are the most commonly used. Cas12a prefers to recognize a T-rich PAM, rather than the GC-rich PAM of Cas9; it functions through a single RuvC domain and is guided by a single crRNA, whereas Cas12b is guided by both crRNA and tracrRNA [75,76]. In addition to the routine function of Cas proteins in cleaving dsDNA, Cas12a, 12b, and 12f can trans-cleave single-stranded DNA (ssDNA) without dependence on a PAM. Full exploitation of this ssDNase activity of Cas12 can thus provide sensitive, specific, and rapid new solutions for gene therapy and molecular diagnostics [77][78][79]. In contrast, the CRISPR-Cas13 system is a type VI system and has been identified as a potential tool targeting RNA [80].

Although the CRISPR-Cas13 system has been divided into seven subtypes (a, b1, b2, c, d, X, and Y), all types share similar single-effector Cas13 proteins with two distinct RNase activities: one to target and cleave the RNA sequence, and the other to preprocess crRNA [81][82][83]. In summary, numerous CRISPR-Cas13 systems have been developed and applied to RNA degradation, live imaging, nucleic acid detection, and base editing [84], and further progress on the CRISPR-Cas13 system will provide a new gene therapy and gene-editing platform for OA.

Biological and biomaterial-related delivery systems for the CRISPR-Cas system

Although CRISPR-Cas has been regarded as a revolutionary technology for gene editing and transcriptional regulation since 2012 because of unparalleled advantages such as precise editing of multiple targets, rapid generation of mutants, and the simple design of single-guide RNAs (sgRNAs) [85][86][87], its components must be delivered under stringent conditions by dedicated tools. Strategies to deliver CRISPR-Cas systems efficiently and safely have therefore become a problem that must be solved and innovated upon. The ideal delivery system for CRISPR components should be efficient, highly safe, stable, and nontoxic [88]. Conventional viral vectors are limited by oncogenicity, immunogenicity, compositional constraints, mass-production efficiency, and the lifespan of Cas expression, while nonviral vectors must address issues such as rapid clearance, toxicity, biocompatibility, and release of active ingredients [87,89]. In addition, a variety of abiotic delivery options are worth considering. Several current delivery systems are summarized in Table 2.
Viral delivery systems have the ability to integrate into the host genome, produce sustained effects, and deliver their cargo efficiently [90]. Among the variety of viral vectors, adenoviruses, adeno-associated viruses (AAVs), and lentiviruses play an important role in CRISPR-Cas-based genome-editing therapies and have been widely used in clinical models and trials [91]. As an 80–100 nm double-stranded DNA virus, adenovirus can itself carry up to 8 kb of exogenous DNA and can enhance transfection of the CRISPR-Cas system through additional targeting signals [92]. In addition, adenoviruses can infect both dividing and nondividing cells and effectively minimize off-target effects and unintended mutations [91,92]. In contrast, AAVs have an ideal packaging capacity of only 4.1–4.9 kb, and recombinant AAV must also contain additional regulatory elements for gene expression; because the CRISPR-Cas system itself may be much larger than this capacity, packaging efficiency is severely reduced and AAVs cannot be used for extensive gene regulation [90,93]. Another serious problem is that neutralizing antibodies against AAV in patients previously infected with AAV significantly reduce transfection efficiency [94]. The propensity of AAV to promote long-term Cas expression also increases the risk of off-target effects [95]. Nevertheless, AAV is often used as an in vivo transfection system and exhibits tropism for different organs depending on serotype and phenotype [90]. In general, the combination of capsid engineering and genome engineering yields AAV serotype vectors with reduced affinity for neutralizing antibodies and increased transfection efficiency [95]. Intra-articular injection of an adeno-associated virus expressing CRISPR/Cas9 components targeting the genes encoding MMP13, IL-1β, and NGF successfully achieved gene editing in a surgically induced OA mouse model [96]. Compared with adenovirus and AAV, lentivirus, a type of retrovirus, has low cytotoxicity and weak immunogenicity, with few side effects on transfected cells [90,97]. Although it, too, faces off-target difficulties owing to continuous Cas9 expression and the demands of high-precision genome editing, the use of integrase-deficient lentiviral vectors generated by integrase modification significantly reduces the risk of unintended mutations [98,99]. For all viral vectors, modifying the viral surface coat with glycoproteins, or deleting promoters or enhancers within terminal repeat sequences to avoid activating adjacent genes, are effective methods of improving the safety of viral-vector transfection and delivery [90].
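Returning to the AAV capacity constraint mentioned above, the packaging arithmetic is easy to make explicit. The sketch below uses rough, commonly cited ballpark element sizes (assumptions on our part, not figures from this review) to show why an SpCas9 expression cassette overruns a single AAV vector while the smaller SaCas9 ortholog can fit.

```python
# Minimal sketch of the AAV payload arithmetic. Element sizes are rough
# literature ballpark figures (assumptions, not values from this review);
# the point is only that SpCas9 plus regulatory elements exceeds the
# ~4.7 kb single-AAV packaging limit, while the smaller SaCas9 can fit.

AAV_LIMIT_KB = 4.7  # approximate single-vector packaging capacity (incl. ITRs)

def total_kb(parts: dict) -> float:
    return sum(parts.values())

spcas9_vector = {
    "ITRs": 0.3, "promoter": 0.5, "SpCas9_CDS": 4.2,
    "polyA": 0.2, "U6_sgRNA_cassette": 0.4,
}
sacas9_vector = {
    "ITRs": 0.3, "promoter": 0.5, "SaCas9_CDS": 3.2,
    "polyA": 0.2, "U6_sgRNA_cassette": 0.4,
}

for name, parts in [("SpCas9", spcas9_vector), ("SaCas9", sacas9_vector)]:
    size = total_kb(parts)
    verdict = "fits within" if size <= AAV_LIMIT_KB else "exceeds"
    print(f"{name}: {size:.1f} kb -> {verdict} the ~{AAV_LIMIT_KB} kb AAV limit")
```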
Nanoparticle delivery systems have revolutionized the field of genome editing alongside the rapid development of synthetic vectors, biomaterials, and cell engineering. Nonviral vectors are less limited by packaging capacity and minimize immunogenicity [100]. At the same time, Cas delivered by nonviral vectors tends to be expressed transiently, reducing the probability of insertional mutagenesis and the risk of nuclease-induced off-target effects [100,101]. Among artificially polymerized molecular nanoparticles, lipid nanoparticles (LNPs) have been widely used and are recognized as the mainstream option [16,90,101]. Lipid nanoparticles are essentially amphiphilic, bilayer, vesicle-like carriers composed of various hydrophobic and hydrophilic molecules that mimic cell membranes [102]. Owing to their efficient delivery and good biocompatibility, they have promising applications in the delivery field, and they are characterized as a targeted delivery system with cargo monitoring and reduced toxicity [103]. In particular, the ionic, polar head groups of cationic liposomes allow unstable, anionic nucleic acids to cross the cell membrane more readily, making them highly sought after for gene delivery, especially nuclear transport [90,101,102]. Liposomes prepared by Han et al. using microfluidics increased the encapsulation of terminal sgRNA to up to 85% [104]. Given the high bioavailability, biocompatibility, long lifetime in the blood circulation, and degradability of polymeric materials, nanodelivery systems in which a protein core carrying the CRISPR-Cas system is encapsulated by polymers are considered to have good prospects for effective gene delivery [105,106]. Although artificially polymerized molecular nanoparticles could offer a new delivery system for gene therapy, it remains unclear whether they can realize their advantages in the circulatory system, as local injection is usually considered for the treatment of OA.
Extracellular vesicles (EVs) as delivery systems for genetic components have received increasing academic attention [88]. As functional materials secreted by various natural cells under different external or internal conditions, EVs can themselves regulate biological processes while offering efficient, targeted, and biocompatible delivery through their phospholipid bilayer membranes and the abundant messenger molecules on their surface [107][108][109]. Therefore, both artificially modified and natural EVs are reliable and are expected to deliver CRISPR-related components with high safety. Hybrid exosomes formed by membrane fusion of chondrocyte-targeting exosomes with liposomes entered the deep region of the cartilage matrix in OA rats, delivering the Cas9 sgMMP-13 plasmid to chondrocytes [110]. However, accurate delivery of components via EVs remains problematic owing to various types of interference. EV-based delivery of CRISPR-Cas systems is still in its infancy, and multiple issues need to be addressed: (1) the standardization and engineering of EV preparation; (2) the uncertain interactions, pharmacokinetics, and biodistribution of EVs and the CRISPR components they carry; (3) clarification of the routes of administration of EVs; (4) the intrinsic bioregulatory functions of EVs, which mean that the homogenization of EV delivery systems across diseases cannot be ignored and that a trade-off must be made between generalizing EV types for broad categories of disease and developing targeted EVs for each disease; and (5) the need to consider organelle-specific EVs as a future research direction.

With the identification of structures, the exploration of mechanisms, and the development of platforms (Fig. 2), the CRISPR-Cas system has become an emerging technology that is receiving growing attention in the gene therapy field. The combined application of different CRISPR-Cas systems makes a variety of gene-editing strategies possible. In the OA gene therapy field, this revolutionary technology has considerable potential for diagnosis, reversing cellular senescence, improving inflammation, and promoting cartilage repair.

Fig. 2 The mechanism of the classical CRISPR-Cas system and the classification of CRISPR-Cas systems. CRISPR-Cas9 cleaves DNA through different structural domains of the Cas9 protein, and the cut DNA is repaired by both NHEJ and HDR to accomplish gene editing; CRISPR-Cas systems are in turn classified according to their Cas molecules.

Application of the CRISPR-Cas system for cellular senescence in the process of OA

Cellular senescence, known as a key risk factor in OA, is caused by multiple physical or pathological processes such as DNA damage, telomere shortening, oxidative stress, mitochondrial dysfunction, and sustained cytokine activation [118]. Apoptotic resistance, degeneration of the extracellular matrix (ECM), secretion of proinflammatory factors, and permanent arrest of proliferation are the common characteristics of senescence among various cell types, collectively identified as the senescence-associated secretory phenotype (SASP) [119]. The accumulation of senescent, nonreplicating chondrocytes triggers inflammatory pathways, affects oxidative stress, inhibits mitochondrial energy metabolism, and destroys the balance between synthesis and elimination within cartilage homeostasis [120][121][122][123].
Preclinical studies have shown that removing the SASP using gene-editing tools can attenuate the progression of OA [124]. As an emerging gene-editing tool, CRISPR-Cas technology offers the possibility of efficiently and precisely validating potentially relevant pathways and reversing cellular senescence phenotypes.

Common senescence-related genes include telomerase-related genes, which maintain chromosome stability and preserve telomere length [125]; fibroblast growth factor (FGF) family genes, which inhibit cellular senescence, oxidative stress, and stem cell failure and promote autophagy through multiple signaling pathways (e.g., insulin/IGF-1, WNT, p53/p21, and forkhead box) [126,127]; forkhead box subgroup O (FOXO) family genes, which act on oxidative stress, DNA damage, autophagy, and metabolism [128]; SIRT family genes, which affect genome stability, chronic inflammation, energy homeostasis, metabolism, and mitochondrial signaling pathways and interact with multiple other signaling pathways [129][130][131][132]; and the vascular endothelial growth factor (VEGF) pathway for vessel formation [133]. Since senescence-related genes have been extensively studied, the chondrocyte-associated senescence genes that promote OA progression are gradually being validated. Recent studies have shown that senescent chondrocytes during OA progression display two robust endotypes: endotype 1, with high expression of forkhead box protein O4 (FOXO4), cyclin-dependent kinase inhibitor 1B (CDKN1B), and RB transcriptional corepressor-like 2 (RBL2); and endotype 2, with potential therapeutic pathways involving vascular endothelial growth factor C (VEGFC) and the SASP [134]. The CRISPR-Cas system plays an important role in exploring and validating such potential pathways and therapeutic targets. Yes-associated protein (YAP), an actor in the Hippo signaling pathway, plays a key role in cartilage homeostasis and cellular senescence [135]; regulating its expression affects the integrity of the nuclear envelope, the transduction of cGAS-STING signals, and the formation of the SASP [136]. Fu et al. delivered a CRISPR-Cas9 system via lentivirus to knock out YAP in mice, verified its role in promoting the development of OA, and revealed the YAP/FOXD1 axis in regulating cellular senescence as one of the major molecular mechanisms of OA progression [137]. The same target-discovery protocol led Liu et al. to the CBX4 gene: using a CRISPR-Cas system, they constructed CBX4-knockout human mesenchymal stem cell (hMSC) models and found that CBX4 deficiency leads to cellular senescence, whereas its overexpression alleviates cellular senescence and subsequent osteoarthritis by maintaining nucleolar homeostasis [138]. Meanwhile, Jing et al. addressed the lack of genomic screening studies based on the CRISPR-Cas system by constructing a synergistic activation mediator (SAM) with CRISPR-based activation (CRISPRa) technology to screen aging-related genes relevant to OA progression. The results showed that SRY-box transcription factor 5 (SOX5) can activate geroprotective genes such as high-mobility group box 2 (HMGB2) and attenuate cellular senescence by triggering epigenetic and transcriptional remodeling. In a subsequent validation phase, they found that delivering SOX5 via lentivirus attenuated age-dependent OA in aged mice [139].
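The core computation behind a CRISPRa screen such as the SAM screen described above can be sketched in a few lines: normalize sgRNA read counts in the selected and control populations and rank guides by log2 fold change. This is an illustrative sketch only, not the authors' pipeline; the counts and guide names below are invented toy data, and real pipelines (e.g., MAGeCK) add statistical testing across the guides targeting each gene.

```python
# Toy sketch of CRISPRa screen scoring: reads-per-million normalization
# followed by per-guide log2 fold change (selected vs. control population).
# All counts and guide names are invented for illustration.
import math

def log2_fold_changes(selected: dict, control: dict, pseudo: float = 1.0):
    """Normalize each library to reads per million, then compute per-guide LFC."""
    def rpm(counts):
        total = sum(counts.values())
        return {g: 1e6 * c / total for g, c in counts.items()}
    sel, ctl = rpm(selected), rpm(control)
    return {g: math.log2((sel[g] + pseudo) / (ctl[g] + pseudo)) for g in selected}

selected_counts = {"SOX5_sg1": 900, "SOX5_sg2": 750, "CTRL_sg1": 100, "CTRL_sg2": 120}
control_counts  = {"SOX5_sg1": 150, "SOX5_sg2": 140, "CTRL_sg1": 110, "CTRL_sg2": 100}

for guide, lfc in sorted(log2_fold_changes(selected_counts, control_counts).items(),
                         key=lambda kv: -kv[1]):
    print(f"{guide}: log2FC = {lfc:+.2f}")
```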
In addition to serving as a discovery tool for potential therapeutic targets, CRISPR-Cas-based gene therapies directed at the different endotypes and their corresponding genes, phenotypes, and signaling cascades hold great promise. Conventional gene therapy for cellular senescence commonly means introducing exogenous complementary cDNAs into target tissues and cells to repair defective genes [140]. With the development of the CRISPR-Cas system, gene replacement, multiplex gene editing, and epigenetic-modification therapy have become possible strategies to slow or inhibit aging, which conventional gene therapy cannot achieve. For gene knockout, CRISPR/Cas technology eliminates the laborious process of synthesizing and assembling protein modules with specific DNA-recognition ability. Moreover, compared with TALEN and ZFN technologies, the design and synthesis of gRNAs for CRISPR/Cas require significantly less effort while exhibiting lower toxicity than ZFN technology [141][142][143]. These advantages have also been observed in the regulation of cellular senescence. Drawing on a disease mechanism resembling wound healing, Varela-Eirín et al. used the CRISPR-Cas9 system to specifically downregulate the expression of the gap junction channel protein connexin 43 (Cx43); this reduced the nuclear translocation of Twist-1 caused by the Cx43-mediated increase in gap junctional intercellular communication (GJIC) and inhibited the formation of SASPs through downregulation of p53, p16INK4a, and NF-κB, thereby retarding chondrocyte senescence and tissue remodeling [144]. Because senescence signaling pathways do not act in isolation, owing to interactions among multiple pathological processes such as inflammatory-factor release and excessive reactive oxygen species (ROS) formation, CRISPR-Cas gene therapy targeting senescence alone is not yet fully developed, and the core direction of use remains the exploration of possible and potential genes. Unlike the clearly defined inflammatory, hereditary-disease, or cancer genes in the common use scenarios of the CRISPR-Cas system, modifying specific senescence genes may lead to serious side effects or adverse reactions because of their complex signaling cascades and unclear mechanisms. Only senescence genes that have been validated by sufficient bioinformatic analyses, gene sequencing, and functional tests are sensible targets for treatment using the CRISPR-Cas system.
Application of the CRISPR-Cas system for inflammation in the process of OA

Inflammation in the cartilage and synovial microenvironment has been recognized as a key factor in the progression of OA since abnormally high levels of inflammatory plasma proteins were discovered in the blood and joint fluids of OA patients in 1959 [145]. High levels of complement, plasma proteins, inflammatory mediators, and cytokines are among the key features of OA [146]. For example, interleukin-1β (IL-1β), which is produced by chondrocytes, leukocytes, osteoblasts, and synoviocytes, can bind the IL-1 receptor (IL-1R) and activate transcription factors through the NF-κB and MAPK signaling pathways to regulate the inflammatory response, leading to the production of inflammatory mediators such as COX-2, PGE2, and NO and accelerating OA progression [147]. Additionally, tumor necrosis factor-α (TNF-α) is one of the most important inflammatory factors stimulating inflammation in OA: by regulating pathways such as NF-κB and PI3K/Akt, it stimulates the production of matrix metalloproteinase (MMP)-1, MMP-3, and MMP-13 by cartilage-, synovium-, and subchondral-bone-associated cells to break down cartilage collagen [147][148][149]. As a key inflammatory mediator that can synergize with TNF-α, IL-6 initiates signaling cascades through the regulation of the MAPK, STAT3, ERK, and other signaling pathways to promote OA progression [150,151]. In brief, each inflammatory mediator has its corresponding regulatory pathways, and genetic modulation of any target on these pathways using CRISPR/Cas-related techniques has the potential to significantly affect final OA progression. The multiple inflammation-related pathways are summarized in Table 3 [24, …]. Nowadays, as the implementation and development of disease-modifying OA drugs (DMOADs) are subject to a series of limitations [219], CRISPR-based targeted therapy directed at inflammatory mediators and related pathways during OA progression is of great significance.

Owing to the upregulation of IL-1β during the OA process, Zhao et al. attempted to ablate IL-1β to ameliorate disease progression [96]. After delivery of a targeting CRISPR-Cas system with an adeno-associated virus (AAV), histology and μCT analyses demonstrated that CRISPR-mediated destruction of IL-1β significantly relieved the symptoms of posttraumatic osteoarthritis (PTOA). The same targets and similar editing strategies were confirmed by Karlsen et al. [220]. Meanwhile, Dooley et al. identified and targeted the functional structural domain of IL-16 using the CRISPR-Cas system, delivering RNP complexes of recombinant Cas9 coupled to guide RNA into cells via electroporation [221]. This study demonstrates the regulatory role of the CRISPR-Cas system in targeting inflammatory factors for chondrogenic differentiation.
To address the problem of impaired cell regenerative capacity caused by the development of inflammatory conditions in the PTOA microenvironment, Bonato et al. improved the concept of cartilage tissue engineering through the CRISPR-Cas system [222]. Their study provided multivalent protection against signaling that activates the proinflammatory and catabolic NF-κB pathways by CRISPR-Cas9-targeted knockout of TGF-β-activated kinase 1 (TAK1). TAK1-knockout chondrocytes could efficiently integrate into native cartilage even under proinflammatory conditions. The results also demonstrated that TAK1-knockout chondrocytes secrete fewer cytokines, which in turn reduces the recruitment of proinflammatory M1 macrophages. Such targeted, CRISPR-Cas-engineered chondrocytes (cartilage tissues) for inflammatory conditions represent a new option for OA treatment. Notably, owing to the persistence of inflammatory factors in the OA synovium, inflammation-related changes in the microenvironment also affect a variety of autologous cell strategies by promoting fibrocartilage deposition [223]. In addition to engineering autologous chondrocytes by editing inflammation-related genes, another promising approach is to combine mesenchymal stem cells (MSCs) with the CRISPR-Cas system to attenuate inflammatory signals that promote ECM degradation, especially by targeting IL-1Ra [223][224][225]. Another commonly CRISPR-Cas9-edited, inflammation-associated stem cell is the induced pluripotent stem cell (iPSC), edited to improve immunomodulation in arthritis: CRISPR-Cas9-edited iPSCs lacking IL-1R are protected from IL-1-induced inflammatory responses and subsequent tissue degradation [226]. Recently, Brunger et al. used CRISPR-Cas9 technology and mouse iPSCs to construct an engineered iPSC with a dynamic negative-feedback loop [227]: by placing IL-1Ra or soluble TNFR1 (Tnfrsf1a) genes downstream of the Ccl2 promoter, the iPSCs synthesize anti-cytokine proteins upon IL-1 or TNF-α stimulation and effectively inhibit inflammation in a self-regulating fashion. The model has already been applied to inflammation in animal models of rheumatoid arthritis (RA) [228]. Considering that OA and RA are both osteoarticular inflammatory diseases involving the synovium and joints, this scheme may provide a new direction for gene therapy of inflammation in OA. With the growing understanding of the mechanisms of inflammation and the corresponding immune regulation, CRISPR-Cas9-mediated Treg therapies have improved arthritis treatment, although the delivery, lifespan, and plasticity of these cells in vivo remain unknown [229]. In summary, the directions of OA gene therapy for inflammation are to use CRISPR-Cas9 technology to (1) directly knock down overexpressed inflammation-related genes in existing cells, (2) engineer delivered chondrocytes by editing inflammation-related genes, (3) edit the genes of undifferentiated stem cells to make them antiinflammatory so that they can cope with the postdifferentiation inflammatory milieu, and (4) edit various genes of effector cells that perform immunomodulatory functions in inflammatory environments.
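The self-regulating circuit of Brunger et al. lends itself to a toy simulation. The sketch below is our qualitative illustration only, not a model from the paper: an inflammation-responsive (Ccl2-like) promoter drives an antagonist (IL-1Ra-like) whose accumulation feeds back to damp the inflammatory signal. All rate constants are arbitrary placeholder values.

```python
# Toy negative-feedback circuit: cytokine stimulus -> transgene expression
# -> cytokine blockade. Purely illustrative; all parameters are arbitrary.

def simulate(steps=200, dt=0.1, stimulus=1.0,
             k_expr=2.0, k_deg=0.5, k_inhib=3.0):
    antagonist = 0.0
    history = []
    for _ in range(steps):
        # effective inflammatory signal after antagonist feedback
        signal = stimulus / (1.0 + k_inhib * antagonist)
        # promoter output tracks the signal; antagonist decays first-order
        antagonist += dt * (k_expr * signal - k_deg * antagonist)
        history.append((signal, antagonist))
    return history

traj = simulate()
print(f"initial signal: {traj[0][0]:.2f}, steady-state signal: {traj[-1][0]:.2f}")
# The signal falls as the antagonist accumulates, mimicking the self-regulating
# behavior described for the engineered iPSCs.
```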
Application of the CRISPR-Cas system for cartilage repair in the process of OA

Cartilage defects are the most critical feature of OA progression [230]. Owing to the complexity of the cellular components in the microenvironment in which articular cartilage resides (e.g., chondrocytes, immune cells, endothelial cells, synoviocytes, adipocytes, and mesenchymal stem cells), the repair of cartilage defects is co-modulated by the intercommunication of multiple cytokines [231]. In particular, dysfunctional chondrocytes that have undergone a series of stimuli such as senescence and inflammation release excessive amounts of matrix-degrading proteases (typically MMPs and ADAMTSs) in response to the persistent stimuli of the OA environment; these enzymes induce the release of proinflammatory factors from neighboring cells, which further enhances enzyme activity, ultimately sustaining low-grade inflammation and local tissue damage [232]. Because of this vicious circle in the microenvironment, cartilage defects become increasingly severe and repair of the cartilage tissue is severely impeded. More importantly, although articular cartilage is durable, it lacks blood vessels, resulting in poor regeneration and limited intrinsic healing [233]. Existing cartilage repair strategies include microfracture, autologous chondrocyte transplantation, biomaterial-based scaffolding techniques for cartilage repair, and various tissue engineering techniques. However, no technique yet meets all the requirements for successful cartilage healing, i.e., appropriate bioactivity, structure-function relationships, and ECM organization [231]. Combining gene therapy, cell/tissue engineering, and biomaterials in crosslinked projects may therefore provide a promising direction. Among the various cartilage repair concepts that have emerged in recent years, the utilization of MSCs is currently one of the most promising [234]. Since research has clarified that chondrocytes are one of the many cell types that differentiate from MSCs [235], several current studies are exploring how to engineer MSCs appropriately to meet the needs of cartilage repair. One prevailing idea in this regard is to reprogram cells to give them special abilities [234]. CRISPR-Cas-based introduction of exogenous genes, regulation of gene expression levels, and engineering of MSCs for regenerative medicine have grown significantly. The core idea of engineering MSCs using CRISPR-Cas is to replace the diseased cells and integrate the engineered cells into the target tissue to achieve a therapeutic effect while avoiding an inflammatory response [236]. MSCs have the differentiation potential to receive physical, chemical, and biological stimuli for lineage commitment and, ultimately, directed differentiation, and the genes, transcription factors, microRNAs, and signaling pathways involved in this whole process can be activated or inhibited, which facilitates the application of the CRISPR-Cas system [237][238][239]. For example, RNA-guided nucleases (RGNs) in combination with the CRISPR system can be targeted to increase the expression of antiinflammatory factor genes in order to delay the progression of arthritis [240]. Aggrecan, type II collagen, and the transcription factor SOX9 are considered the major factors involved in the differentiation of MSCs into chondrocytes [232,237,241], and they can be targeted to enhance the potential of MSCs for cartilage repair.
The use of CRISPR-Cas9 technology can also delay telomere shortening and reduce histone deacetylation as well as DNA methylation [242][243][244]. Owing to its capability for multiplex gene editing, it can be used to promote chemokine-receptor expression, increasing MSC homing and adhesion to target tissues while also exerting an anti-aging effect [242]. These studies show great promise for genome editing by the CRISPR-Cas system in engineering stem cells for cartilage repair, but several questions regarding the ethical implications of cytogenetic manipulation still need to be resolved before its use in clinical practice.

In addition to countering aging and suppressing local inflammation to slow the progression of osteoarthritis and enhance cartilage repair, another important idea is to maintain chondrocyte homeostasis, enhance the differentiation of chondrocytes, and reduce the apoptosis of existing chondrocytes and the breakdown of differentiated cartilage components. Various types of RNA have been explored as potential therapeutic targets. Building on microRNA-140 (miR-140), known as a chondrocyte-specific endogenous gene regulator associated with osteoarthritis, Chaudhry et al. obtained highly efficient editing of the miR-140 locus using two sgRNAs in combination with dual RNP-mediated CRISPR-Cas9 transfection [245]. The results indicate that targeted removal of miR-140 can significantly change the expression levels of a variety of genes in chondrocytes, especially genes that require high removal levels before significant expression differences are observed. Nguyen et al. focused on the lncRNA DANCR, which induces differentiation of human synovium-derived stem cells into cartilage. Leveraging the superior target-binding and expression-upregulating ability of dCas9 compared with conventional Cas9, they successfully induced activation of DANCR in adipose-derived stem cells after screening, by packaging dCas9 and the corresponding DANCR-targeting gRNA into viruses for delivery, which provides a new idea for the repair of cartilage defects [112]. Additionally, since MMP13 has been identified as a major factor affecting type II collagen content, numerous studies have focused on how targeted knockdown of the MMP13 gene can ameliorate type II collagen loss. Sedil et al. used a CRISPR/Cas9-mediated gene-editing strategy to reconstruct human chondrocyte lines and achieved a stable reduction of MMP13 expression in chondrocytes; the CRISPR/Cas9-mediated reduction of total MMP13 secretion indirectly reduced the degradation of the ECM and increased the concentration of type II collagen [246]. Meanwhile, to solve the problem of CRISPR-Cas therapeutic molecules decomposing during delivery and to enhance the therapeutic effect, Liang et al.
used cartilage-targeted exosomes for direct delivery to knock down the MMP13 gene and achieved a more significant therapeutic effect [110]. The publication of this study suggests that CRISPR-Cas therapy has stepped into new territory. Classical targets also include aggrecan and type II collagen: studies have confirmed that using dCas9 to induce dual overexpression of the two can effectively achieve the deposition of sGAG and type II collagen, provide better support for the ECM, control chondrocyte growth and differentiation, and better regulate the cell phenotype [247,248]. Fundamentally, the original purpose of CRISPR-Cas was to modify mutated genes and thereby address the various hereditary diseases and cancers that result from genetic mutations; gene-mutation therapy based on this idea, achieving gene upregulation or the correction of mutations during cartilage repair, is a new avenue. Nonaka et al. used CRISPR to repair a functional single-base mutation in transient receptor potential vanilloid 4 (TRPV4); the mutation leads to an increase in calcium ions and ultimately to ectopic dysplasia. The experimental results demonstrated that the mutant group showed significantly accelerated chondrogenic differentiation and SOX9 mRNA expression [249].

Recently, OA has been increasingly recognized as also being a mitochondrial disease [250]. Compared with healthy cells, mitochondria from diseased chondrocytes show a significant increase in mass, reduced capacity of antioxidant enzymes, decreased activity of the respiratory complexes, and overproduction of reactive oxygen species (ROS) and reactive nitrogen species (RNS) [251][252][253]. Current studies demonstrate that these changes are highly correlated with mutations in mitochondrial DNA (mtDNA) [254]. Once such a mutation occurs, it readily compromises mitochondrial oxidative phosphorylation, resulting in mitochondrial dysfunction and damage [255]. In addition, mtDNA is susceptible to exogenous stimulation and has a high probability of mutation [250]. Although mitochondria can repair their own mtDNA through pathways such as double-strand break repair and base excision repair, it is unrealistic to maintain mitochondrial homeostasis under extreme environments (e.g., OA) through this fragile self-repair capacity [256,257]. Once such damage reaches a threshold, mtDNA damage leads to pathological phenotypic changes in mitochondria and lasting impairment of physiological functions, disrupting metabolism within chondrocytes [258,259]. Gene editing targeting mitochondria to treat OA therefore has promising prospects, for example: targeting mitochondria with peptide nucleic acids complementary to mtDNA templates to inhibit replication of mutant sequences [260][261][262], using mitochondria-targeted restriction endonucleases to alter DNA specificity and reduce genomic mutations [263], or using zinc-finger enzymes to recognize and eliminate the effects of mutations [264,265]. The emergence of the CRISPR-Cas system offers further potential for mtDNA editing and repair. Studies have used CRISPR-Cas9 to target COX1 and COX3 in mtDNA, achieving disruption of the mitochondrial membrane potential and inhibition of cell growth [266]. However, owing to the natural barrier that the mitochondrial double-membrane structure poses to sgRNA and the off-target risk of CRISPR itself, its further application needs more exploration [250,267].
Although the therapeutic application of mitochondrial genome editing in OA is still relatively unstudied, it may be possible to target the mutant mitochondrial genes underlying OA-associated oxidation by correcting altered phenotypes through CRISPR or by integrating suitable genes, even including differentiation- or regeneration-related gene sequences [250].

Prospects and conclusions

Since the emergence of CRISPR-Cas technology, it has played an important role in many fields such as the life sciences, medicine, and bioengineering, boasting unique advantages such as high precision, efficiency, simplicity, and broad applicability from therapeutic interventions to agricultural enhancements. However, the following challenges still need to be solved: (1) off-target effects of CRISPR and the resulting safety issues; (2) crosstalk caused by the complex gene regulation of OA and the still-unspecified multiple potential target genes; and (3) inefficiency of gene editing in individual chondrocytes. Orthopedic researchers are working hard to apply this cross-generational tool to their fields. Although its large-scale applications are currently limited to tumors and congenital or genetic diseases, some researchers still hope to broaden the boundaries of its use to address the increasing severity of OA and the underlying cartilage repair problem, with a view to conquering this "cancer that never dies."

Given that OA occurs and progresses because of cellular senescence and apoptosis under natural or stressful conditions, as well as inflammation, including trauma, this paper reviews the relevant mechanistic pathways and the current applications of CRISPR-Cas technology in reversing OA-associated cellular senescence, improving the inflammatory microenvironment, and thereby promoting cartilage repair (Fig. 3). In general, the main methods of CRISPR-Cas technology for OA gene therapy are (1) in vivo injection of the CRISPR-Cas system to change the phenotype of existing cells or reduce the formation of related harmful metabolites; (2) in vitro gene editing of chondrocytes, synoviocytes, or various types of senescent cells, which are then reimplanted into the organism for therapeutic purposes; (3) engineering of undifferentiated stem cells, such as MSCs, to endow them with antiinflammatory, anti-aging, and rapid, directed chondrogenic differentiation capacity, so that they can survive in the extreme environment of OA, repair the inflammatory microenvironment, and rapidly differentiate into chondrocytes for repairing damaged cartilage (Fig. 3); and (4) genetic editing of the mitochondrial DNA of damaged chondrocytes to improve or even reverse the energetic homeostasis of the damaged cells and maintain the cellular lifespan.

Fig. 3 An overview of strategies for OA treatment based on the CRISPR-Cas system. The CRISPR-Cas system treats OA through three main pathways: inhibiting the release of senescence-associated factors and regulating senescence-associated immune processes; implanting gene-edited stem cells and chondrocytes in vivo to enhance their function; and modulating the inflammatory pathways involved in the process of OA.

Owing to ethical issues, fundamental embryo editing to create an "OA-free" population is unavailable. More randomized controlled trials (RCTs) and follow-up should be conducted to prove safety and efficacy, as well as to alleviate concerns based on ethical issues. Currently, the application of CRISPR-Cas in the field of the musculoskeletal system is mostly focused on rheumatoid arthritis with synovial membrane damage and on various types of bone tumors. The reasonable use of relevant vectors to knock down disease-causing genes or overexpress antagonist genes, achieving eradication at the transcriptional level, significantly improving efficacy in inflammatory or immune diseases, and obtaining specific phenotypes by knockdown all deserve further research effort. Although OA is affected by multiple factors, the relevant target factors are gradually being validated one by one. Broadening the boundaries of OA gene therapy beyond these avenues holds broad prospects and great research value.

Fig. 1 Timeline and overview of the development of the CRISPR-Cas system

Table 2 Some current delivery systems for the CRISPR-Cas system

Table 3 Inflammation-related signaling pathways in the progression of OA
9,888.2
2024-05-02T00:00:00.000
[ "Medicine", "Engineering", "Biology" ]
Localization of cassava brown streak virus in Nicotiana rustica and cassava Manihot esculenta (Crantz) using RNAscope® in situ hybridization Background Cassava brown streak disease (CBSD) has a viral aetiology and is caused by viruses belonging to the genus Ipomovirus (family Potyviridae), Cassava brown streak virus (CBSV) and Ugandan cassava brown streak virus (UCBSV). Molecular and serological methods are available for detection, discrimination and quantification of cassava brown streak viruses (CBSVs) in infected plants. However, precise determination of the localization of viral RNA in infected host tissues has not been possible for lack of appropriate methods. Results We have developed an in situ hybridization (ISH) assay based on RNAscope® technology that allows the sensitive detection and localization of CBSV RNA in plant tissues. The method was initially developed in the experimental host Nicotiana rustica and was then further adapted to cassava. Highly sensitive and specific detection of CBSV RNA was achieved without background, and no hybridization signals were observed in sections prepared from non-infected tissues. The tissue tropism of CBSV RNAs appeared to differ between N. rustica and cassava. Conclusions This study provides a robust method for CBSV detection in the experimental host and in cassava. The protocol will be used to study CBSV tropism in various cassava genotypes, as well as CBSVs/cassava interactions in single and mixed infections. Background Cassava brown streak disease (CBSD) is caused by two distinct virus species, Cassava brown streak virus (CBSV) and Ugandan cassava brown streak virus (UCBSV), both members of the genus Ipomovirus in the family Potyviridae [1]. The viruses are the most devastating pathogens of cassava (Manihot esculenta) in Africa, threatening cassava cultivation particularly in East and Central Africa [2]. The viruses have single-stranded RNA genomes of about 9000 nt and, while genetically distinct, they cause similar symptoms in the leaves, stems and root tissues of cassava [1,3,4], including leaf chlorosis, brown streaks on stems and necrosis of root tubers [5]. The natural host range of CBSVs is restricted to M. esculenta and Manihot glaziovii, a perennial species related to cassava, but other natural hosts that could serve as sources of viral inoculum may exist [2,6]. Immunological and molecular techniques for the detection of CBSVs have been developed. Enzyme-linked immunosorbent assays (ELISA) based on monoclonal antibodies [7], reverse transcription-polymerase chain reaction (RT-PCR) [8][9][10] and quantitative RT-PCR [11][12][13][14] are routinely used in diagnosis of the viruses. While these methods are sensitive, reproducible and robust for virus detection in a given cassava sample, accurate quantification of CBSVs in cassava is hampered by the uneven distribution of the virus in the plant [15,16], making comparative studies very difficult. The disease caused by cassava brown streak viruses is the subject of intensive research in many institutes around the world, and research on the causative viruses is a key topic in the DSMZ Plant Virus Department. In particular, we are interested in following the movement of cassava brown streak viruses in cassava to study tissue invasion and the possible association of CBSV with specific plant tissues and organs. This approach aims to investigate a possible correlation between the virus loads in leaf, stem and tuberous root tissues and the extent of necrotic brown streak symptoms in root tissues.
In situ hybridization of CBSV RNA in cassava tissue sections requires highly sensitive methods to detect even small amounts of RNA. To address this aim, we have developed an in situ hybridization (ISH) method based on RNAscope® technology, allowing detection and localization of RNA targets with high specificity and sensitivity [17,18]. The RNAscope® technology developed by Advanced Cell Diagnostics (ACD; Hayward, CA, USA) is based on a unique probe design and signal amplification strategy that results in high specificity and sensitivity. RNAscope® has been mostly used in clinical studies with human and animal tissues [17][19][20][21][22]. Recently, two studies have also used RNAscope® in plant tissues: for the sensitive localization of messenger RNAs (mRNAs) coding for C4 photosynthetic enzymes in maize leaves [23] and for the simultaneous visualization of two isolates of Citrus tristeza virus (CTV) in the petioles and root tissues of citrus [24]. In this manuscript, we present a protocol for preparation of tissue sections from CBSV-infected Nicotiana rustica and cassava plants. We describe the optimal conditions for the ISH assay and provide a robust method for CBSV detection in tissue sections of its experimental host and cassava. The method represents a significant technical advancement enabling studies of CBSV-infected cassava organs and tissues to further advance our understanding of the mechanisms of CBSV infection and disease development. Plant material and virus inoculations The experimental host N. rustica and the cassava cultivar Tropical Manihot esculenta 7 (TME7) were used for this study. The plants were grown in a glasshouse at 24 to 26°C. Virus infections were established using the virus isolate CBSV-Mo83 (DSMZ PV-0949), which was isolated from naturally infected cassava collected in Mozambique [1]. CBSV-Mo83 was transmitted to N. rustica by mechanical inoculation. N. rustica plants at the three-to-four-leaf stage were inoculated with inoculum prepared by grinding CBSV-Mo83-infected N. benthamiana leaves in 0.05 M phosphate buffer (0.05 M Na3PO4, 1 mM EDTA, 5 mM DIECA, 5 mM thioglycolic acid, pH 7.0) at a ratio of 1:20 (w/v). Symptoms developed after 7 days, and virus infection was confirmed by RT-PCR [1]. In cassava, CBSV-Mo83 was routinely maintained in var. TMS 96/0304 and propagated by cuttings or, for new infections, by grafting. Virus infections in TME7 were established by grafting buds of CBSV-Mo83-infected cassava var. TMS 96/0304 onto cassava plants grown from tissue culture. Virus infections were confirmed by symptom development and RT-PCR approximately four weeks after inoculation. For the ISH assays of N. rustica, stem sections were prepared from healthy and CBSV-infected plants at 14 dpi. Stem tissues (~5 mm in diameter and length) were cut using a sterile razor blade and placed into 10% neutral buffered formalin fixative solution (Sigma-Aldrich, St. Louis, MO, USA). Incubation was performed for 45 min at room temperature (RT) under vacuum conditions, followed by a fixative exchange and a 45 min incubation period, after which the fixative was exchanged and the samples were incubated for 16 h. The samples were subsequently washed two times in DEPC-treated phosphate-buffered saline (PBS, pH 7.4) for 15 min and dehydrated by incubation in increasing ethanol concentrations (30%, 50%, 70%, 95%, 100%) for 30 min at each concentration.
After dehydration, the tissues were directly embedded into a low-melting agarose solution (5% w/v) (Serva Electrophoresis GmbH, Heidelberg, Germany). The agarose was melted in a microwave and cooled to approximately 40°C, and tissue samples were placed into the gel in the desired orientation. Semi-thin (10 μm) cross-sections were cut using a Microm HM 650 V vibrating blade microtome (Thermo-Fisher Scientific, Pittsburgh, PA, USA) and applied to Superfrost Plus slides (Thermo-Fisher Scientific). Sections were allowed to dry on the slides overnight at RT and then baked in a hybridization oven for 1 h at 60°C. Dried slides were stored in a covered box with silica gel before proceeding with the ISH assays. Fixation, embedding, and sectioning of cassava Leaf, stem and petiole explants (~5 mm in length; stem diameter ~4 mm; petiole diameter ~1.5 mm) from healthy and CBSV-infected cassava were fixed following the same procedures as described for N. rustica. For infected plants, leaf and petiole samples were collected from symptomatic leaves showing chlorosis, and stem samples from stems showing brown streaks. Because of the nature of cassava stem tissues (hard cortex and soft central medulla), different embedding media were tested, including low-melting agarose (SERVA Electrophoresis GmbH, Heidelberg, Germany), low-melting polyester wax (Plano GmbH, Wetzlar, Germany) and Paraffin Paraplast Plus (Sigma-Aldrich). Embedding of cassava samples in low-melting agarose and tissue sectioning were performed as described above for N. rustica. For embedding in low-melting wax, the tissues were infiltrated with wax using increasing concentrations of wax solubilized in ethanol. Tissues were incubated in ethanol/wax mixtures (2:1, 1:1, 1:2 (v/v), pure wax) at 40°C for 1 h at each concentration and then transferred into pure low-melting polyester wax in Peel-A-Way molds (Sigma-Aldrich). Embedding of tissue samples in Paraplast Plus paraffin was performed using a sequential-step protocol by infiltrating plant tissues at RT in ethanol/xylene mixtures at 2:1, 1:1, and 1:2 (v/v) and pure xylene for 45 min in each mixture. The tissues were then infiltrated with xylene/paraffin mixtures at 2:1, 1:1, 1:2 (v/v), followed by pure paraffin, with 1 h at each step in an oven at 60°C. The samples were then transferred and embedded into pure paraffin in Peel-A-Way molds. Samples in low-melting wax and paraffin molds were allowed to cool to RT and stored at 4°C overnight prior to sectioning. The blocks were trimmed to a suitable size, and cross-sections of 10-15 μm were prepared using a Microm HM 355 rotary microtome (Thermo-Fisher Scientific). After sectioning, the obtained ribbons were placed in a water bath at 37°C and then placed on Superfrost Plus slides. Sections were completely dried and baked for 1 h at 60°C. After baking, sections were deparaffinized in xylene (two times, 5 min each wash), washed in absolute ethanol, and stored in a covered box with silica gel before proceeding with the ISH assays. Optimization of RNAscope® ISH procedure The ISH assay was performed using the ACD RNAscope® 2.5 HD Detection Reagent-RED kit (cat. no. 322360). A reference RNAscope® hybridization protocol provided by ACD (http://www.acdbio.com/technical-support/user-manuals) was essentially followed, modifying the pre-hybridization treatment conditions and the washing and signal amplification steps to achieve optimal results (Table 1). Prior to the ISH assay, slides were baked for 30 min at 60°C.
A hydrophobic barrier was created around the sections with an ImmEdge hydrophobic barrier pen (Biozol diagnostica Vertrieb GmbH, Eching, Germany). Sections were treated with hydrogen peroxide for 10 min at RT to inhibit endogenous peroxidases. A second pretreatment step was performed by incubating the tissue sections in target retrieval buffer maintained at boiling temperature (100°C to 102°C) for 5 or 15 min, a step required for breaking the crosslinks introduced upon fixation. The tissue sections were completely dried overnight (RT) and treated with a broad-spectrum Protease Plus solution at 40°C to make the RNA accessible. The CBSV-Mo83 probes were hybridized at 40°C in a HybEZ II oven for 2 h. As a control, the CBSV-Mo83 probe was hybridized with sections prepared from mock inoculated/grafted plants. Probe hybridization was followed by serial amplification steps (AMP1 to AMP6), as recommended by ACD, testing varying times (10, 15 or 30 min) for the AMP5 step. All washing steps following hybridization and during amplification consisted of two or three incubations in washing buffer (provided with the kit) for 5 min at each step. A final hybridization step using an alkaline phosphatase-labelled probe was followed by incubation with Fast-Red substrate, which resulted in red precipitates. Slides were washed in water, counterstained with 50% hematoxylin (Sigma-Aldrich) for 2 min, then rinsed several times in distilled water. Sections were dried at 60°C for 45 min, submerged in xylene, and covered with Eco-Mount mounting media (Biocare Medical, Pacheco, CA, USA) and a 24 × 50 mm microcover glass. Once mounted, the sections were air-dried for at least 10 min at RT prior to imaging. Imaging Imaging of the tissue sections was performed using an Olympus SZX16 stereomicroscope (Olympus Deutschland GmbH, Hamburg, Germany) and a Zeiss Axioscope.A1 (Zeiss, Jena, Germany). To improve imaging, the "D" setting of the modulator disk of the Axioscope.A1 was also used for acquisitions in dark field. Fluorescence microscopy was performed using a Leica SP8 confocal microscope (Leica Microsystems, Wetzlar, Germany) with a 20X/0.75 IMM objective using the following settings: 561 nm excitation and a 631-651 nm emission window. Images from transmitted light were also collected during the acquisitions to allow overlay of the different channels. Images were processed using the Huygens deconvolution software (Scientific Volume Imaging, Hilversum, The Netherlands).
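The published workflow ends at image acquisition; a natural downstream step would be automated counting of the Fast-Red precipitate dots in the acquired images. The sketch below is a hypothetical illustration using OpenCV in Python, not part of the protocol: the HSV thresholds, the minimum dot area, and the file name "section.png" are placeholder assumptions that would need tuning against real slide images.

```python
# Hypothetical sketch: count Fast-Red signal dots in a brightfield section
# image. Thresholds, minimum area, and file name are placeholder assumptions.
import cv2

img = cv2.imread("section.png")  # BGR brightfield image (placeholder file)
if img is None:
    raise SystemExit("section.png not found")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red hue wraps around 0 in HSV, so combine two ranges (assumed thresholds).
lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
upper = cv2.inRange(hsv, (160, 80, 60), (180, 255, 255))
mask = cv2.bitwise_or(lower, upper)

# Keep only blobs large enough to be genuine precipitate dots.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
min_area = 5  # pixels; placeholder, depends on magnification
dots = [c for c in contours if cv2.contourArea(c) >= min_area]
print(f"detected {len(dots)} candidate signal dots")
```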
Results and discussion Localizing viruses in host tissues and organs can provide fundamental details on virus infection processes as well as on the host responses to virus invasion. Immunohistochemistry (IHC) and ISH are powerful techniques that allow the detection of target molecules in tissues and cells. ISH was originally developed to localize specific DNA sequences on chromosomes and was later adapted, using various probe modifications and labels, to detect mRNAs and other RNA targets [25,26]. ISH has been extensively used for in situ studies of plant DNA and RNA viruses in different hosts, and simple, rapid and inexpensive methods allow localization of viruses in plants and in insect vectors [27][28][29][30][31][32][33][34][35][36][37][38][39]. Because an in situ hybridization method for CBSV RNA had not been described, we developed an ISH protocol based on RNAscope®. Infection of CBSV-Mo83 in N. rustica resulted in stunting, leaf curling and chlorosis, while infected cassava plants showed typical brown streak symptoms on the stems and leaf chlorosis. In the two hosts, semi-thin cross-sections of 10-15 μm were obtained using different embedding media. Stem sections of N. rustica were prepared from low-melting agarose-embedded tissues (Fig. 1), because this method was straightforward and did not require handling of hazardous chemicals. For cassava tissue samples, the low-melting agarose did not provide sufficient mechanical support to obtain sections of the desired thickness, as stem tissues have a hard external cortex and a soft internal medulla that present abrupt changes of resistance to the cutting knife, resulting in shattered cuts. The low-melting wax-based sectioning also did not result in satisfactory sections because of shredding and tearing of the wax ribbons, producing only partial, fragmentary sections. Finally, the paraffin-based method was successful for sectioning of cassava tissues, resulting in consistent and uniform sections for all tissue types (Fig. 1), and was therefore chosen for the ISH experiments. The ISH assay conditions were first optimized for CBSV detection in N. rustica and subsequently adapted for the cassava experiments. Table 1 summarizes the different ISH assay conditions tested. Key steps in the baseline ACD protocol were modified, including target retrieval, protease treatment, probe concentration, amplification steps, washing and substrate incubation. We found that reducing the protease incubation time and the AMP5 step and increasing the washing steps resulted in a significant reduction of the background. The concentration of the probes was also critical in N. rustica section hybridization: 1:10 and 1:40 dilutions of the stock provided with the kit resulted in minimal background in infected sections and no signal from healthy controls. In cassava, hybridizations were performed using undiluted probes, showing that the optimal probe concentration needs to be determined for each specific virus/host combination. The optimized incubation conditions for N. rustica, also applicable to ISH of cassava tissues, consisted of a 15 min target retrieval pretreatment, 10 min peroxidase treatment, 15 min protease incubation and 10 min AMP5 incubation. The optimal substrate reaction lasted 2 min for N. rustica and 8 min for cassava. The final protocol is summarized in Fig. 2 and allowed detection of CBSV RNA in the tissue sections of both hosts, without signal in healthy controls.

Fig. 2 Overview of the RNAscope® protocol for CBSV ISH in N. rustica and cassava tissues

The conditions determined for ISH are well suited to sensitive and specific RNA detection, and the protocol provides a good reference for investigating other virus/host combinations. Imaging of at least 10 sections for each condition revealed that CBSV RNA was widely distributed throughout the stem tissues of infected N. rustica, as indicated by distinct red dots in different tissues, including the phloem, cortex and pith cells, which occasionally formed clusters (Fig. 3A, panels d,e,f). The red signal was completely absent in healthy controls (Fig. 3A, panels a,b,c). Since the chromogenic red precipitate can be imaged by fluorescence microscopy, we also examined the sections using confocal laser scanning microscopy, and CBSV RNA could be very clearly visualized (Fig. 3B). In cross-sections of infected cassava stem tissues, the red dot signal was less abundant and typically appeared as clusters of dots in phloematic tissue surrounding the xylem (Fig. 4c-h, arrows) and occasionally in cortical tissues (Fig. 4c).
There was no signal detected in sections of the healthy controls, indicating the absence of any background due to non-specific hybridization (Fig. 4a,b). In cross-sections of infected cassava leaf petioles, CBSV RNA was associated with phloematic tissues (Fig. 5c-f, open squares), and there was no signal detected in the healthy controls. In cross-sections of infected cassava leaves, viral RNA was detected in palisade, mesophyll and midrib tissues (Fig. 6c,d), while there was no signal in leaves of non-infected plants (Fig. 6a,b). To improve signal detection, we examined the sections using dark-field acquisition settings, which significantly improved signal detection and were particularly useful for detecting weak signals (Fig. 6g,h). Overall, a preliminary examination of CBSV-infected stem sections from N. rustica and cassava showed that viral RNA was highly abundant in the phloematic and non-phloematic tissues of N. rustica, while in cassava, CBSV RNA appeared more localized around phloematic tissues. It now remains to be investigated whether this difference was due to a considerably lower virus load in cassava compared with N. rustica or was a result of a different tissue tropism, as has also been shown for other viruses infecting cassava [39]. While further studies are pending, our results show that the ISH RNAscope® assay has the high resolution required to study virus invasion in cassava cultivars with differential responses to virus infection. The unique probe design and high resolution of RNAscope® also allow detection of multiple targets, as shown previously for two distinct Citrus tristeza virus strains in double-infected plants [24]. We are now investigating mixed infections between cassava brown streak virus species and strains, as well as mixed infections between CBSV and the viruses causing cassava mosaic disease. It will be interesting to further combine RNAscope® ISH with IHC [40][41][42] to reach a more complete representation of the tissues and the interacting partner molecules. Conclusions We provide a protocol for the detection and localization of CBSV in tissue sections of N. rustica and cassava using a highly sensitive ISH technique based on RNAscope® technology. The assay allows in situ hybridization of CBSV RNA in different plant tissues and provides a robust basis for studying CBSV tropism in cassava.
4,195.8
2018-08-14T00:00:00.000
[ "Biology", "Environmental Science" ]
Eye/Head Tracking Technology to Improve HCI with iPad Applications In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this capability operating in the background, the user can control existing or new iPad applications by moving their eyes and/or head. Many techniques are currently in use to detect facial features such as the eyes, or even the face itself. Open-source libraries such as OpenCV exist for this purpose, enabling very reliable and accurate detection algorithms, such as Haar cascades, to be applied from very high-level code. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes, and with/without glasses). These tests assessed user/device interaction and ascertained whether the system works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar cascade was highly effective, detecting faces in 100% of cases, unlike the eyes and the pupil, where interference (light and shade) reduced effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging, and these systems may continue to be developed, extended, and updated in the future. Introduction and Background In recent years, the industries involved in the production, sale, use and servicing of smartphones and tablets have grown exponentially, with these smart devices becoming a feature of many people's everyday lives. However, the research and development behind much of this technology has not taken into account the interaction needs of certain user groups, such as people with disabilities and cerebral palsy. Human Computer Interaction (HCI) is a discipline that studies information exchange between people and computers through software. HCI mainly focuses on designing, assessing, and implementing interactive technological devices that cover the largest possible number of uses [1]. The ultimate goal of HCI is to make this interaction as efficient as possible: minimizing errors, increasing satisfaction, lessening frustration, including users in development processes, working in multidisciplinary teams, and running usability tests. In short, the goal is to make interaction between people and computers more productive. New technologies have brought about a wave of health-related developments, and by using HCI they meet the needs of different groups (people suffering from cerebral palsy, autism, Down syndrome, the elderly, etc.) [2].
Although these advances were unthinkable just a few years ago, they are gradually becoming a part of people's daily lives [3], and thanks to concepts such as ubiquitous computing, attempts are being made to integrate IT into the individual's environment so that all users may interact naturally with their devices, extending forms of interaction beyond the classic ones, namely the mouse, keyboard, touch screen, synthesizers, voice recognition, etc. The touch screen is the device most used to interact with mobile devices, and this is not always easy, as user psychomotor activity is clearly affected in disabled persons with class 3 and 4 functional capacity; there are also major problems related to sensitivity, cognition, communication, perception, and behavioral disorders. The technologies that can help to overcome the limitations of users with special needs (such as cerebral palsy) are those that do not involve any physical action on the part of the user (hands or fingers). Other aspects, such as the user's eyes and face, provide data that can be interpreted by certain processing technologies. All the data obtained are combined to produce a system that does not depend on the touch screen and is therefore adapted to the physical needs of some of the groups referred to above. Eye tracking is currently used in many fields, such as health and commercial studies. The process consists of measuring either the focus of attention (gaze) or eye movement in relation to the head. An eye tracker is a device for measuring the position of the eyes and eye movement [4]. The range of applications is vast, some of which include [5]: a human-computer interaction tool for the physically disabled, ergonomic studies, enhancement of sports performance, the clinical area (clinical diagnoses and correction of defects), leisure and videogames, and advertising and design studies. Although technologies are in a state of continuous change, it seems that eye-tracking systems have still not undergone significant changes. At present, users can choose between a remote monitoring system, which implies a restriction of movements, or a fixed system mounted on the user's head (an uncomfortable and rather impractical arrangement) [5,6]. The main problem with these eye-tracking systems is the limited range of (commercial) devices available on the market, which means that their prices are exorbitant, as shown in Table 1 (since eye trackers depend on a PC or another device), and they are therefore in many cases inaccessible for the users who need them. Most of the devices shown in Table 1 are not only very expensive but also common in research projects, although they are not widely used in commercial applications, as the functions they offer go beyond interaction with the PC itself. The study of eye movement is widespread in different sectors and applications, as can be seen in Table 2. As can be seen in Table 1, there has been interest in eye tracking for some years now, with the first examples appearing in the 1990s, specifically 1996 and 1999. Subsequently, eye tracking has been used mainly for usability and accessibility studies and, more recently, in 2014, in combination with information deriving from other devices [17]. Eye-tracking technology is therefore widely applied, although hardly used in mobile devices, as can be seen in Table 3.
Projects that make use of eye/head tracking in a given environment such as a tablet or smartphone are practically non-existent (2012-2014), and it is precisely interaction with these socially widespread devices that has become a requirement for the previously mentioned groups. Evidently, there are difficulties that need to be identified and made known, such as the features of cameras and screens, although this is technically possible nowadays. Thus, in this project, eye tracking is integrated into the system itself (iPad). A mobile computing system has been developed that makes use of mobile hardware and software. This system makes it possible to send data (image processing) via the iPad without having to be connected to a fixed physical link. With the aim of reducing the restrictions attached to the eye trackers mentioned above, open-source libraries and the tablet's front camera have been used, which cover the following points: -They avoid depending on an external sensing system by using the built-in camera. -They minimize costs. -They increase overall performance by integrating everything into the system. -A special design is obtained for different groups of users. -They have a tablet application. Proposed Methods This section contains a description of the materials used to develop the system, the tests run with users, and the development methodology. Components The components used are described below. They mainly consist of the hardware and software that make up the system, the users taking part in the tests, and the questionnaires used for the tests. (A) Hardware The device used is the iPad tablet (Apple), more specifically the iPad 3. The portability, performance, and design of the system itself were of the utmost importance in the choice of device. In addition, the experience gained by the authors in previous studies [19,20], in which satisfactory results were obtained, has also been of great assistance. Furthermore, being highly intuitive and interactive, the device is an extremely suitable tool for working on different skills with disabled users [20]. As for the sensing system, the camera, no external hardware was needed: the iPad's own integrated camera could be used. This is a front camera (FaceTime HD), which, despite only having 2 Mpx, is sufficient for the processing involved in this project (although its quality is not comparable to that of commercial systems). The iPad front camera is not designed for specific developments, so Apple does not provide detailed information about the sensor, but the official OpenCV webpage specifies that the camera is suitable for real-time processing. (B) Software The iPad's base system is iOS 7. Although the software also works in previous versions, it is specially designed for iOS 7, taking advantage of the new possibilities offered with regard to resources and performance. The Xcode software development environment was therefore used, with the Objective-C language. On the other hand, the OpenCV open-source library was used for the ocular processing. This library provides plentiful resources for both simple and advanced processing, its performance being a significant advantage. (C) Participants' Description Twenty-two individuals in total took part in the tests. Twelve people had dark eyes and the remaining ten had light-colored eyes (blue or green).
Age and gender were decisive factors when selecting participants, and the test was also conducted on eight people with glasses (of different colors) in order to ascertain the robustness of the system. In this first phase, the tests to calculate the precision and reliability of the system were conducted with non-disabled users. In future tests, the authors plan to work with disabled people, because ultimately they will be the main beneficiaries of the apps developed, including the proposed library. Defining the Venue As regards the venue, tests were carried out in well-lit places (without direct lighting) so as to prevent any interference in processing. In terms of user position, participants were asked to be either seated or standing, keeping the back straight, head up, and looking straight ahead beside a window providing natural light. Figure 2 illustrates the position used in the tests, in the course of which the iPad was placed at a distance of between 20 and 30 cm from the user. Keeping the head up and looking straight ahead also avoided any shadow that might be cast by hair or eyebrows if the head were more tilted. The iPad was held in the hands during the tests (as the system does not need to be calibrated), although in an ideal situation it would be advisable to hold it using a support, thus preventing the user from becoming tired and allowing use by those unable to hold it in their hands. Figure 1 shows an example in which the user is lying down and the iPad has a support that keeps it in a fixed position. This position also allows lighting that creates less shadow than the position described in Figure 2. The ideal distance between the iPad and the user remains the same as without the support (between 20 and 30 cm). This support is suitable for users with some kind of disability who are unable to hold the iPad with their own hands, or who would not otherwise be able to guarantee the conditions described. Lighting Modes The lighting used in the tests was always artificial, preferably incandescent light. Owing to its characteristics, fluorescent light flickers, which increases the number of detection errors and harms the interaction. Test Methodology Lastly, three exercises (described in Table 4) were created for the tests, with which the functioning of the different detection methods could be ascertained (and the handling of the apps therefore validated). Design The design of the algorithm for the system is described in this section. Certain situations were considered for this purpose [21]: -Different lighting modes. -Variable height and position of participants. -Distance between the system and participants. The system design is divided into four major blocks, as shown in Figure 3. The process described in Figure 3 was applied to each of the images deriving from the video source (the front camera of the iPad), thus making it a cyclical process. An open-source library (based on OpenCV) was developed to ensure that this system can be used in other applications that have already been created or are still to be created. In this project we developed a framework that makes use of current methodologies and proven techniques. The biggest challenge was the effective integration of the OpenCV and iOS frameworks. This library incorporates the processing of all the phases, which are explained in more detail below.
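The library itself is written in Objective-C for iOS; purely as an illustration of the cyclical four-stage process just described, the following Python/OpenCV sketch reproduces the same per-frame loop on a desktop webcam (this is not the authors' code; it assumes the cascade files bundled with the opencv-python package, and Stage 4 is left as a stub):

import cv2

# Stage 0: video source (a webcam stands in for the iPad front camera).
cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

for _ in range(300):  # process ~300 frames; a real app would loop indefinitely
    ok, frame = cap.read()
    if not ok:
        break
    # Stage 1: pre-processing -- grayscale (3 channels -> 1) plus equalization.
    gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Stage 2: face detection (Haar cascade) -> head-tracking data.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]   # restrict work to the face ROI
        # Stage 3: ocular detection inside the face ROI.
        eyes = eye_cascade.detectMultiScale(face_roi)
        # Stage 4: pupil detection / gaze estimation would go here
        # (see the pupil-detection sketch later in this section).
cap.release()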
Stage 1: Acquisition and pre-processing The main purpose of this stage is to obtain individual frames from the real-time video captured by the iPad's front camera. Subsequently, in the pre-processing stage, the image is converted to grayscale (reducing the number of channels from three to one) and equalized in order to assist detection. Figure 4 shows the diagram for this process in detail, together with a visual example of the progress made in the different stages. Stage 2: Face Detection This is the stage in which the processing of each of the images captured in Stage 1 gets underway. To do so, the Haar Cascade object detector [22] is used, specially trained to track faces. The Haar Cascade is a very effective method proposed by Paul Viola and Michael Jones in 2001 [21]. It is a machine learning process in which the cascade function is trained on many positive images (images with faces) and negative images (images without faces) [23,24]. Once it has been trained, it is then used to detect objects in images. The algorithm, which in this project tracks the face and eyes [25][26][27][28], requires many positive and negative images in order to train the classifier. One of the greatest contributions made by Viola and Jones was the use of summed-area tables, or integral images (see Table 5). An integral image can be defined as a two-dimensional lookup table in the form of a matrix of the same size as the original image. Each element in the integral image contains the sum of all the pixels located above and to the left of the element's position in the original image. This enables the sum of any rectangular region of the image to be calculated, at any position or scale, using just four lookups, as can be seen in Figure 5. Thanks to this scheme, Haar features of any size can be evaluated in constant time, reducing processing time and enhancing the system's performance. That is why this kind of template matching and classification technique has proved effective in the field of eye tracking [27]. In this way, the data attached to face tracking are provided by obtaining the image matrix, which will be analyzed in the following stage. Furthermore, data are also obtained at this point that enable head tracking. The following image shows Stage 2 in more detail. Lastly, mention should be made of the algorithm created in this phase, which carries out the entire process described in Figure 6, together with the filtering, and returns the position of the head on the screen. To this end, the x position and the detected vector are analyzed and, using certain ranges (upper, lower, and side limits), the position of the head is determined. In this case, the algorithm detects 4 positions (up, down, left, and right). A call to another method is included in order to filter all positions arriving in real time, applying the flow chart shown in Figure 7. Until the position changes, no event takes place; the event in this case consists of indicating the current position. Stage 3: Ocular Detection In the third stage, we start from the matrix deriving from the face detection, so that processing is reduced to the region of interest (ROI) of the head. The same OpenCV resource is once again used to detect both eyes, but in this case a specially designed Haar Cascade is used to detect them. A matrix with both eyes is obtained as a result.
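As a quick numerical illustration of the four-lookup property described above (again a sketch, not the paper's code), the sum over any rectangle can be recovered in constant time from the integral image computed by cv2.integral:

import cv2
import numpy as np

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
ii = cv2.integral(img)  # summed-area table, shape (481, 641), zero-padded

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in img[y:y+h, x:x+w] using just four lookups."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# Sanity check against a direct O(w*h) summation.
assert rect_sum(ii, 100, 50, 32, 16) == int(img[50:66, 100:132].sum())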
A decision was made to work with just one eye, so that the image matrix deriving from the applied Haar Cascade is reduced to half its size, which means that processing time is also reduced by half, a critical consideration in real-time applications. Lastly, this matrix is the one that passes on to the next stage. It is also at this point that eye blinking is obtained, deducing whether the eye is open or shut. The process and end result are shown in detail in Figure 8. The eye detection phase enables the algorithm that detects eye blinking to be created. The blinkControl algorithm performs the Stage 3 process together with its filtering stage. The call to a second method is also included, which is in charge of filtering the different states. Figure 9 shows the flow chart that reflects how the filtering method works. When the change from open to shut is detected, a timer starts to count, and when the eye changes from shut to open the timer pauses, thus calculating the length of time that the eye is shut. Stage 4: Pupil Detection Owing to the hardware constraints referred to in the first stage, different methodologies were evaluated in this stage [29][30][31], although some of them could not be applied owing to hardware limitations. This is the case with the Hough circle transform, which is widely used to detect circles (the pupil); as can be seen in Figure 10, the image resolution makes it impossible to properly detect the pupil (circle). The quality of the camera did not enable the Hough circle transform to be suitably applied: the low resolution and the presence of interference (eyelashes) after magnifying the image so much made it impossible to detect a circle. Ultimately, it was decided to work with the matrix values from the previous phase, detecting the darkest value in the eye (the pupil). A system was developed to deduce the direction of gaze that avoids a calibration phase every time the system is used, as the system is devised to run in the background with minimal user interference. To this end, the following technique was used, which only needs to be set up once by the user. Following this stage, data are obtained for eye-tracking purposes, completing the three objectives set out at the beginning (head tracking, eye blinking, and eye tracking). Lastly, the process is repeated in order to detect the pupil, as shown in Figure 11. To conclude this last phase of the fourth stage, an algorithm was once again developed that is in charge of eye tracking. eyeControl Description This algorithm, based on the pupil coordinate and the width of the eye's region of interest, deduces the direction of gaze (left, center, or right), as can be seen in Figure 12. Two margins were determined (they vary depending on the size of the user's eye). Once they have been adjusted, it is possible to detect whether the user is looking to the left, center, or right, according to whether the central point of the pupil goes beyond either of the margins. Additionally, a call to a second method is included that is in charge of filtering the different positions. Figure 13 shows the flow chart that reflects how the filtering method works. Events occur when changes from center to left and from center to right are detected. Lighting Of the different instances of detection, the most delicate is without doubt pupil detection.
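A minimal transliteration of the darkest-point strategy and the two-margin gaze rule just described (an illustrative Python sketch, not the authors' Objective-C code; the margin values are placeholders and would be tuned per user, with the left margin more pronounced, as noted in the Results):

import cv2

def gaze_direction(eye_roi_gray, left_margin=0.35, right_margin=0.65):
    """Classify gaze as 'left', 'center' or 'right' from a grayscale eye ROI.

    The pupil is approximated by the darkest point of the blurred ROI,
    mirroring the strategy above; the margins are fractions of the ROI
    width, standing in for the per-user margins of eyeControl.
    """
    blurred = cv2.GaussianBlur(eye_roi_gray, (7, 7), 0)
    _, _, min_loc, _ = cv2.minMaxLoc(blurred)   # darkest pixel = pupil guess
    x_rel = min_loc[0] / eye_roi_gray.shape[1]  # horizontal position, 0..1
    if x_rel < left_margin:
        return "left"
    if x_rel > right_margin:
        return "right"
    return "center"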
As explained in the previous section (Stage 4), given the hardware limitations (the camera's IR filter) and its quality, systems deemed more robust to lighting had to be disregarded (lighting being one of the major factors in real-time processing). Thus, certain ideal situations were used as a starting point in the design and development of this library (described in Section 2.2 Methods). In such a scenario, the system works properly (see Section 4 Results), thus fulfilling the purpose of this study, although future work is still needed to improve it, paying close attention to the evolution of the hardware performance of the devices. Demo Application A demo application (see Figure 14) was developed that enables the library to be applied in a real test case. The idea behind this application is to replicate the traditional iPad menu (music applications, images, books, the Internet, etc.) for all those groups of people who are unable to make use of a touch screen. We should recall that only self-contained applications can be developed on the iPad, whereby it is not, for instance, possible to use the library to control the native iPad music application. To this end, a separate application needs to be created that works in the same way as the native one, albeit using the eye and face controls of the library that has been created. Only the music application was developed in the demo, by way of an example. It is operated using three clear, simple controls: -Play/Pause (opening and shutting the eye) -Previous song (looking left) -Next song (looking right) Figure 14. Demo music app. Demo Game Design The application makes use of native Apple libraries in order to gain access to songs stored on the iPad. If the user's left eye remains shut for more than a second (without blinking), the music starts to play randomly. If the user wishes to change song, they look to the right to move on to the next song or to the left to play the previous one, and informative data about the song currently playing are shown in the upper part of the screen. The library developed provides its results in real time; depending on the application being developed, those results may or may not be displayed or used. Results In this section, the technical results of the development of the application are explained in detail, as well as the objective results regarding users' performance in the exercises considered in the tests. The descriptive statistics of the sample were analyzed using SPSS and, furthermore, inferential analyses were carried out using the Mann-Whitney statistical test. This test enabled the differences in the results obtained from Exercises 2 and 3 (described in Section 2.2.3) to be analyzed according to eye color and use of glasses. Exercise 1 was not analyzed, as the eye does not intervene directly in the face-tracking process. Descriptive Analysis Owing to the small number of blue and green eyes in the sample, the dark brown and brown colors were grouped together as "Dark" and the green and blue ones as "Light-colored" (see Table 6). As regards the scores obtained from the exercises, the results proved notable. As can be seen in Table 7, in Exercise 2 a mean of 8.27 on a scale of 0 to 10 was obtained (10 signifies that all 10 sequences in the exercise were successfully carried out).
In Exercise 3, the mean is lower, as more factors interfere in pupil detection than in ocular detection. Table 7. Description of scores obtained from Exercises 2 and 3 (n = 22). Inferential Analysis Results According to Eye Color The differences in scores obtained from the exercises according to eye color are analyzed in this section. As can be observed in Table 8, significance is not below 0.05, so there is no statistical evidence to suggest any real difference between dark and light-colored eyes in terms of the scores obtained from the exercises. The significance obtained in Exercise 3 might suggest a difference between the two colors, but given the limited number of samples, this supposition cannot be assumed. Inferential Analysis Results According to Use of Glasses The differences in scores obtained from the exercises according to use of glasses are analyzed in this section and can be seen in Table 9. As with Table 8, significance is once again higher than the established limit (0.05); in Exercise 3 the significance comes quite close to that limit, albeit insufficiently, so there is no statistical evidence in the scores to support any real difference between using glasses or not. Some of the images captured at random moments during the tests are shown below. Some special cases of erroneous detection were also sought. Figures 15 and 16 show examples of the detection of two users with glasses. Two cases were captured that show the extent of deviation of the detection: In Figure 17, two cases can be seen that include the most common elements that may have a bearing on the end result: -In the left image: eyelashes that may cover much of the eye and pupil or make it difficult to detect. -In the right part of image (a): brightness in the eye caused by a more powerful, direct light. -In the right part of image (b): made-up eyes (creation of dark areas that may interfere with the pupil). Some of these situations produce erroneous data, although thanks to the fourth-stage filtering included in the library, it proved possible to filter out most of these erroneous detections. In any event, such deviations do not affect the system's performance (in Exercise 3, deviations with unfavorable results accounted for 9% of the exercises undertaken). Figure 18 shows the example of another user, in this case with dark brown eyes and without glasses, with a clear distinction being drawn between the three positions detected. It should be mentioned that in the case of the gaze to the left, the eye travels far less than it does from center to right, as can be seen in the images. Thus, the left margin needs to be more pronounced than the right one when selecting the margins that determine where the user is looking. Conclusions In this section, both the results obtained in the tests and the conclusions subsequently drawn have been taken into account, so as to ultimately analyze future lines of research for this project. Taking into consideration the results obtained in the tests and the exercises described in the previous point, the following conclusions have been drawn: -Glasses constitute no hindrance, even when dark and colored ones were used to try and cheat the system. -The eyes that were best detected were light-colored (green) ones.
They obtained 90% accuracy in the most complex test (Exercise 3), and no erroneous detection was apparent. -Face detection was 100% in all cases. Even under conditions of unsuitable light, the Haar Cascade method proved to be very effective [21]. -The results obtained from Exercise 3 depend on the accuracy of the eye detection worked on in Exercise 2. Thus, some of the errors from the third phase stem from proper detection of the eyes rather than of the pupil. -Although all users were positioned at the same distance from the iPad (30 cm), the device's tilt and height proved to be determining factors. As far as general lines of research are concerned, the objectives set out in the project have been satisfactorily met. A library was designed and implemented that enables innovative and useful human-computer interaction at zero cost. An application was also created by way of a test, with a view to applying the library developed, and positive results were obtained. This project is based on commercially available hardware (iPad), which is why a specific solution suited to the available resources needed to be created, taking into consideration both its advantages and disadvantages. Although the iPad at first glance meets all the requirements, a non-invasive eye-tracking system (infrared light) could not be developed, as the front iPad camera contains an infrared filter, which makes it difficult to capture this type of light. As a result of this setback, processing was carried out directly on the color image, with everything that entails. The system is developed for iOS (the mobile operating system of the iPhone and iPad), so it could also be used on the iPhone. Still, the iPhone's front camera is of lower quality (1.2 Mpx), and the applications to be developed in the future are designed for the iPad (given its larger screen). Even so, the authors have also considered the option of testing and porting applications to the iPhone in the near future. Lastly, the eye-tracking, eye-blinking, and face-detection techniques could be applied, and the expected results were obtained in the tests. However, certain lighting conditions are needed in order to properly apply some of these techniques (to prevent false shadows), as stated in the Results section. A statistical survey was carried out with a view to showing the system's accuracy with regard to different eye colors and glasses, although the differences proved to be statistically insignificant. Nonetheless, this project shows that technologies can be made accessible to certain social groups if specially designed products devised for such a purpose are created. It has also been possible to show that this project's limitations have been imposed by the hardware used rather than the software, which is an important point. Thus, it is hoped that, as a result of this type of project, manufacturers will increase the number of features and resources offered by their products, to the extent that there will be no barrier or limitation that might make it difficult to implement such systems. Final remarks: -When the original idea of this project was first considered, there was at the time no project that combined these new forms of interaction in a mobile terminal or tablet. At the present time, similar products are starting to emerge, which would seem to indicate that this is innovative technology with a future.
-In the case of the demo that was developed, a decision was made to use the library to control applications and browse them, thus replacing the need to use the touch screen, although these technologies can be expanded and applied in a wide range of areas (video games, entertainment, utility, assistance, etc.). -The results obtained during the development of the project and the tests carried out show that, in most real-time image-processing systems, many factors play a part in the system's reliability. Having said this, a highly promising product has been obtained, and in this case the limiting factor has been the hardware. -Open-source resources have been used, which is why an effort is made to share the resources created with the community by providing the relevant documentation.
7,390.6
2015-01-22T00:00:00.000
[ "Computer Science" ]
Machine learning to reveal an astute risk predictive framework for Gynecologic Cancer and its impact on women psychology: Bangladeshi perspective Background In this research, an astute system has been developed using machine learning and data mining approaches to predict the risk level of cervical and ovarian cancer in association with stress. Results For the functioning factors and subfactors, several machine learning models, namely Logistic Regression, Random Forest, AdaBoost, Naïve Bayes, Neural Network, kNN, CN2 Rule Inducer, Decision Tree, and Quadratic Classifier, were compared using standard metrics, e.g., F1, AUC, and CA. For certainty, information gain, gain ratio, and Gini index were computed for both cervical and ovarian cancer. Attributes were ranked using different feature selection evaluators, and the most significant analysis was then made with the significant factors. Factors such as number of children, age at first intercourse, age of husband, Pap test, and age are the most significant factors for cervical cancer. On the other hand, genital area infection, pregnancy problems, use of drugs, abortion, and the number of children are important factors for ovarian cancer. Conclusion The resulting factors were merged, categorized, and weighted according to their significance level. The categorized factors were indexed using the ranker algorithm, which assigns them a weight value. An algorithm was then formulated that can be used to predict the risk level of cervical and ovarian cancer in relation to women's mental health. The research will have a great impact in low-income countries like Bangladesh, as most women in low-income nations are unaware of these cancers. As these two can be described as the cancers most sensitive for women, the application developed from the algorithm will also help to reduce women's mental stress. More data and parameters will be added in future research in this direction. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-021-04131-6. Introduction Neurodegenerative disease and post-traumatic neurodegenerative disorders are considered a genuine factor in some major illnesses. It has been observed that individuals suffering from neurodegenerative disorders have a 55% chance of being affected by cervical cancer [1,2]. Women who experienced at least 6 symptoms of post-traumatic neurodegenerative disorder had a greater risk of being affected by ovarian cancer [3]. According to the WHO, cancer is the second leading cause of death, responsible for 9.6 million deaths in 2018 [4]. Cancer is an uncontrolled growth of abnormal cells beyond their regular territory, with the ability to invade or spread to other organs. Among the different types of cancer, cervical and ovarian cancer are the most prominent hazards to women's health [5]. Every year, more than 300,000 women die of cervical and ovarian cancer, and a further half a million are diagnosed. Around 500,000 women are affected by cervical cancer every year, and 274,000 die from it [6]. A study [7] aimed to identify risk factors for Lower Limb Lymphedema (LLL), a persistent and debilitating condition afflicting patients who undergo lymphadenectomy for gynecologic cancer, and to develop a model predicting its occurrence. To encourage future research in the field of gynecologic cancer, a model was introduced in a study [8] to predict the risk of psychological and behavioral morbidity.
Neurodegenerative disease is linked with diverse neurodegenerative disorders, in particular through the pathophysiological importance of stress in Alzheimer's disease and several other conditions. Some previous studies have also shown that neurodegenerative disease spurs on cervical and ovarian cancer [9,10]. The National Cancer Institute summarizes that cervical tumors form in the cervix, the body part connecting the uterus and vagina [11,12]. Human Papillomavirus (HPV) is the main cause of cervical cancer [13]. Given the well-known impact of HPV on cervical cancer, a study [14] reviewed the articles on knowledge of HPV and cervical cancer among Malaysian inhabitants before and after the implementation of HPV vaccination programs. A study [15] found that cervical cancer develops slowly without showing any indication at the beginning, making it seemingly hard to discover, although it can be detected with regular Pap tests. A study was made with 70 patients in China with insomnia provoked by cervical cancer [16]. This trial preliminarily selected patients with sleep deprivation arising from or exacerbated by cervical disease [17]. A survey carried out using the data of the Nurses' Health Study found a substantial relationship between treatment for PTSD (Post-Traumatic Stress Disorder) and the growth of ovarian cancer [16]. According to the American Cancer Institute, ovarian cancer is supposed to start in the ovaries, but recent knowledge shows that numerous ovarian tumors may begin in the fallopian tubes, which connect the two ovaries to the body of the uterus. Like cervical cancer, ovarian cancer is hard to recognize [18]. The ovaries lie deep inside the abdominopelvic cavity, making them hard to view or feel [19,20]. Epithelial ovarian cancer remains exceptionally dangerous (Hunn & Rodriguez, 2012). A feasibility study [20] offers risk-management options for the screening and prevention of ovarian cancer. It has been shown that, due to alteration of the p53 gene, cells affected by neurodegenerative disease can induce ovarian epithelial cancer [21]. A study [22] presents data showing that the secretion of inflammatory proteins in ovarian cancer cells is prompted by stress hormones. According to [23], cancer is the main cause of death. Recent studies [15] suggested that the incidence of lung cancer has increased quickly, becoming the most widely recognized disease worldwide. A full study was made in [24] to build a framework that an individual can use to test their risk level for lung cancer, and utilizing the acquired knowledge, an experiment was able to predict the risk level of lung cancer [25]. Another study was carried out to build a system that an individual can use to know their risk level for skin cancer [26]. Presently, Type 1 diabetes is also an alarming sickness in Bangladesh. Type 1 diabetes, known as juvenile diabetes or insulin-dependent diabetes, is a chronic condition in which the pancreas produces little or no insulin. Data were gathered in Dhaka based on a particular questionnaire to show the association and significance among the levels of factors [27]. In this paper, the "Introduction" section presents the risk prediction models and the techniques behind their predictions. We conduct experiments on three datasets in the "Material and methods" section, carried out with the help of knowledge discovery. Their efficiency in prediction is shown with a set of figures and tables.
This section also contains the output of our research, which is the mobile application that we have prepared for risk prediction. To prepare this application, we first devised an equation to differentiate the risk levels and prepared an algorithm. The algorithm is provided in the "Material and methods" section. Finally, this work is concluded in the "Results and discussion" section, and future work is proposed afterward. Material and methods In this paper, popular data mining and machine learning models were compared using metrics such as accuracy, precision, recall, F1, and support [28]. These were computed using the sklearn library [29] for Python and the Orange machine learning and data mining toolkit. We have further proposed an equation based on the difference between the result metrics of these two toolkits. Using the Apriori algorithm, correlations among the significant factors, which describe the dependencies among the factors, were derived. The ranker algorithm is more efficient for ranking features for indexing than BestFirst or GreedyStepwise; feature selection was therefore performed using the ranker algorithm. Key factors in the data analysis were derived for all the evaluators of the ranker algorithm [30]. Afterward, the results were compared among them and the worthiest attributes were obtained. For prediction, it is important to find the significant factors. Here, the importance of factors has been gathered according to information gain, gain ratio, and Gini index [31,32]. Data collection In total, 866 records were collected from various diagnostic centers for patients suffering from ovarian cancer, cervical cancer, and (mental) stress disorder. Data were collected from 161 female patients who were experiencing cervical cancer, using a questionnaire that includes 25 attributes; 522 ovarian cancer patients were interviewed with set questions covering 46 risk factors. The questionnaires for cervical cancer and ovarian cancer are provided as Additional File 1 and Additional File 2, respectively. Data preprocessing Data cleaning, data integration, data selection, and data transformation are the four leading tasks of data pre-processing, converting a noisy, inconsistent dataset into a format suitable for mining and learning predictability. Corrupted or distorted data with meaningless information, provided by the patients while answering the questionnaire, constitute noisy data. In the data cleaning phase, those noisy, conflicting, and inconsistent data were removed. Valuable data were joined in the data integration phase. Data suited to the analysis were retrieved from the dataset in the data selection phase. Finally, in data transformation, data were converted to proper structures that fit data mining and machine learning analysis. To avoid data collisions, a small amount of data was altered. Evaluation of the performance of machine learning models In this step, eight classifiers, namely SVM, Random Forest, Logistic Regression, AdaBoost, Naïve Bayes, Neural Network, kNN, and CN2 Rule Inducer, were used for evaluation with Orange, and ten classifiers, namely SVM, Random Forest, Logistic Regression, AdaBoost, Naïve Bayes, Neural Network, kNN, Gaussian Process, Decision Tree, and Quadratic Classifier, were used for assessment with sklearn. In this context, performance was measured using standard metrics such as the area under the ROC curve (AUC), precision, classification accuracy, recall, specificity, F measure, and support.
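As an illustration of this kind of model comparison (not the authors' exact pipeline; the dataset file and column names are hypothetical, and the 24% test split mirrors the support note in the next section), a compact sklearn sketch:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Hypothetical preprocessed dataset: numeric risk factors plus a binary label.
df = pd.read_csv("cervical_preprocessed.csv")
X, y = df.drop(columns=["diagnosis"]), df["diagnosis"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.24, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "kNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: CA={accuracy_score(y_te, pred):.3f} "
          f"F1={f1_score(y_te, pred):.3f} AUC={roc_auc_score(y_te, proba):.3f}")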
A decision tree was constructed with the important factors of the cervical, ovarian, and stress datasets. Performance measures: Classification accuracy rates for the datasets were analyzed. For each dataset, two classes were identified, namely positive and negative. There are four possibilities for a single prediction: true positive, true negative, false positive, and false negative. True positives and true negatives describe how many correct predictions were made. False positives and false negatives describe how many instances were incorrectly predicted as positive or negative when they actually belong to the other class. Accuracy: the ratio of the number of correct predictions to the total number of predictions, accuracy = (TP + TN)/(TP + TN + FP + FN) (Eq. 5). Support: the number of samples analyzed after training and splitting the whole dataset; from our analysis, it is seen that sklearn uses only 24% of the whole dataset (Eq. 6). The first and foremost step in judging the probability is to find significant factors through various analyses. The study was undertaken with a range of measures and algorithms to find the significant factors. The level of importance of the factors was acquired using information gain, gain ratio, and Gini index. Information gain is a measure of the decrease in uncertainty and is estimated from entropy. Entropy measures the unpredictability of the processed information: the higher the entropy, the harder it is to draw any conclusions from the data, H(Y) = -Σ p(y) log2 p(y) (Eq. 7). The conditional entropy is the summation, over the feature's values, of the probability of each value times the entropy of the label given that value; by deducting this value from the entropy of the label, the information gain is obtained, IG(Y, X) = H(Y) - H(Y|X) (Eq. 8). Gain ratio modifies information gain by taking the intrinsic (split) information into account, including the number and sizes of the branches, which reduces the bias of information gain: GR(Y, X) = IG(Y, X)/H(X) (Eqs. 9, 10). Gini index measures the impurity of a single feature and is obtained by subtracting the sum of squared probabilities from one: Gini = 1 - Σ p(y)^2 (Eqs. 11, 12). From information gain, we acquired the certainty of individual features for a specific label. Gain ratio provides the same, including the intrinsic information of the dataset. Gini index indicates how impure an individual factor is. All of these values lie between 0 and 1. By analyzing the chi-square test and the results of the feature selection evaluators, we found the most significant factors working behind cervical and ovarian cancer in connection with stress. These factors were then given different scores based on their significance level. Afterward, Eq. 12 was defined to separate the risk levels of an individual. Results and discussion The results are discussed and analyzed in this section. Some data mining and machine learning techniques have been applied. We have analyzed the cervical and ovarian datasets and found common patterns of significant attributes. The attributes were selected as common and highly significant factors and correlated with the possibility of cervical or ovarian cancer. Table 1 shows the values of information gain, Gini index, and gain ratio for the parameters. Table 1 also shows the chi-square test values along with ranking values. Chi-square values were acquired from statistical analysis and ranked values were found with the data mining algorithm.
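For concreteness, the three certainty measures defined above can be computed directly; a minimal numpy sketch under the standard definitions (not the authors' code):

import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) = -sum p log2 p (Eq. 7)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(feature, labels):
    """IG(Y, X) = H(Y) - H(Y|X) (Eq. 8)."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    cond = sum(w * entropy(labels[feature == v])
               for v, w in zip(values, weights))
    return entropy(labels) - cond

def gain_ratio(feature, labels):
    """GR = IG / intrinsic (split) information (Eqs. 9-10)."""
    iv = entropy(feature)  # intrinsic information of the split
    return info_gain(feature, labels) / iv if iv > 0 else 0.0

def gini(labels):
    """Gini impurity = 1 - sum p^2 (Eqs. 11-12)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)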
A further analysis of the parameters was conducted with different attribute evaluators and is shown in Table 2. The serial position of each attribute refers to its position in the ranking table of the corresponding sub-evaluator. Figure 1 shows the logistic regression analysis values for the actual and predicted data. On the x-axis, a total of 25 values represent 25 separate parameters, in the same order as in Table 2. The 24th parameter is related to the mental health or stress of women. The figure depicts that the higher number of affected women is captured by the linear logistic regression. The KDE plot of Fig. 2 estimates the probability distribution functions of affected and non-affected women. The sub-parameters were assigned numeric values, e.g., 1 represents age above 60, 2 represents 46-60, etc. The affected curve peaking between 1 and 1.45 means that the largest number of affected women were aged over 60; similarly, the point at 2 (density 0.7) indicates that the second most affected group of women was aged 46-60. The violin plot in Fig. 3 visualizes the distribution of the data and its probability density; it shows that women with 3-5 or more than 5 children suffered most, the violin at level 0 corresponding to a higher number of children among those who had these cancers. Tables 3, 4, and 5 show the accuracy of the data for mental stress, ovarian cancer, and cervical cancer according to the different machine learning classifiers, and also display the classification accuracy, F1, and precision metrics, which can be used to compare the machine learning models. The accuracy level lies between 0 and 1; the prediction accuracy of a model increases as the value gets closer to 1, which also indicates its significance. All the significant factors and sub-factors of the diseases were first indexed with the help of the ranker algorithm and then combined to obtain a complete picture, which is later used for prediction. The notable features, with their weight values, are displayed in Table 6. Finally, an algorithm was developed based on the weight values of Table 6. After analyzing the significance of the factors of cervical cancer, ovarian cancer, and stress, we derived an algorithm for predicting the risk levels of the diseases, shown below:
Step 1. Start
Step 2. read weights
Step 3. total_weights ← weights
Step 4. prediction_difference ←
Step 5. if total_weights <= prediction_difference + lowest then print LOW RISK
Step 6. else if total_weights <= (prediction_difference × 2) + lowest then print MEDIUM RISK
Step 7. else if total_weights <= (prediction_difference × 3) + lowest then print HIGH RISK
Step 8. else print VERY HIGH RISK
Step 9. Stop
The flowchart of the algorithm is shown in Fig. 4. With the help of the above algorithm, we found the respective flowcharts for the diseases. Finally, we put all the flowcharts and significant factors together to elicit the superior significant factors. Afterward, combining the cervical, ovarian, and stress factors, we drew the flowchart for all of them. From those flowcharts, and using the algorithm that predicts from the significant factors, we prepared an application, shown in Figs. 5 and 6. The application is designed to store future user-provided data in the cloud and use that information for further investigation. We obtained good results with the sample data after training and testing the models, which makes us confident in using them to predict these diseases.
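A runnable reading of the risk-level algorithm above follows. Note that the right-hand side of Step 4 is missing in the source; the sketch assumes prediction_difference is the weight range divided evenly across the four risk levels, an assumption flagged in the code:

def risk_level(weights, lowest, highest):
    """Classify the total factor weight into one of four risk levels.

    ASSUMPTION: the source omits Step 4's definition, so we take
    prediction_difference = (highest - lowest) / 4, i.e. the weight
    range split evenly across the four levels.
    """
    total = sum(weights)
    diff = (highest - lowest) / 4  # assumed definition of Step 4
    if total <= lowest + diff:
        return "LOW RISK"
    if total <= lowest + 2 * diff:
        return "MEDIUM RISK"
    if total <= lowest + 3 * diff:
        return "HIGH RISK"
    return "VERY HIGH RISK"

# Example with made-up weight bounds: total = 1.5 falls in the second band.
print(risk_level([0.4, 0.9, 0.2], lowest=0.0, highest=4.0))  # MEDIUM RISK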
After utilizing the upcoming data, we will be able to predict the diseases much more accurately. The combined decision trees of cervical and ovarian cancer are presented in Fig. 4, which indicates that there is a maximal chance of cervical cancer if a woman has had more than 2 children. If she has had 1-2 children and her first intercourse occurred when she was less than 16 years old, the possibility of cervical cancer is also maximal. Here, a decision is made taking 6 key factors for emerging cervical cancer. Likewise, taking 15 parameters, a decision is made to find the risk of ovarian cancer appearing; the main risk factors are abortion, age of husband, alcohol consumption, etc. Conclusion Cervical and ovarian cancers are the dominant causes of women's death in Bangladesh. The majority of people are unaware of them, and death due to cervical and ovarian cancer is often inescapable. From the findings, we obtained evidence that the immune response may be damaged by neurodegenerative disease, which may even enhance the development of cancer. In this study, the risk factors of cervical and ovarian cancer were analyzed carefully. Here, data mining and machine learning models such as SVM, Random Forest, Logistic Regression, AdaBoost, Naïve Bayes, Neural Network, kNN, CN2 Rule, Decision Tree, and Quadratic Classifier have been used, and those models were compared with two different tools. The results obtained for neurodegenerative disease show that AdaBoost performed best, with a classification accuracy of 78.8% in Orange and 79% in sklearn. In the case of cervical cancer, Logistic Regression provides the best score of 84.8%, and with sklearn we obtained 79.3%. On the other hand, SVM shows the best accuracy of 88.3% in Orange, and the Decision Tree provides 98.6% classification accuracy in sklearn for ovarian cancer. Based on all the analyses, an algorithm along with a smart app was finally developed from the weight values generated by the analysis. Future work can be done by increasing the dataset size, tuning parameters, and carrying out more effective analysis. Summary of the work Cervical and ovarian cancer are among the most frightening diseases for women in a low-income nation like Bangladesh. The community of Bangladesh lags behind in education and awareness about these two cancers. Previous studies have found that stress somehow influences these two cancers, yet no prediction of cervical and ovarian cancer among Bangladeshi women is available in this modern age. Purpose: To find the association between the factors and the most significant factors of stress, cervical, and ovarian cancer, and to contribute a prediction of the occurrence of cervical and ovarian cancer based on their most relevant factors as well as stress parameters. Methods: A case-control study was made on 298 patients with cervical cancer and 522 patients with ovarian cancer. For cervical cancer, 197 cases and 100 controls were considered; in the case of ovarian cancer, 267 cases and 254 controls were used for the data mining analysis. To analyze performance, several machine learning models, e.g., Logistic Regression, Random Forest, AdaBoost, Naïve Bayes, Neural Network, kNN, CN2 Rule Inducer, Decision Tree, and Quadratic Classifier, were compared using their standard metrics. For certainty, information gain, gain ratio, and Gini index were computed for both cervical and ovarian cancer. Attributes were ranked using different feature selection evaluators. Then the most significant analysis was made with the significant factors.
Factors such as number of children, age at first intercourse, age of husband, Pap test, and age are the most significant factors for cervical cancer. On the other hand, genital area infection, pregnancy problems, use of drugs, abortion, and number of children are important factors for ovarian cancer. The analysis was made with the significant factors of stress, cervical cancer, and ovarian cancer; it will help us to predict the risk of cervical or ovarian cancer occurring and may help to abate these cancers not only in Bangladesh but all over the world. After the analysis, a weight table was created to build an algorithm that can predict the risk level of two fatal cancers of women (cervical and ovarian) along with mental health.
4,761.4
2021-04-24T00:00:00.000
[ "Medicine", "Psychology", "Computer Science" ]
On the Electronic Effect of V, Fe, and Ni on MgO(100) and BaO(100) Surfaces: An Explanation from a Periodic Density Functional A periodic density functional study of the V, Fe, and Ni sublayer-doped MgO(100) and BaO(100) surfaces was carried out in the context of the GGA approximation. Results suggest that the doping atoms accommodate better in MgO than in BaO because the covalent radii of the doping atoms are closer to that of the Mg atom. The size of the doping atom, bulk forces, and electronic effects play an important role in the structural changes observed in the doped surfaces studied herein. Of all the doped surfaces studied, the Ni-doped BaO(100) surface is shown to be a promising material for trapping molecules with partially occupied states. Introduction Alkaline earth metal oxides (AEMO) have been studied in recent years for their ability to trap atmospheric gases such as NO and CO [1,2]. The capability of adsorbing molecules is due to their high Lewis basicity, which increases as the alkaline earth metal becomes larger and more electropositive [3]. Superbase sites can be generated by promoting the surface with zero-valent alkali metals [4]. Zero-valent alkali metals donate one electron to the lattice, which is localized in the defective site and displays electron density characteristics similar to those of an oxygen vacancy (F+ center) [5]. The basicity of these sites is such that NO is transformed to paramagnetic NO2^2-; these molecules are easily detected by the ESR technique [6]. Apart from the alkali metals, other elements can dope the crystalline structure of the AEMO. Raschman and Fedoročková [7], using atomic absorption spectroscopy, found that low concentrations of CaO, Fe2O3, Al2O3, and SiO2 are present in natural MgO. These elements, as in the case of the alkali metals, affect the basicity of the AEMO and hence the activity of these materials when they are used as catalysts or trapping materials. Ueda et al. [8,9] studied the reaction of nitriles with methanol to form α,β-unsaturated nitriles using different M-MgO catalysts (M = Al, Fe, Cr, Mn, Ni, and Cu). The authors found that whereas MgO was practically unable to transform acetonitrile, doped MgO displayed some activity and, depending on the dopant, the selectivity could change to acrylonitrile or propionitrile. A more recent study of NO storage and reduction over Pt-BaO/Al2O3 catalysts showed that it is possible to modify the storage-reduction selectivity by controlling the Pt concentration [10]. Thus, alkali or transition metal atoms transform the electronic environment of the AEMO surfaces, producing changes in their basicity which modify their performance in molecular adsorption. From a theoretical point of view, only a few works are found in the literature on metal-doped AEMO. Halim et al. [11], using embedded clusters in the context of density functional theory (DFT), found that defective sites and surfaces doped with transition metal atoms drastically enhance the adsorption of CO on AEMO. Baltrusaitis et al.
[12], using a periodic DFT approach, studied the adsorption of NO2, CO2, and SO2 on Ca- and Fe-doped MgO(100) surfaces. Because self-diffusion [13] or counterdiffusion [14] can occur in AEMO, the doped surfaces were built by replacing a Mg atom of the second layer of an undefected surface with a Ca or Fe atom. The authors found that, despite the depth of the dopant, the surfaces displayed enhanced adsorption in comparison with the undoped surfaces. These results showed that doping affects the electronic environment of the surface even when the metal atom diffuses to more internal layers. In a previous work, we studied the interaction of NO with the defective Au-doped BaO(100) surface [15] using a periodic DFT approach. We found that the overlap of the density of states (DOS) between the surface and the adsorbate gives a qualitative view of the adsorption energy: the adsorption energy increases as the overlap between adsorbate and surface increases. The aim of this work is to evaluate, using a periodic DFT approach, the electronic changes produced by the sublayer doping of the MgO(100) and BaO(100) surfaces with a V, Ni, or Fe atom. The different atomic radii and electronegativities of the transition metal atoms give more insight into the structural and electronic changes produced by the doping in the AEMO structure. Electron density differences (EDD) are used to identify the electron density movement due to the doping. Computational Details Geometry optimizations were performed using the Vienna ab initio simulation program (VASP) [16,17]. The Kohn-Sham equations were solved with the generalized gradient approximation (GGA) proposed by Perdew and Wang [18]. The projector-augmented-wave (PAW) method of Blöchl [19], in the formulation of Kresse and Joubert [20], was applied to describe electron-ion interactions. Standard PAW potentials were used for Mg, Ba, Fe, V, Ni, and O with valence electron distributions of 2s^2, 5s^2 5p^6 6s^2, 3d^7 4s^1, 3d^4 4s^1, 3d^9 4s^1, and 2s^2 2p^4, respectively. Brillouin-zone sampling was performed on Monkhorst-Pack special points [21] using a Methfessel-Paxton integration scheme. The plane-wave cutoff was set to 300 eV throughout all calculations, except for the optimization of the bulk unit cell parameters, where a 400 eV plane-wave cutoff was used. This cutoff has been shown to be high enough to reach sufficient convergence in this kind of system [15]. The cubic MgO and BaO bulks were optimized starting from the experimental structure [22] with a 4 × 4 × 4 Monkhorst-Pack k-point mesh. The geometric parameters obtained were 4.236 and 5.604 Å for MgO and BaO, respectively. The values obtained herein are in good agreement with the experimental parameters (4.211 and 5.523 Å for MgO and BaO, resp.)
and with previous values obtained with a similar approach [1]. It is well known that a four-layer slab with a vacuum of the same thickness is enough to reach surface energy convergence for these oxides [1]. In this work, the MgO(100) and BaO(100) surfaces were modelled with a 2 × 2 five-layer slab (see Figure 1), where the two bottom layers were fixed to the optimized bulk structure and a vacuum separation 13 Å thick was set between two periodically repeated slabs. Sublayer doping was carried out by substituting one Mg or Ba atom of the second layer with one doping atom. We tested the effect of different spin multiplicities on the stability of the doped surfaces studied. All doped surfaces (both MgO(100) and BaO(100)) showed ferromagnetic spin arrangements, and we obtained quartet, quintet, and triplet states for the V, Fe, and Ni doped surfaces, respectively. The electronic structure was analyzed from the DOS plots, and the electron charge distribution was examined by the Bader method [23,24]. The EDD was computed as Δρ = ρ(surf+metal) − ρ(surf) − ρ(metal), where ρ(surf+metal) is the electron density of the surface plus the metal atom, and ρ(surf) and ρ(metal) are the electron densities of the surface and of the metal atom, each at the optimized geometry of the doped system. Δρ provides information on the electron redistribution upon doping: positive values correspond to density gain and negative values to density loss. Table 1 lists the equilibrium distances between the doping metal and the six nearest O atoms (one superficial (O S ), one at the bottom (O B ), and four equatorial (O E )) for the MgO(100) and BaO(100) surfaces. For the MgO(100) surface, there is a small increase in all metal-O distances, even in the case of the Ni atom, whose covalent radius is smaller than that of the Mg atom (1.24 Å for the Ni atom and 1.41 Å for the Mg atom [25]). This is in good concordance with the experimental findings for Li doped MgO(100) surfaces: the Li-O distance increases [26][27][28] in comparison with the bulk Mg-O distance, although the Li covalent radius (1.28 Å) is smaller than that of the Mg atom. It is evident that not only does the atom size play an important role in the structural changes observed in doped MgO(100) surfaces, but bulk forces and electronic effects must also affect the interaction between the doping atom and the O atoms. Nolan and Watson [29], using an approach similar to the one used in this work but with the PBE functional, found that the PBE functional produces an incorrect description of the geometry and electronic structure of the doped MgO(100) surface.
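To make the slab setup above concrete, the following is a minimal sketch (in Python, assuming the ASE library is available) of how such a doped 2 × 2 five-layer slab could be constructed and written out for VASP; the lattice constant is the PW91-optimized value quoted above, and the layer-selection tolerances are illustrative assumptions, not the authors' scripts.

```python
# Sketch: build a 2x2 five-layer MgO(100) slab with one second-layer Mg
# replaced by Fe, fix the two bottom layers, and write a VASP POSCAR.
import numpy as np
from ase.build import bulk
from ase.constraints import FixAtoms
from ase.io import write

a = 4.236                                        # PW91-optimized MgO lattice constant (A)
cubic = bulk("MgO", "rocksalt", a=a, cubic=True) # 8-atom conventional cell
stack = cubic.repeat((2, 2, 3))                  # 2x2 laterally, six (100) atomic layers

z = stack.positions[:, 2]
keep = np.where(z < 2.0 * a + 0.1)[0]            # keep the bottom five atomic layers
slab = stack[keep]

cell = np.array(slab.get_cell())
cell[2, 2] = 2.0 * a + 13.0                      # ~13 A vacuum above the slab
slab.set_cell(cell)

z = slab.positions[:, 2]
levels = np.unique(np.round(z, 2))               # the five layer heights
slab.set_constraint(FixAtoms(                    # fix the two bottom layers
    indices=np.where(z < levels[2] - 0.1)[0].tolist()))

# Dope the sublayer: replace one Mg in the second layer from the top.
sub = np.where(np.isclose(z, levels[-2], atol=0.1))[0]
mg = [i for i in sub if slab[i].symbol == "Mg"][0]
slab[mg].symbol = "Fe"

write("POSCAR_Fe_MgO100", slab, format="vasp")
```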
Results and Discussion The results obtained in this work and those obtained with the same GGA functional [30,31] seem to indicate that the PW91 functional performs much better than its homologous PBE functional for the study of the structural properties of the doped MgO(100) surface. Due to the larger cell vectors of the BaO(100) surface, the volume occupied by the Ba atom is large enough to prevent optimal interaction between the doping atom and the six O atoms. The doping atoms are nearest to the O S atom (interaction distances below 2 Å) and, in the case of the V atom, to the four O E atoms. For the Fe and Ni atoms, larger interaction distances are observed with respect to the O E atoms, suggesting that the interaction is very weak. For the three doping atoms, the distance with respect to the O B atom is above 3.2 Å, which indicates that there is no interaction between them. The Bader analysis shows that the doping atoms transfer charge to the surface (see Table 2). In general, the charge transferred depends on the electronegativity of the atom; that is, the more electronegative the atom, the less charge is transferred to the surface. The V atom transfers approximately 1.5 e − for both the MgO(100) and BaO(100) surfaces; however, the Fe and Ni atoms give less charge in the case of the BaO(100) surface than in the MgO(100) surface. Analyzing the EDD plots of the doped surfaces (see Figures 2 and 3), it is possible to observe that, for the MgO(100) surface, the three doping atoms interact with all six O atoms. This is consistent with the fact that the covalent radii of the doping atoms are close to that of the Mg atom (differences of less than 0.17 Å); hence, the doping atoms fit very well in the Mg site. However, in the case of the BaO(100) surface, where the doping atoms seem to interact with only five O atoms, there is no electron density movement in the interaction region between the doping metals and the O B atom; furthermore, for Fe and Ni the electron density difference is insignificant in the region between the metal and the O E atoms. The EDD plots corroborate the assumption made from the structural analysis: in the BaO(100) surface, the Fe and Ni atoms interact with only one O atom, diminishing the possibility of transferring charge to the surface. Since the Fe and Ni atoms lose approximately one e − , these systems could be similar to those observed in Li doped MgO(100) surfaces (Li + O − ), which have been reported to be responsible for promoting reactions in the oxidation of methane [32]. Figure 4 displays the total DOS for the doped MgO(100) surfaces. In a previous work [15], we noted that the valence band of the BaO(100) surface moves to lower energy regions as the surface increases its electron reservoir, that is, from the undefected surface to the defective surface (which has two electrons trapped in the defect). Observing the changes that occur in the band structure of the doped MgO(100) surfaces, it can be seen that the doping atom introduces changes in the valence band similar to those of the O defective surfaces. These changes are due to the inclusion of high energy states of the doping atom (see the blue line in Figure 4) near the Fermi level. As the electron number of the doping atom increases, its electronic states near the Fermi level become more stable and closer together (due to the crystal symmetry), resulting in less displacement of the valence band of the surface to lower energy regions. Regardless of the doped surface, the electronic states nearest below the Fermi level belong to the doping atom. Displacement of the valence band to lower energy regions is also observed
in the case of the doped BaO(100) surfaces; however, in the case of the Ni atom, the electronic states nearest below the Fermi level do not belong completely to the doping atom (see Figure 5). In addition to the s and p orbitals, the BaO(100) surface has 6d orbitals (from the Ba atom) close to the Fermi level (see Figure 6). Then, electronic states from the Ni atom, some of which have energies near the highest energy states of the Ba atom, push part of the valence band of the surface up to higher energy regions. The band nearest below the Fermi level is now formed by electronic states of both the doping atom and the surface. These results suggest that the doping atoms not only introduce new electronic states near the Fermi level but can also move surface electronic states toward the Fermi level, increasing the ability of the doped surfaces to adsorb molecules with partially occupied states [15]. Conclusions V, Fe, and Ni sublayer doped MgO(100) and BaO(100) surfaces were studied using a periodic DFT approach in the context of the PW91 functional. Our results indicate that the PW91 functional performs well for the study of the structural properties of the doped MgO(100) and BaO(100) surfaces. The doping atoms fit very well in the sublayer of the MgO(100) surface; however, due to the larger covalent radius of the Ba atom, in the BaO(100) surface the doping atoms accommodate near the top layer, displaying the strongest interaction with the superficial O atom of the surface. The interaction of Fe and Ni in the BaO(100) surface could be similar to that observed in Li doped MgO(100) surfaces. In general, the doping atoms insert new electronic states near the Fermi level, which move the valence band of the surface down to lower energy regions. However, some orbitals of the Ni atom move the higher electronic states of the BaO(100) surface up. Then, the valence band just below the Fermi level is formed by electronic states of both the surface and the doping atom, which could improve the performance of the BaO(100) surface for trapping molecules with partially occupied states. The size of the doping atom, bulk forces, and electronic effects all play an important role in the structural changes observed in the doped MgO(100) and BaO(100) surfaces. Figure 1: Lateral (a) and top (b) views of the slab used for all calculations. Oxygen atoms are shown as red balls and metal atoms as green balls. Figure 2: EDD for the V (a), Fe (b), and Ni (c) doped MgO(100) surfaces. The blue region corresponds to a density gain and the yellow region to a density loss. Oxygen atoms are shown as red balls and metal atoms as green balls. For a better overview, only a 1 × 1 × 1 cell was used. Figure 3: EDD for the V (a), Fe (b), and Ni (c) doped BaO(100) surfaces. The blue region corresponds to a density gain and the yellow region to a density loss. Oxygen atoms are shown as red balls and metal atoms as green balls. For a better overview, only a 1 × 1 × 1 cell was used. Figure 4: DOS of the V (a), Fe (b), and Ni (c) doped MgO(100) surfaces. In all cases, the undoped, doped, and projected DOS of the doping atom are displayed with black, red, and blue lines, respectively. Only the region near the Fermi level is plotted for a better overview. Figure 5: DOS of the V (a), Fe (b), and Ni (c) doped BaO(100) surfaces. In all cases, the undoped, doped, and projected DOS of the doping atom are displayed with black, red, and blue lines, respectively. Only the region near the Fermi level is plotted for a better overview. Figure 6: Orbital-projected DOS of the BaO(100) surface near the Fermi level.
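The DOS overlays of Figures 4 and 5 (undoped in black, doped in red, doping-atom projection in blue, aligned to the Fermi level) can be reproduced from standard VASP output. The following is a minimal sketch assuming the pymatgen library; the file names and the Fe site lookup are illustrative assumptions.

```python
# Sketch: overlay undoped total DOS, doped total DOS, and the projected
# DOS of the doping atom, all shifted so E_F = 0, near the Fermi level.
import matplotlib.pyplot as plt
from pymatgen.io.vasp import Vasprun
from pymatgen.electronic_structure.core import Spin

def total_dos(path):
    """Return (energies relative to E_F, spin-summed DOS, CompleteDos)."""
    dos = Vasprun(path, parse_dos=True).complete_dos
    e = dos.energies - dos.efermi
    d = dos.densities[Spin.up] + dos.densities.get(Spin.down, 0)
    return e, d, dos

e0, d0, _ = total_dos("undoped/vasprun.xml")
e1, d1, dos1 = total_dos("Fe_doped/vasprun.xml")

# Projected DOS of the doping atom (Fe site index is an assumption).
fe_site = dos1.structure.indices_from_symbol("Fe")[0]
pdos = dos1.get_site_dos(dos1.structure[fe_site])
dp = pdos.densities[Spin.up] + pdos.densities.get(Spin.down, 0)

plt.plot(e0, d0, "k", label="undoped")
plt.plot(e1, d1, "r", label="doped")
plt.plot(e1, dp, "b", label="doping atom (projected)")
plt.xlim(-6, 3)
plt.xlabel("E - E$_F$ (eV)"); plt.ylabel("DOS (states/eV)")
plt.legend(); plt.tight_layout(); plt.savefig("dos_Fe_MgO100.png")
```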
Table 1: Equilibrium distances (Å) between the doping atom (Mg or Ba atom in the case of the undoped surface) and the nearest O atoms. O S corresponds to the superficial oxygen atom, O E corresponds to the equatorial oxygen atoms, and O B corresponds to the bottom oxygen atom.
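A minimal sketch of the EDD evaluation defined above, Δρ = ρ(surf+metal) − ρ(surf) − ρ(metal), assuming three single-point VASP runs on identical grids and cells, and using ASE to read the CHGCAR files; the file names are placeholders.

```python
# Sketch: assemble the electron density difference from three CHGCAR
# files computed on the same grid; positive values are density gain.
import numpy as np
from ase.calculators.vasp import VaspChargeDensity

def density(path):
    """Return the charge-density grid (e/A^3) stored in a CHGCAR file."""
    return VaspChargeDensity(path).chg[-1]

rho_full = density("doped/CHGCAR")        # surface + doping metal
rho_surf = density("surf_frozen/CHGCAR")  # surface alone, same geometry
rho_metal = density("metal_only/CHGCAR")  # isolated doping atom, same cell

assert rho_full.shape == rho_surf.shape == rho_metal.shape
edd = rho_full - rho_surf - rho_metal     # >0: density gain, <0: density loss
np.save("edd_grid.npy", edd)
print("max gain %.4f, max loss %.4f (e/A^3)" % (edd.max(), edd.min()))
```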
3,698.8
2016-02-16T00:00:00.000
[ "Materials Science", "Physics" ]
CALHM2 is a mitochondrial protein import channel that regulates fatty acid metabolism For mitochondrial metabolism to occur in the matrix, multiple proteins must be imported across the two (inner and outer) mitochondrial membranes. Classically, two protein import channels, TIM/TOM, are known to perform this function, but whether other protein import channels exist is not known. Here, using super-resolution microscopy, proteomics, and electrophysiological techniques, we identify CALHM2 as the import channel for the ECHA subunit of the mitochondrial trifunctional protein (mTFP), which catalyzes β-oxidation of fatty acids in the mitochondrial matrix. We find that CALHM2 sits specifically at the inner mitochondrial and cristae membranes and is critical for membrane morphology. Depletion of CALHM2 leads to a mislocalization of ECHA outside of the mitochondria, causing severe cellular metabolic defects. These defects include cytosolic accumulation of fatty acids, depletion of tricarboxylic acid cycle enzymes and intermediates, and reduced cellular respiration. Our data identify CALHM2 as an essential protein import channel that is critical for fatty acid- and glucose-dependent aerobic metabolism. Introduction Mitochondria have numerous critical functions in cellular energetics. They are the powerhouses of eukaryotic cells, using energy from the oxidation of nutrients such as fatty acids to regenerate ATP. Mitochondrial function is closely linked to its complex structure. Each mitochondrion has two (outer and inner) membranes, which together partition the organelle into an intermembrane space and a central matrix. The matrix side of the inner membrane is the site of fatty acid oxidation (β-oxidation), the primary metabolic pathway for the conversion of fats into energy 1,2. β-oxidation liberates acetyl-CoA that can then enter the tricarboxylic acid (TCA) cycle. Reduced products made in the TCA cycle are oxidized in the electron transport chain, which results in the translocation of protons across the inner mitochondrial membrane into the intermembrane space, creating an electrochemical gradient. The ATP synthase uses this gradient to generate ATP. Therefore, the inner mitochondrial membrane is critical for energy production 3,4. Multiple β-oxidation enzymes are associated with the inner membrane, including the mitochondrial trifunctional protein (mTFP), which catalyzes three of the four mitochondrial steps of fatty acid oxidation [5][6][7][8]. The mTFP and many other inner membrane and matrix proteins are nuclear encoded and translated in the cytosol; therefore, they must be imported across the two mitochondrial membranes. A canonical import pathway has been well-defined, in which protein precursors with mitochondrial targeting presequences are imported by two ion channels: TOM (the translocase of the outer membrane) and TIM23 (the inner membrane translocase subunit) 9-14. Once in the inner membrane or matrix, a mitochondrial processing peptidase removes the presequences, and chaperones take over to refold the proteins into their three-dimensional structures [15][16][17]. Many of the foundational studies defining TIM/TOM targets were performed in yeast, where β-oxidation of fatty acids occurs in the peroxisome, not the mitochondria 18. This opens the possibility that metazoans may require an evolutionarily divergent system to transport β-oxidation enzymes into the mitochondria.
Here, we show that CALHM2 localizes to the inner mitochondrial membrane and acts as an import channel for the ECHA subunit of the mTFP. CALHM2 resembles a connexin channel that is well conserved across vertebrates but has not been assigned a function 19,20. We now show that CALHM2 can independently translocate ECHA across a lipid bilayer. Loss of CALHM2 results in the mislocalization of ECHA to the cytosol, which leads to an accumulation of cytosolic fatty acids. The loss of CALHM2 and mislocalization of ECHA result in severe metabolic compromise, impairing ATP production and reducing mitotic rates. These findings highlight the indispensable role of CALHM2 in importing ECHA to the matrix and define a pathway, distinct from TIM/TOM, for protein import into the mitochondria. CALHM2 is localized to the inner mitochondrial membrane To ascertain the function of CALHM2, we first sought to determine its localization in the cell. We assessed the endogenous subcellular localization of human CALHM2 in human telomerase reverse transcriptase (hTERT) immortalized human retinal pigment epithelial (RPE) cells by structured-illumination microscopy. We found that CALHM2 co-localized with the mitochondrial-specific dye MitoTracker and with TOM20 (Fig 1a). CALHM2 does not appear to co-localize with the ER marker anti-Sarcoendoplasmic Reticulum Calcium ATPase (SERCA) (Fig 1a). Further analysis of the structured-illumination imaging revealed that CALHM2 is enveloped by the TOM20 staining and overlaps with superoxide dismutase 2 (SOD2), a component of the mitochondrial matrix (Fig 1b). Quantitative co-localization analysis using Pearson's correlation showed a higher correlation between SOD2 and CALHM2 than between CALHM2 and TOM20 (Fig 1b), supporting the view that CALHM2 is at the inner membrane. To confirm that our anti-CALHM2 antibody signal is specific, we generated three independent RPE cell lines for CALHM2 knock-down (KD), targeting two distinct sites of the CALHM2 gene (Lines 1.1 and 1.2 target the same site, while Line 2 targets a second site, Ext Data Fig 1). In these cell lines, the CALHM2 signal is reduced, confirming the specificity of our antibody. Next, we studied the submitochondrial localization of CALHM2. We purified mitochondria from RPE cells and treated them with different concentrations of digitonin to remove the outer mitochondrial membranes. In this assay, proteins of the outer mitochondrial membrane are lost differentially compared to proteins in the inner membrane or matrix. We used the following proteins as markers of different mitochondrial compartments: TOM20 and the mitochondrial calcium uniporter (MCU) for the outer and inner mitochondrial membranes, respectively, and PDH for the matrix. Treatment with digitonin reduced TOM20 levels substantially, indicating removal of the outer membrane, while the reduction in MCU levels was smaller, and PDH was relatively preserved (Fig 1c, d). CALHM2 was well preserved compared to TOM20, suggesting that it is not localized in the outer membrane; CALHM2 levels were more comparable to MCU and PDH. Next, we assessed subcellular localization to mitochondria-associated membranes (MAMs) via biochemical fractionation (Fig 1e, f). CALHM2 and the mitochondrial matrix protein PDH are enriched in the mitochondrial fraction, while long-chain-fatty-acid-CoA ligase 4 (FACL4) is enriched in the MAM fraction (Fig 1e, f). Together, these data allow us to conclude that CALHM2 is localized to the inner mitochondrial membrane.
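The Pearson colocalization metric used above (computed in the paper with Volocity) reduces to a per-pixel correlation between two channels. Below is a minimal numpy sketch with synthetic channels; the signal threshold is an assumption of the sketch, not part of the published analysis.

```python
# Sketch: Pearson correlation between two fluorescence channels, restricted
# to pixels with signal above a simple threshold.
import numpy as np

def pearson_colocalization(ch1, ch2, threshold=0.0):
    ch1 = np.asarray(ch1, dtype=float).ravel()
    ch2 = np.asarray(ch2, dtype=float).ravel()
    mask = (ch1 > threshold) | (ch2 > threshold)   # keep pixels with signal
    x = ch1[mask] - ch1[mask].mean()
    y = ch2[mask] - ch2[mask].mean()
    return (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())

# Synthetic example mirroring the expectation above: CALHM2 correlates
# better with the matrix marker SOD2 than with outer-membrane TOM20.
rng = np.random.default_rng(0)
calhm2 = rng.random((512, 512))
sod2 = 0.8 * calhm2 + 0.2 * rng.random((512, 512))    # strongly co-varying
tom20 = 0.3 * calhm2 + 0.7 * rng.random((512, 512))   # weakly co-varying
print(pearson_colocalization(calhm2, sod2))   # higher coefficient
print(pearson_colocalization(calhm2, tom20))  # lower coefficient
```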
For greater structural resolution, we performed electron and expansion microscopy to reveal whether CALHM2 is localized to the inner boundary membrane, the cristae membranes, or the cristae junctions. Immunogold labeling revealed that CALHM2 is most frequently distributed in cristae membranes (Fig 1g). Using expansion microscopy, we confirmed that CALHM2 is not localized to the plasma membrane and is exclusively in the mitochondria at low magnification (Fig 1h). At high magnification, CALHM2 is found most frequently at cristae and cristae junctions (Fig 1i). CALHM2 binds to the mTFP and regulates mTFP levels To begin elucidating a role for CALHM2 in mitochondria, we expressed a CALHM2-Myc construct in RPE cells, immunoprecipitated the Myc epitope, and then performed liquid chromatography mass spectrometry (LC-MS/MS) to identify associated proteins. Surprisingly, amongst the top binding partners of CALHM2 were ECHA and ECHB, subunits of the mTFP (Fig. 2a, Ext Data Table 1). The mTFP is a hetero-octamer, with two genes, HADHA and HADHB, encoding the α (ECHA) and β (ECHB) subunits of the mTFP, respectively. Together these two subunits of the mTFP perform three consecutive steps in β-oxidation: a 2-enoyl-CoA hydratase activity, an NAD + -dependent 3-hydroxyacyl-CoA dehydrogenase activity, and a CoA-dependent 3-ketothiolase activity. These steps culminate in the production of acetyl-CoA that is fed into the TCA cycle to produce NADH and FADH 2 for oxidation in the electron transport chain. To verify that CALHM2 binds to the mTFP, we performed reciprocal immunoprecipitation and western blotting (Fig 2b). With IP of either CALHM2, ECHA, or ECHB, we could detect each of the other proteins, which were not present in an IgG control. Interestingly, while both subunits of the mTFP and two chaperone proteins (HSP90 and HSP70) were detected in this LC-MS/MS analysis, other proteins of the inner mitochondrial and cristae membranes, such as components of the electron transport chain, were not detected (Extended Data Table 1). Struck by the mitochondrial localization of CALHM2, the association of CALHM2 with both subunits of the mTFP (ECHA, ECHB), and the importance of the mTFP in fatty acid metabolism, we focused our analysis on the relationship between ECHA, ECHB, and CALHM2. Next, we wondered whether CALHM2 regulates mTFP protein levels or function. We isolated mitochondria and examined protein levels in two different CALHM2 KD lines (Fig 2c). Testing the mitochondrial extracts, we noted reduced levels of ECHA and ECHB in both KD cell lines (Fig 2c). To evaluate whether ECHA was mislocalized as opposed to simply reduced in level, we compared the cytosolic fraction to our mitochondrial fraction. Interestingly, ECHA appeared relatively elevated in the cytosolic fraction compared to the mitochondrial fraction in CALHM2 KD cells vs. WT (Figure 2d). This result suggests that CALHM2 may be required for the proper localization of the mTFP to the mitochondria.
To better address the localization of ECHA in response to CALHM2 KD, we returned to our structured illumination immunofluorescence studies. We compared the localization of ECHA to TOM20, present in the mitochondrial outer membrane. In WT cells, ECHA appears to be surrounded by the TOM20 mitochondrial signal (Figure 2e, see merged channel). However, in the CALHM2 KD cell line, ECHA appears to be mislocalized, and the geometry relative to TOM20 is not preserved. In this case, quantitative co-localization analysis using Pearson's correlation showed a higher correlation between TOM20 and ECHA in CALHM2 KD cells compared to WT, indicating that ECHA was mislocalized (Fig 2e). CALHM2 affects mitochondrial cristae and fatty acid levels If CALHM2 is necessary for mTFP mitochondrial localization and function, we expected CALHM2 KD cells to exhibit abnormal fatty acid metabolism, as previously described in mTFP disease and in HADHA mutant and KD cells 21. BODIPY is a commonly used indicator of intracellular neutral lipid levels. We found that CALHM2-depleted cells have an increase in the BODIPY signal (Fig 3a, b), suggesting an accumulation of these lipids in the cytosol. In addition to its role in the mTFP, the α subunit of the mTFP (ECHA) acts as an acyltransferase in cardiolipin remodeling 21. Cardiolipin is an essential diphosphatidylglycerol lipid in the inner mitochondrial membrane that plays a critical role in creating normal cristae structures and in positioning inner membrane transport complexes 22,23. Cardiolipin was reduced in CALHM2-depleted mitochondria compared to wild type (WT) (Fig. 3c), further evidence that mTFP function was impaired. Cardiolipin is specific to the inner membrane, making up 20% of its lipid content. Therefore, a reduction of cardiolipin is likely to affect mitochondrial inner membrane structure. We examined mitochondrial ultrastructure by transmission electron microscopy of CALHM2 KD cells and identified abnormalities in cristae morphology (Fig 3d). In WT cells, the mitochondria are oblong with characteristic cristae formed by the inner membrane. In the majority of CALHM2 KD cells, the number and size of the mitochondria are normal (Fig. 3d and Ext Data Fig 2); however, the mitochondria have a reduced number of cristae, as predicted by the decrease in cardiolipin levels (Fig 3d). Altogether, we conclude that CALHM2 is essential for inner mitochondrial membrane structure and cardiolipin levels, in addition to normal lipid metabolism. Due to the disruption of mitochondrial cristae and a presumed failure to metabolize fatty acids, we predicted that cellular ATP content should be low in CALHM2-depleted cells. We found that ATP levels of CALHM2 KD cells are approximately 50% of the WT levels (Fig 3e). In the context of abnormal lipid metabolism and a drop in ATP levels, we wanted to assess the overall health of these cells via their cell division rate. We counted the number of cells in mitosis per hour. Over a 36 hr window, the number of cell divisions in CALHM2 KD cells was roughly a third of the number in WT cells (Fig 3f), suggesting a global growth defect.
CALHM2 affects cellular respiration The TCA cycle generates energy via the oxidation of acetyl-CoA that can be derived from carbohydrates during glycolysis, fatty acids during β-oxidation, or proteins during amino acid catabolism. We predicted that a loss of CALHM2 would lead to a reduction in fatty acid metabolism as the mTFP no longer localizes to the mitochondrial matrix. When β-oxidation is inhibited, glycolysis may compensate by producing acetyl-CoA from pyruvate (Fig 4, illustration). This process results in acidification of the extracellular medium due to the production of lactate, which can be measured as the extracellular acidification rate (ECAR). To determine how CALHM2 KD cells use glycolysis compared to WT cells, we measured ECAR in the presence and absence of glucose. In the absence of glucose, we observed no difference in the ECAR of WT and CALHM2 KD cells, indicating similar overall rates of glycolysis under these conditions (Fig 4a). Upon the addition of glucose (5 mM, Fig 4b), we observe an equivalent increase in ECAR between the two groups, suggesting glycolysis is not increased in response to the reduction of β-oxidation in CALHM2 KD cells (overlapping red lines in Fig 4b after glucose addition, compared to gray lines, which are reproduced from Fig 4a). Strikingly, the addition of the ATP synthase inhibitor oligomycin to WT cells approximately doubles their ECAR response to glucose, revealing a higher glycolytic capacity. However, in CALHM2 KD cells, exposure to oligomycin does not elevate the acidification rate any further (Fig 4b, split in red lines after oligomycin), showing a diminished response of glycolytic acidification to ATP synthase inhibition. These data suggest that CALHM2 KD cells are either at their maximal glycolytic capacity and cannot respond to the oligomycin-induced loss of ATP production, or that CALHM2 KD cells have a reduced demand for ATP synthesis under these conditions. To understand the mitochondrial respiratory rate and capacity of CALHM2 KD cells, we next assessed the oxygen consumption rate (OCR) in response to various metabolic substrates. In the absence of glucose (2 mM glutamine), baseline and protonophore (FCCP)-stimulated respiratory capacity are decreased by nearly half in CALHM2-depleted cells compared to WT (Fig 4c, e), consistent with a reduction in oxidative capacity. In 5 mM glucose-containing media, both cell lines strongly suppress their respiration. Respiration is almost completely inhibited by glucose in CALHM2 KD cells, compared to a less severe reduction in WT cells (Fig 4d, e, compare red lines to gray). This reduction in OCR shows that both WT and CALHM2 KD cells shift toward glycolytic ATP production in the presence of glucose; however, the relative difference between them indicates that respiratory capacity is markedly lower in CALHM2 KD cells.
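The ECAR/OCR traces above reduce to a handful of summary metrics (basal respiration, FCCP-stimulated maximum, spare capacity, glycolytic reserve). The sketch below shows one common reduction; the injection indices and numbers are illustrative placeholders, not the study's data.

```python
# Sketch: summary metrics from per-cycle respirometry measurements.
import numpy as np

def summarize(ocr, ecar, i_oligo, i_fccp, i_rot):
    """ocr/ecar: per-cycle readings; i_*: first cycle after each injection."""
    nonmito = np.mean(ocr[i_rot:])                 # rotenone/antimycin A floor
    basal = np.mean(ocr[:i_oligo]) - nonmito
    maximal = np.max(ocr[i_fccp:i_rot]) - nonmito  # FCCP-stimulated maximum
    spare = maximal - basal
    glyco_reserve = np.max(ecar[i_oligo:i_fccp]) - np.mean(ecar[:i_oligo])
    return dict(basal=basal, maximal=maximal, spare=spare,
                glycolytic_reserve=glyco_reserve)

ocr = np.array([80, 82, 81, 40, 38, 150, 148, 140, 12, 10])  # pmol O2/min
ecar = np.array([20, 21, 20, 45, 46, 44, 43, 42, 30, 29])    # mpH/min
print(summarize(ocr, ecar, i_oligo=3, i_fccp=5, i_rot=8))
```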
Since CALHM2 KD cells seem to be at their maximal glycolytic capacity in 5 mM glucose, we sought to bypass glycolysis by providing either pyruvate or lactate (in no glucose) to determine whether respiration could be rescued (Fig 4, illustration). The acute response and maximal capacity of WT cells to lactate are enhanced over CALHM2 KD (Ext Data Fig 3a-c). This suggests that either LDH or redox shuttling limits the use of lactate for acetyl-CoA production in CALHM2 KD cells. Interestingly, upon the addition of pyruvate, CALHM2 KD cells increase respiration comparably to WT cells but do not reach WT levels (Fig 4e, f, comparing dotted red to solid red lines after pyruvate). Additionally, the maximal respiratory rate (FCCP response) is still lower in CALHM2 KD cells, consistent with a diminished respiratory capacity (Fig 4e). In summary, while CALHM2 KD cells can respond to pyruvate, they still have a diminished respiratory capacity, suggesting that downstream metabolism, such as the TCA cycle or electron transport, may be affected. PDH is localized to the mitochondrial matrix and links glycolysis to the TCA cycle by converting pyruvate into acetyl-CoA (Fig 4, illustration). As our data indicate that CALHM2 KD cells have a disrupted oxidative response to endogenous glycolytic products, we sought to characterize the levels and activity of PDH and essential TCA enzymes. To address step-by-step defects in glycolytic and mitochondrial metabolism, including PDH activity, we performed Mass Isotopomer MultiOrdinate Spectral Analysis (MIMOSA) on WT and CALHM2 KD cells. This technique uses the incorporation of [U-13 C 6 ]-D-glucose in place of the unlabeled form for analysis of labeled products by LC-MS/MS 24. The relative rates of production of matrix acetyl-CoA from pyruvate versus other sources, such as β-oxidation, are determined from the fractional contribution of pyruvate oxidation by the mitochondria (V PDH /V CS ). PDH activity is significantly reduced in CALHM2 KD cells (Fig 4g) compared to WT. In summary, CALHM2 KD cells are unable to efficiently utilize endogenous lactate and pyruvate to drive the TCA cycle and the ETC. Interestingly, we expected to see a defect primarily in β-oxidation of fatty acids but not in glycolysis/TCA/ETC; however, our data highlight that CALHM2 is essential for these other metabolic processes. CALHM2 is a protein import channel for ECHA Our results thus far suggest that CALHM2 is localized to the mitochondria, regulates mTFP localization, and is essential for normal cristae structure, cardiolipin and fatty acid levels, and normal cellular respiration. Based on cryo-EM, CALHM2 is a connexin-like transmembrane channel 19,20, and given the mislocalization of ECHA in our IF studies, we speculated that CALHM2 imports the mTFP to the matrix side of the inner membrane (Fig 5a). To investigate the ion channel's biophysical properties, we carried out proteoliposome and planar lipid bilayer recordings of purified, reconstituted CALHM2. Previous whole-cell electrophysiology of overexpressed human CALHM2 showed that CALHM2 produces a robust voltage-dependent current in the absence of Ca 2+ and is Ca 2+ inhibited 19. In keeping with this previous report, our single channel recordings demonstrate that CALHM2 forms a large-conductance voltage-gated channel with multiple sub-conductance states and a peak conductance value of ~1 nanoSiemens (nS) (Fig. 5b, far left CTL before ECHA, and Ext Data Fig.
4). Similarly to the whole-cell currents reported 19, CALHM2 forms a negatively rectifying channel that is inhibited by the addition of calcium to the bath during the recordings (Fig 5b). Next, to test whether CALHM2 might be an import channel, we designed ECHA and ECHB N-terminal peptides and added them to the recording chamber during electrophysiological measurements. When the N-terminal ends of transiting mitochondrial proteins pass through import channels, they inhibit channel activity in a concentration-dependent manner 13,25,26 (Fig. 5a). Consistent with this notion, in patch-clamp recordings of proteoliposomes, we observed an inhibition of CALHM2 channel activity upon the addition of both the ECHA and ECHB peptides, whereas a control peptide (N terminus of COXIV) had no effect (Fig. 5b and Ext Data Fig. 4a-d). In contrast to ECHA, in a representative recording with ECHB, the first addition of peptide fails to inhibit conductance, but instead increases the frequency of transitions between subconductance states, suggesting an interaction of ECHB with the channel but a failure to completely inhibit conductance (Fig 5e, top panel, and 5f, green to purple). While electrophysiological recordings show an inhibition of CALHM2 channel activity with either peptide in a concentration-dependent manner, we could not distinguish between channel transit and simple inhibition (Fig 5j). Therefore, we used a low concentration (2.5 µg) of ECHA, at which there was no discernible block of CALHM2. At higher concentrations (5 µg), the channel was partially inhibited (Ext Data Fig 4i). To test for transit of the peptide through the channel, we added the low concentration (2.5 µg) of ECHA to only the cis side of the lipid bilayer chamber, recorded channel activity, and then removed the solution from the trans side for MALDI-TOF analysis (Fig 5h). The MALDI-TOF trace confirmed the transit of the ECHA peptide from the cis to the trans side (Fig 5h, j). We did not detect the ECHB peptide on the trans side when we repeated this experiment with ECHB on the cis side (Fig 5i, Ext Data Fig 4j). This result was surprising, as we had shown that ECHB directly interacts with CALHM2 in the IP assay and can inhibit channel activity (albeit less efficiently) in a concentration-dependent manner. Because of the differential transit of ECHA and ECHB through the channel, we studied the charge distribution of the ECHA and ECHB presequence peptides. The alignment of the amino acid sequences of ECHA and ECHB revealed that the former has more negatively charged amino acid residues and is more linear in structure, which could explain the differences in the interaction of these peptides with CALHM2 (Ext Data Fig. 4k). Discussion CALHM2's double-barrel structure is striking and opened a door to identifying a cellular role for this protein. When expressed heterologously in cells, previous reports described CALHM2's electrophysiological properties, but these studies gave no deeper understanding of its cellular function. We have now discovered that CALHM2 resides on the matrix side of the inner mitochondrial membrane and is necessary for the import of a mitochondrial enzyme to the matrix. Although unexpected, this role is nevertheless consistent with its established structure. CALHM2 exemplifies a system for protein import into mitochondria that is divergent from the canonical TIM/TOM pathway. The electrophysiological properties of CALHM2 may be predictive of its function in mitochondrial protein import.
Our reconstituted proteoliposome and lipid bilayer single channel studies confirm that CALHM2 is inhibited by Ca 2+. It is likely that CALHM2 is mostly in the closed state in vivo, since a frequently open pore in the inner mitochondrial membrane would abolish the proton gradient generated by the electron transport chain. The ECHA presequence is negatively charged, linear, and inhibits channel conductance (suggesting that it enters the pore), and the protein may reside in the pore during cell life, preventing a large leak from forming in the mitochondrial inner membrane. Importantly, our results also demonstrate that the ECHA peptide does in fact transit through the CALHM2 channel, based on mass spectrometry on the trans side of the planar lipid bilayers. Once through, enzymes must chaperone the protein into its three-dimensional structure. Consistent with this idea, our co-IP mass spectrometry data identify two chaperones (HSP90 and HSP70), which further supports the function of CALHM2 as a protein import channel. The mTFP is composed of two subunits, ECHA and ECHB. Interestingly, we find that the ECHA and ECHB peptides interact differently with the CALHM2 channel. This finding is perhaps predicted by the decreased number and alternative positioning of charged amino acid residues and the non-linear structure of ECHB. The differential structure of the ECHB peptide may make it more difficult for this protein to inhibit the channel conductance. Indeed, in the presence of the ECHB peptide, we observed opening of the channel upon application of an increased voltage difference across the membrane, clearly differentiating the efficiency of ECHA and ECHB in their effects on the channel. One intriguing possibility is that ECHB may help open the channel for ECHA transport. Future experiments will further clarify the role of ECHB in protein import to the matrix. Mutations in either ECHA or ECHB result in the metabolic diseases of mTFP deficiency 21,[27][28][29], and defects of ECHA result in long-chain 3-hydroxyacyl-CoA dehydrogenase deficiency (LCHAD) 30,31. The activities of ECHA and ECHB are critical for survival, and the disruption of their functions can lead to sudden death and severe cardiomyopathy 21,32. However, a detailed analysis of the metabolic state of patients with disrupted mTFP function has been lacking.
We comprehensively analyzed the metabolic defects in CALHM2-depleted cells. We find that CALHM2 depletion results in mTFP deficiency, with phenotypes that highlight the critical functions of ECHA: β-oxidation and cardiolipin-dependent maintenance of mitochondrial inner membrane structure 21. We find that loss of ECHA in the matrix alters the amount of cardiolipin, and because cardiolipin is an essential component of mitochondrial cristae, this leads to abnormal cristae morphologies in CALHM2-depleted cells. Regarding energy metabolism, CALHM2-depleted cells have a dramatic reduction in β-oxidation, manifested as lipid accumulation in the cytosol, as expected from disruption of ECHA localization. Unexpectedly, however, glucose-dependent TCA cycle activation is also disrupted, with the loss of multiple TCA cycle enzymes. The resulting metabolic deficiencies include reductions in glycolytic capacity and in TCA cycle and ETC components and activity. These metabolic changes result in reduced respiration and impaired cell mitosis. Interestingly, although cardiolipin is reduced with alterations in mitochondrial cristae, the impact on the ETC proteins, which reside in these cristae, seems less dramatic than on the TCA cycle enzymes, which reside in the matrix. An exciting avenue for future study is the possible role of CALHM2 in the import of matrix enzymes such as those comprising the TCA cycle. Finally, we originally identified CALHM2 in a patient with congenital heart disease and heterotaxy, suggesting it could have a role in embryonic patterning 33. To this end, we previously showed that mitochondrial metabolism plays a significant role in establishing the vertebrate body plan 34. This suggests that mitochondrial metabolism may have evolved to exploit bioenergetics to support multicellularity. Many of the foundational studies describing mitochondrial protein import have exploited the advantages of the eukaryotic system S. cerevisiae 9,[35][36][37][38][39][40]. However, fatty acid oxidation in metazoans evolved differently. Yeast perform fatty acid oxidation in peroxisomes, whereas metazoans perform fatty acid oxidation in the mitochondrial matrix, where it is tightly coupled to respiration. Therefore, unlike in yeast, proteins necessary for fatty acid oxidation in multicellular organisms must be imported across the two mitochondrial membranes to enter the matrix. We propose CALHM2 as this evolutionary innovation in protein import. Western Blots Protein concentrations were quantified using the DC Protein Assay (Bio-Rad). Western blotting was performed using standard protocols, and 40 μg of protein was loaded onto a polyacrylamide gel. GAPDH was used as a loading control for whole cell extracts. For mitochondrial purifications, VDAC was used as a loading control. For protein detection, we used anti-mouse or anti-rabbit HRP-conjugated secondary antibodies (Jackson ImmunoResearch Laboratories) and Western Lightning Plus ECL (Perkin Elmer). Oligos for these sequences were annealed and ligated into the LentiCRISPRv2 plasmid that had been cut with BsmBI, as described 41. Lentivirus was produced with these plasmids as recommended on the Addgene website and used to infect RPE cells. Forty-eight hours after infection, RPE cells were selected with 10 µg/mL puromycin. Puromycin-resistant cells were replated and used to isolate single clones by serial dilution in 96-well plates.
Structured Illumination Microscopy (SIM) Images were acquired using a U-PLANAPO 60X/1.42 PSF oil immersion objective lens (Olympus, Center Valley, PA) and CoolSNAP HQ2 CCD cameras with a pixel size of 0.080 µm (Photometrics, Tucson, AZ) on the OMX version 3 system (Applied Precision) equipped with 488-, 561-, and 642-nm solid-state lasers (Coherent and MPB Communications). Samples were illuminated by a coherent scrambled laser light source that had passed through a diffraction grating to generate the structured illumination by interference of light orders in the image plane, creating a 3D sinusoidal pattern with lateral stripes approximately 0.270 µm apart. The pattern was shifted laterally through five phases and through three angular rotations of 60° for each Z-section, separated by 0.125 µm. Exposure times were typically between 75 and 150 ms, and the power of each laser was adjusted to achieve optimal intensities of between 1,000 and 3,000 counts in a raw image of 12-bit dynamic range, at the lowest possible laser power to minimize photobleaching. Raw images were processed and reconstructed using Softworx software (GE Healthcare) to reveal structures with 100-125 nm resolution 42. The channels were then aligned in x, y, and rotationally using predetermined shifts measured using a target lens and the Softworx alignment tool. Expansion microscopy (pan-ExM) Tissue expansion was performed as previously described (Panluminate, Inc.) 43. Briefly, kidney cells were incubated in a solution of acrylamide and formaldehyde. After fixation, the cells were embedded in the expansion gel solution and then placed in MilliQ water for expansion. Gels were then re-embedded and the process was repeated until the desired expansion was achieved, at which point antibody labeling and pan-staining were performed as described 43. Mitochondria and mitochondria-associated ER membranes (MAMs) MAMs and mitochondria were isolated from kidneys of adult C57BL/6 mice as previously described 44. For outer mitochondrial membrane solubilization, the isolated mitochondria were resuspended in PBS supplemented with 250 mM mannitol, with or without digitonin (2 and 4 mg/ml). Samples were vortexed in a multi-vortexer at room temperature for 15 minutes and centrifuged at 10,000 g for 10 minutes. The pellet of mitochondria was resuspended in loading buffer for western blot analysis. Digitonin was prepared as a 40 mg/ml stock solution in water. Isolation of mitochondria from RPE cells Mitochondria were isolated from RPE wild-type and CALHM2 KD cells. In brief, cells were transferred to ice-cold isolation buffer (250 mM sucrose, 20 mM HEPES (pH 7.2), 1 mM EDTA, and 0.5% BSA) supplemented with 1x Halt protease inhibitor. Cells were minced, homogenized with a Dounce homogenizer, and centrifuged at 1000 × g to pellet nuclei, cell debris, and unbroken cells. The supernatant was centrifuged at high speed (6000 × g for 15 min at 4 °C); the pellet containing mitochondria was washed in isolation buffer and pelleted by centrifugation at 6000 × g. The isolated mitochondria were kept on ice and used within 4 h. Protein concentration was determined by the BCA method using bovine serum albumin (BSA) as a standard.
Immunoprecipitation of human CALHM2 protein for mass spectrometry A human CALHM2 Myc-DDK-tagged ORF clone (OriGene Technologies, RC200512) was expressed in HEK 293T cells, and overexpression was verified by western blot analysis using a mouse anti-Myc antibody (Cell Signaling Technology). Mitochondria overexpressing the CALHM2-Myc-DDK protein were isolated from HEK cells and solubilized with 1 mM n-dodecyl maltoside for 30 min on ice. The solubilized mitochondria were centrifuged for 5 min at 16,000 × g to remove any remaining membrane fragments. To immunoprecipitate Myc-DDK-tagged CALHM2, EZview Red ANTI-FLAG M2 Affinity Gel (Sigma) was added at a dilution of 1:100 and incubated with gentle agitation for at least 2 hr at room temperature or overnight in the cold room. The beads were then washed twice before being treated with Myc peptide to elute Myc-DDK-tagged CALHM2. The eluate fraction was analyzed by LC-MS mass spectrometry to identify the proteins interacting with CALHM2. Another Myc-DDK-tagged protein (ATP synthase c-subunit) was used as a negative control to rule out the possibility of non-specific binding of identified proteins to the Myc-DDK tag. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD010387 45. To reduce non-specific binding, Protein A beads were added to the sample and incubated overnight at 4 °C. The Protein A beads were then magnetized to isolate the supernatant. The supernatant was incubated with primary antibody overnight at 4 °C. New Protein A beads were added and then magnetized. Beads were washed 3 times in PBS. Then 2x SDS loading dye was added, and the sample was heated to 95 °C for five minutes. Samples were then analyzed by western blot. Electron Microscopy Cells were fixed in 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.4, for 1 hr. Buffer-rinsed cells were scraped in 1% gelatin and spun down in 2% agar. Chilled blocks were trimmed and postfixed in 1% osmium tetroxide for 1 hr. The samples were rinsed three times in sodium cacodylate rinse buffer and postfixed in 1% osmium tetroxide for 1 hr. Samples were then rinsed and en bloc stained in aqueous 2% uranyl acetate for 1 hr, followed by rinsing, dehydration in an ethanol series, infiltration with Embed 812 (Electron Microscopy Sciences) resin, and baking overnight at 60 °C. Hardened blocks were cut using a Leica UltraCut UC7. Sections (60 nm) were collected on formvar/carbon-coated nickel grids and contrast stained with 2% uranyl acetate and lead citrate. They were viewed using an FEI Tecnai Biotwin TEM at 80 kV. Images were taken on a Morada CCD using iTEM (Olympus) software. Neutral lipid staining and measurement Neutral lipid staining was performed using BODIPY (Invitrogen). Cells were washed 3 times in PBS and then incubated in a solution of BODIPY in DMSO diluted 1:200 in PBS. Cells were then fixed in 4% PFA/PBS and observed by confocal microscopy to monitor fluorescence levels at Ex/Em = 489/503 nm. The fluorescence intensities of the stained cells were quantified using ImageJ.
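As a rough illustration of the ImageJ quantification above, the following numpy sketch applies background subtraction and thresholding to a BODIPY image; the modal-background estimate and the threshold factor are assumptions of the sketch, not the published pipeline.

```python
# Sketch: integrated BODIPY signal and stained-area fraction after
# background subtraction, on synthetic WT-like and KD-like images.
import numpy as np

def lipid_signal(image, threshold_sd=2.0):
    img = np.asarray(image, dtype=float)
    counts, edges = np.histogram(img, bins=256)
    background = edges[np.argmax(counts)]         # modal (background) intensity
    corrected = np.clip(img - background, 0, None)
    mask = corrected > threshold_sd * img.std()   # pixels with lipid signal
    return corrected[mask].sum(), mask.mean()     # integrated signal, area fraction

rng = np.random.default_rng(1)
wt = rng.normal(100, 10, (256, 256)); wt[60:90, 60:90] += 80     # few droplets
kd = rng.normal(100, 10, (256, 256)); kd[40:160, 40:160] += 80   # many droplets
print("WT integrated signal, area fraction:", lipid_signal(wt))
print("KD integrated signal, area fraction:", lipid_signal(kd))
```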
Measuring Cardiolipin Content Mitochondria were isolated as previously described. The cardiolipin content of mitochondria was quantified using the Cardiolipin Assay Kit (BioVision #K944). Briefly, 20 µg of mitochondria was added to a 96-well white plate, and a total volume of 50 μl was reached in the well with cardiolipin buffer and probe. A background control was prepared using 100 µl of cardiolipin buffer. The samples were incubated for 10 minutes at room temperature. Cardiolipin content was measured at Ex/Em 340/480 nm using a Victor 3 plate reader (Perkin Elmer) and correlated to a standard curve with known amounts of cardiolipin after subtraction of background. Determination of mitochondrial ATP content A luminescent ATP detection kit (Sigma-Aldrich) was used for determination of ATP levels in wild-type and CALHM2 knock-down mitochondria. The working reagent lyses mitochondria to release ATP, which then interacts with added firefly luciferase and luciferin to produce light. The light intensity is a direct measure of the mitochondrial ATP content. Luminescence was measured with a Victor 3 plate reader (Perkin Elmer). Cell Growth Assay RPE wild-type and CALHM2 knock-down cells were plated at a concentration of 35,000 cells/mL into 6-well plates and incubated overnight at 37 °C. The next day the cells were washed with fresh complete media and placed into an EVOS FL Auto 2 equipped with an onstage incubator set to 37 °C and 5% CO 2 (Thermo Fisher). Nine different fields for wild-type and CALHM2 KD wells were imaged every 20 min with brightfield phase contrast using a 20X Olympus objective for a total of 36 hr. The time-lapse data for each cell line were imported into Imaris software 10.1 (Oxford Instruments) for cell segmentation to identify and quantify the number of mitotic cells per hour during the 36 hr period. Agilent XF96 Pro Respirometry 50,000 RPE cells per well were plated in an Agilent XF96 cell culture plate 24 hours prior to the respirometry measurements on an Agilent Technologies XF96 Pro Analyzer. One hour before the study, RPE cells were washed and incubated at 37 °C in DMEM (Sigma D5030) media supplemented with 2.0 mM glutamine, 10 mM HEPES, and 0.2% fatty acid free BSA, pH 7.4. Oxygen consumption rates (OCR) and extracellular acidification rates (ECAR) were measured in accordance with the manufacturer's instructions unless otherwise indicated (Agilent Technologies, formerly Seahorse Bioscience). Basal oxygen consumption measurements (8 cycles) were made, followed by an acute injection of either 5 mM glucose or 5 mM pyruvate compared to a 1x assay media control injection (12 cycles). After the acute respiration recordings, mitochondrial oxidative function and acidification were first tested with the addition of oligomycin A (5 µM) [Sigma], an ATP synthase inhibitor. To induce maximal respiration, the proton uncoupler trifluoromethoxy carbonylcyanide phenylhydrazone (FCCP, 20 µM) [Sigma] was injected. Finally, a mixture of antimycin A [Sigma] (10 µM) and rotenone [Sigma] (5 µM), inhibitors of complex III and complex I, respectively, was injected to shut down electron transport and assess non-mitochondrial oxygen consumption. Each respirometry well was normalized using Hoechst 33342 stain [Life Technologies] using the XF Pro and Cyt5 imaging integrated system (Agilent Technologies, formerly BioTek).
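Both the cardiolipin and ATP assays above read a plate signal against a standard curve after background subtraction. A minimal sketch of that step follows, with made-up standard amounts and readings.

```python
# Sketch: fit a linear standard curve and invert it for unknown samples.
import numpy as np

std_amount = np.array([0.0, 0.5, 1.0, 2.0, 4.0])     # nmol cardiolipin (example)
std_signal = np.array([210, 480, 760, 1290, 2370])   # RFU at Ex/Em 340/480 (example)
background = std_signal[0]                           # zero-analyte well

slope, intercept = np.polyfit(std_amount, std_signal - background, 1)

def quantify(sample_signal):
    """Invert the linear standard curve for a background-subtracted reading."""
    return (sample_signal - background - intercept) / slope

print("%.2f nmol in sample well" % quantify(900))
```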
MIMOSA - LC-MS/MS Analysis Metabolite concentrations and 13 C-enrichments were determined by mass spectrometry using a SCIEX 5500 QTRAP equipped with a SelexION for differential mobility separation (DMS). Samples were injected onto a Hypercarb column (3 μm particle size, 3 x 150 mm, Thermo Fisher Scientific) at a flow rate of 1 mL/min. Metabolites were eluted with a combination of aqueous (A: 15 mM ammonium formate and 10 μM EDTA) and organic (B: 60% acetonitrile, 35% isopropanol, and 15 mM ammonium formate) mobile phases according to the following gradient: t = 0 min, B = 0%; t = 0.5 min, B = 0%; t = 1 min, B = 40%; t = 1.5 min, B = 40%; t = 2 min, B = 0%; t = 6 min, B = 0%. Metabolite detection was based on multiple reaction monitoring (MRM) in negative mode using the following source parameters: CUR: 30, CAD: high, IS: -1500, TEM: 625, GS1: 50, and GS2: 55. DMS parameters were DT: low, MD: 2-propanol, MDC: low, DMO: 3, and DR: off, while the Separation Voltage (SV) and Compensation Voltage (CoV) were optimized individually for each metabolite in order to maximize signal intensity and isobar resolution. The individual MRM transition pairs (Q 1 /Q 3 ) are listed in Table S1. Retention times were confirmed with known standards, and peaks were integrated using El-Maven. The atom percent excess (APE) was calculated using the Polly interface and corrected for background noise and for natural abundance (Elucidata Corporation). Endogenous taurine, an intracellular osmolyte, was used as an internal control for cell density, as previously described 46. Steady-State Flux Ratio V PDH /V CS was calculated according to equation 1. Acetyl-CoA and oxaloacetate enrichments were calculated based on the deconvolution of citrate mass isotopomers applied to MRM Q 1 /Q 3 = 191/111 24. The sources of [M+3] malate were distinguished based on the enrichment ratio between succinate and malate and validated by comparison with oxaloacetate enrichments 24. Purification of human CALHM2 protein for electrophysiology Myc-DDK-tagged CALHM2 plasmid was overexpressed in HEK 293T cells for 48 hours, followed by mitochondria isolation. The n-dodecyl-β-D-maltoside (DDM)-solubilized mitochondrial lysate was incubated with EZview Red ANTI-FLAG M2 Affinity Gel beads for 2 h, then the beads were washed extensively to remove non-specifically bound proteins. Myc peptide was used to elute the Myc-DDK-tagged CALHM2 protein. Patch clamp and planar bilayer electrophysiology Recordings of CALHM2-reconstituted proteoliposomes were performed by forming a giga-ohm seal in intracellular solution (10 mM HEPES, pH 7.3, 120 mM KCl, 8 mM NaCl, 0.5 mM EGTA) using an Axopatch 200B amplifier (Axon Instruments) at room temperature (22-25 °C). Recording electrodes were pulled from borosilicate glass capillaries (WPI) with a final resistance in the range of ~50 MΩ. Signals were filtered at 5 kHz using the amplifier circuitry. Proteoliposomes were prepared according to published protocols 47,48. Briefly, 50 mg of phosphatidylcholine (Sigma) was dissolved in 1 mL of chloroform. A thin lipid film was formed on a glass surface by evaporating the chloroform. Liposomes were formed by reconstitution of the lipid in rehydration buffer containing 250 mM KCl, 5 mM HEPES, and 0.1 mM EDTA. Then, 20 μg of recombinant CALHM2 protein was added to 100 μL of the liposome mixture (∼2 mg of lipids, final), and the samples were vortexed twice. Ca 2+ (7.5 mM, final concentration) and ECHA or ECHB peptides (5 µg, final concentration) were added into the bath during the recordings without perfusion.
Planar lipid bilayer recordings were performed in intracellular solution using α-L-phosphatidylcholine (Sigma) to form the bilayer membrane. An ePatch amplifier (Elements) was used for lipid bilayer recordings. ECHA or ECHB peptides (5 µg, final concentration) were added to the cis side of the cuvette during the recordings without perfusion. Purified CALHM2 was added on the cis side, and a constant voltage was applied to achieve protein insertion into the bilayer. For proteoliposome and planar lipid bilayer recordings, to assess the channel activity at negative and positive voltages, a voltage ramp was performed in which the voltage was changed from -100 to +100 mV within 60 seconds. For data analysis, Clampfit software (Molecular Devices) was used. The measured current was adjusted for the holding voltage assuming a linear current-voltage relationship. The conductance (G), expressed in pS, follows the equation G = I/V, where I is the peak membrane current in pA and V is the membrane holding voltage in mV. Group data were quantified in terms of peak conductance and probability of channel opening, where NPo is the number of open channels ("level" in pCLAMP) times the probability of channel opening at each level. All population data are expressed as mean ± SEM. For MALDI-TOF analysis, the matrix was mixed with the solution from the trans side of the bilayer cuvette (4:1), and the sample (2 µl) was deposited on a 384 polished steel sample plate. A MALDI 20000 method enabling detection of peptides with sizes up to 20,000 Da was used. Structural Analysis The three-dimensional models of the ECHA and ECHB N-terminal signal peptides were generated by AlphaFold 49. The electrostatic potential maps of the peptides were generated using APBS-PDB2PQR software. Statistical Analysis All experiments were conducted with a minimum of three replicates, and the numbers reported in the graphs reflect data from multiple experimental runs. Data were tested for normality using the Shapiro-Wilk test where sample sizes permitted, which informed the selection of appropriate parametric or nonparametric statistical methods. For normally distributed data, results are expressed as means ± SEM. Statistical comparisons between two experimental groups were performed using a two-tailed t-test. For comparisons involving multiple groups, one-way analysis of variance (ANOVA) was utilized, with Fisher's post hoc test applied to account for multiple comparisons. The homogeneity of variances across groups was assessed using Levene's test. When Levene's test indicated equal variances, a standard t-test was employed. In cases where Levene's test revealed unequal variances, Welch's t-test was used instead. For data that did not follow a normal distribution, the Mann-Whitney U test was applied. Post hoc comparisons were adjusted for unequal variances using Dunn's test with Bonferroni corrections. Statistical significance was defined as p < 0.05 in all figures. For co-localization experiments using immunofluorescence, Pearson correlation coefficients of the different channels were calculated using Volocity 6.3 software (Perkin Elmer).
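The conductance and NPo definitions above (G = I/V, with NPo as the level-weighted open probability) translate directly into code. In the minimal sketch below, idealizing the trace into discrete levels by rounding to a unitary current is a simplifying assumption; real analysis (as in Clampfit) uses event detection.

```python
# Sketch: peak conductance and NPo from a single-channel current trace.
import numpy as np

def conductance_pS(current_pA, voltage_mV):
    """G = I/V; pA/mV gives nS, so multiply by 1000 for pS."""
    return 1000.0 * current_pA / voltage_mV

def npo(trace_pA, unitary_pA):
    """NPo: mean occupied level per sample (0 = closed, 1..k = open levels)."""
    levels = np.clip(np.round(trace_pA / unitary_pA), 0, None)
    return levels.mean()

v = -50.0                                                   # holding potential (mV)
trace = np.array([0, -25, -25, -50, -50, -50, 0, -25, -50, -75], float)  # pA
print("peak G: %.0f pS" % conductance_pS(trace.min(), v))   # 1500 pS
print("NPo: %.2f" % npo(trace, unitary_pA=-25.0))           # 1.40
```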
Indeed, PDH protein levels are significantly reduced in CALHM2 KD cells (Fig 4h). We next measured TCA cycle intermediates by LC-MS/MS and found that CALHM2 KD cells have a dramatic decrease in the concentrations of all the measured metabolites (citrate, succinate, malate) (Fig 4i-k). These data suggest that there may be insufficient anaplerosis to maintain the TCA cycle metabolite pool. The related enzymes of the TCA cycle, citrate synthase, succinate dehydrogenase A (SDHA), and malate dehydrogenase (MDH2), are also reduced (Fig 4l-n). Finally, we examined components of the electron transport chain to address the last steps of oxidative metabolism in the mitochondria. Complex I, II, III, and V levels are decreased in CALHM2-depleted cells, although more markedly for CI and CII (Ext Data Fig 3d, e), supporting the conclusion that TCA cycle and ETC enzymes are downregulated. In a representative trace (Fig 5c), the first addition of the ECHA peptide significantly reduces the peak conductance of the channel and the number of subconductance states (Fig 5c; C is closed, O1-O3 are smaller, less open, subconductance states; see amplitude histogram in 5d, green to blue peaks). The second addition of the peptide completely inhibits channel conductance (Fig 5c, d, right end of trace, loss of O1-O6; amplitude histogram, blue to red peaks). The group data confirm that ECHA reduces CALHM2 conductance in a concentration-dependent manner (Ext Data Fig 4e, f). In bilayer recordings, we failed to observe CALHM2 channel activity at positive voltages, consistent with the published report on whole-cell currents (Ext Data Fig 4f, h). Further addition of ECHB reduces the probability of channel opening and peak conductance (Fig 5e, top panel, and 5f; Ext Data Fig 4g, h). To study whether the interaction between CALHM2 and the ECHA and ECHB peptides is voltage-dependent, we changed the voltage from -20 mV to -50 mV (compare top and bottom in Fig 5e and amplitude histograms in 5f, g). The voltage change reopens the channel, although at a subconductance state. Subsequent additions of the ECHA peptide completely inhibit channel conductance in a dose-dependent manner (Fig 5e, bottom panel, and 5g). These results indicate that ECHA is more efficient than ECHB at inhibiting CALHM2 conductance at the concentrations tested in our assay. Figure 4: i) Total citrate relative to taurine concentration in control and CALHM2 KD cells; ****p<0.0001. j) Total succinate relative to taurine concentration in control and CALHM2 KD cells; **p≤0.01. k) Total malate concentration relative to taurine in control and CALHM2 KD cells; **p≤0.01. l) Representative immunoblot showing reduced levels of citrate synthase protein in whole cell lysates (β-tubulin serves as control for protein loading). m) Representative immunoblot showing reduced levels of SDHA protein in whole cell lysates (β-tubulin serves as control for protein loading). n) Representative immunoblot showing reduced levels of MDH2 protein in whole cell lysates (β-tubulin serves as control for protein loading).
9,998.8
2024-09-13T00:00:00.000
[ "Biology", "Medicine" ]
Comparative Study on Power Gating Techniques for Lower Power Delay Product, Smaller Power Loss, Faster Wakeup Time

Power gating is one of the most popular leakage-reduction techniques. We compare various power gating schemes in terms of power-delay product, energy loss, and wake-up time using the 45-nm Predictive Technology Model. In conclusion, the Dual-Switch Power Gating (DSPG) shows a lower power-delay product, smaller energy loss, and faster wake-up time than the other power gating schemes, such as the Single-Switch and Charge-Recycled Power Gating schemes. Based on these advantages, the DSPG is suggested in this paper as a viable candidate for a fine-grain leakage control scheme, where logic blocks transition between the active and sleep modes very frequently and for short durations.

Introduction
As CMOS feature sizes continue to be scaled down, transistor density increases and power consumption becomes a very important constraint in very-large-scale integration (VLSI) design. Power dissipation comes from two sources: static power and dynamic power. Dynamic power is dissipated when the system is in active mode; the static component is the power dissipated when no signals are changing value. The dynamic power consists of switching power and short-circuit power. Switching power is caused by the charging and discharging of load capacitance. Short-circuit power is dissipated during switching, when a momentary conduction path exists between the supply and ground. The main sources of static power are sub-threshold leakage, gate leakage, gate-induced drain leakage, oxide tunneling, and junction leakage [1]. As device scaling goes on, these leakage currents keep increasing and can amount to as much as a third of total power [2]. Leakage current is particularly important in mobile devices, where battery lifetime is determined by the leakage during sleep time. To mitigate leakage current, a number of low-leakage techniques have been developed over the years [3][4][5][6]. Among them, power gating techniques have been used widely, where the leakage current can be cut off by a PMOS header or an NMOS footer with a high threshold voltage [7]. At the wake-up moment, the header or footer that was off is turned on, and the logic block powered by it goes from sleep mode to active mode. New power gating schemes with a charge-recycling technique have been introduced [8][9], in which the switching energy that would otherwise be lost in turning the power switches on and off is lowered. This energy saving comes from the charge sharing that happens between the virtual V DD and V SS lines at both the sleep-in and wake-up moments. Here, the virtual V DD and V SS lines are connected to the real power supply and ground through the PMOS switch and NMOS switch, respectively. When the sleep time is very short, however, the charge-recycled power gating can lose more energy than conventional power gating schemes without charge sharing. Moreover, the charge-recycled power gating needs extra time to equalize its virtual V DD and V SS lines; thereby, its wake-up time is longer. This large energy loss and slow wake-up may prevent the charge-recycled power gating from being used, particularly in a fine-grain leakage control scheme, where the logic blocks transition between the active and sleep modes very frequently and for short durations. Thus, the sleep time of a fine-grain leakage control scheme is likely much shorter than that of a coarse-grain leakage suppression scheme [10][11][12].
To be useful in a fine-grain leakage reduction scheme, a power gating circuit should be able to wake the logic block as fast as possible at the wake-up moment [13][14], and the energy loss due to power gating has to be as small as possible. Recently, dual power gating has been revisited because of its low leakage consumption compared to conventional power gating and charge-recycling power gating (CRPG) under the same timing constraint [15]. There, the three schemes are analysed in scenarios of 10% or 20% timing overhead. To meet the same timing constraint, the switch-area overhead of the dual power gating technique must increase by at least four times compared to conventional power gating and charge-recycling power gating; this means the cost of dual power gating rises when it is used to achieve lower leakage consumption in high-speed applications. In terms of low cost and low leakage consumption, dual power gating is analysed in this paper by comparison with the other two schemes. A solution with low cost, low power-delay product, fast wake-up time and small energy loss at a reasonable speed can be very helpful in applying this technique, for example, in wireless sensor network systems. In this paper, we extend our earlier work [16] to demonstrate more advantages of the proposed leakage reduction technique. We compare three power gating schemes, the Single-Switch Power Gating (SSPG), which can be regarded as the conventional power gating technique, the Charge-Recycled Power Gating (CRPG) [8][9], and the Dual-Switch Power Gating (DSPG), in terms of energy loss due to power gating, power-delay product, wake-up time, and so on. The comparison shows that, among the three schemes, the DSPG has the lowest energy loss regardless of how long the sleep time is. Moreover, the DSPG can wake up faster than the CRPG because it does not need additional time for charge sharing. We also need to mention ground bounce noise, which becomes more significant as the supply voltage is scaled down, since the IR drop and di/dt noise introduced by abrupt changes of the virtual power lines increase [17]. The ground bounce noise of the DSPG is known to be smaller than that of the other two schemes, due to its small voltage swing and small rush current on the power lines [17][18]. Based on the comparison, we suggest in this paper that the DSPG, with its smaller energy loss, smaller power-delay product and faster wake-up, is the most suitable candidate for the fine-grain leakage control scheme.

Figure 1(a) illustrates the SSPG scheme, which has two logic blocks, L 0 and L 1 , made of low-threshold-voltage (low V TH ) transistors with large leakage current. To cut off the leakage during the sleep time, L 0 and L 1 are powered by the header, MP 0 , and the footer, MN 0 , respectively, which are made of high-threshold-voltage (high V TH ) transistors. Here, V SSV and V DDV are the virtual V SS and V DD lines, respectively, which are connected to the real V SS and V DD lines when the header and footer are turned on. On the contrary, when MP 0 and MN 0 are off, V SSV rises toward V DD and V DDV falls toward V SS . Here, PGN and PGP are the enable signals for MN 0 and MP 0 , respectively.
Figure 1(b) shows the CRPG scheme, where V SSV and V DDV , which are controlled by MN 0 and MP 0 respectively, are connected to each other through MN 1 and MP 1 , which constitute a transmission gate. This transmission gate is turned on at both the sleep-in and wake-up moments, during which charge is shared between V SSV and V DDV . The TGN and TGP signals turn on the transmission gate at both sleep-in and wake-up. The DSPG scheme is shown in Figure 1(c) (Figure 1: various power gating schemes). Figure 2 compares the V SSV and V DDV waveforms of the three schemes. Here, the sleep-in and wake-up happen at t 0 and t 3 , respectively, and t sleep denotes the sleep time. PGN is a control signal for the footer, and TGN is a control signal for the transmission gate in the CRPG. At both sleep-in and wake-up, the transmission gate is turned on for the short intervals t 1 -t 0 and t 3 -t 2 , as shown in Figure 2.

Consider first the V DDV of the SSPG. When the sleep time is long in Figure 2, the V DDV has a voltage swing as large as ∆V 0 at the wake-up time t 3 ; thus, the SSPG loses a large amount of switching energy at this moment. When the sleep time is short, ∆V 0 is a small voltage swing, so the SSPG loses only a small amount of switching energy at the wake-up moment. Next, for the CRPG, the V SSV and V DDV are equalized during t 1 -t 0 , then start to decay toward the real V DD and V SS , respectively. The V SSV and V DDV are equalized again during t 3 -t 2 and are restored to the real V SS and V DD at t 3 , respectively. At time t 3 , the CRPG in Figure 2 has a voltage swing as large as ∆V 1 on its V DDV . Comparing the CRPG with the SSPG, we can see that the CRPG has a larger voltage swing on its V DDV than the SSPG at wake-up when the sleep time is short; this means the CRPG is not effective at saving energy when the sleep time is short. Unlike in the SSPG, the ∆V 1 of the CRPG is almost the same regardless of the sleep time. This is because the V DDV and V SSV of the CRPG are equalized at every sleep-in and wake-up moment, so their voltage swings are about half V DD regardless of the sleep time. Finally, considering the DSPG, its V DDV swing is as small as ∆V 2 . With a short sleep time, the DSPG's voltage swing is like the SSPG's. When the sleep time becomes longer, ∆V 2 becomes larger, but unlike in the SSPG it does not exceed half V DD ; for a long sleep time, its voltage swing is almost the same as the ∆V 1 of the CRPG.

Figure 3 shows analysis results for a 31-stage ring oscillator at a temperature of 27 °C. The simulation is done using the 45-nm Predictive Technology Model (PTM) [19] with various supply voltages. Here, the DSPG uses both PMOS and NMOS switches to cut off the power lines, so the voltage drop across these switches is slightly larger than in the SSPG, which uses only an NMOS switch. Consequently, the delay of the DSPG is slightly higher than that of the SSPG, as shown in Figure 3(a). However, the power-delay product is an energy-efficiency metric measuring the energy consumed per switching event. The power-delay product of the DSPG is 7% smaller than that of the SSPG, as shown in Figure 3(b), indicating that the DSPG has higher energy efficiency even in the active mode. Figure 4(a) shows the comparison of the three schemes in terms of energy loss. The logic block used here is composed of 50% INVs, 25% NANDs, and 25% NORs. The power switch's channel width used in this paper is 10% of the total channel width of the logic block.
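To make the swing argument concrete, a first-order model (not the paper's SPICE simulation) says that restoring a virtual rail whose voltage swung by ΔV through a power switch dissipates about ½CΔV² in the switch, so halving the swing quarters the wake-up loss. The rail capacitance below is a hypothetical value chosen only for illustration.

```python
# First-order wake-up energy loss: restoring a virtual rail whose voltage
# swung by dV through a switch dissipates ~0.5 * C * dV**2 in the switch.
VDD = 1.1          # supply voltage in volts (45-nm PTM operating point)
C_V = 10e-12       # hypothetical virtual-rail capacitance, 10 pF

def wakeup_loss_pJ(dV):
    return 0.5 * C_V * dV**2 * 1e12

# Long sleep: SSPG rails swing ~VDD, CRPG/DSPG swing ~VDD/2 (Fig. 2)
for name, dV in [("SSPG", VDD), ("CRPG", VDD / 2), ("DSPG", VDD / 2)]:
    print(f"{name}: dV = {dV:.2f} V -> loss ~ {wakeup_loss_pJ(dV):.2f} pJ")
```

On this model, the half-swing DSPG and CRPG lose a quarter of the SSPG's full-swing energy at long sleep times, consistent with Figure 2; the CRPG's additional equalization losses are not captured here.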
The power-gating energy loss is defined as the amount of energy lost between the sleep-in and wake-up moments. For a certain sleep time, if the energy loss due to power gating is smaller than the active leakage energy expected to dissipate during that sleep time, we can save some amount of energy by using the power gating scheme. On the contrary, if the energy loss is larger than the active leakage energy, we had better not use power gating. The sleep time at which the energy loss of power gating becomes equal to the active leakage energy is defined as the crossover time. This crossover time is very important when we try to apply a power gating technique to fine-grain leakage control circuits, where logic blocks transition between the active and sleep modes very frequently and for short durations. In Figure 4(a), when the sleep time is short, the SSPG incurs a smaller power-gating energy loss than the CRPG. As mentioned earlier, this is because the SSPG has smaller voltage swings on its V DDV and V SSV than the CRPG when the sleep time is short. As the sleep time becomes longer, the CRPG begins to have smaller voltage swings on the V DDV and V SSV than the SSPG, thus incurring a smaller power-gating energy loss and thereby saving some amount of energy. Among these three schemes, the DSPG shows the smallest power-gating energy loss whether the sleep time is short or long. For a short sleep time, the V DDV and V SSV of the DSPG change as little as those of the SSPG, minimizing its energy loss to the SSPG's level. Compared with the CRPG, the DSPG can reduce the energy loss by 85% for a sleep time of 10 ns at 27 °C. And for a long sleep time of 10 s, the DSPG can save 30% relative to the SSPG. This saving arises because the V DDV and V SSV swing of the DSPG is only about half of the swing of the SSPG, as shown in Figure 2. One more thing to note is that the DSPG does not lose any energy in equalizing the V DDV and V SSV , thereby saving more energy than the CRPG, as shown in Figure 4(a). The crossover times can be extracted from Figure 4(a): the SSPG, CRPG, and DSPG have crossover times of 35 ns, 100 ns, and 30 ns, respectively. Figure 4(b) shows the energy loss of power gating at a temperature of 100 °C. Comparing Figure 4(a) with (b), we notice that the crossover times at 100 °C are shorter than those at 27 °C; this is because the sub-threshold leakage at 100 °C is larger. One more concern with the CRPG is the equalizing time, defined by t 1 -t 0 and t 3 -t 2 in Figure 2. The CRPG needs this time for the transmission gate to equalize the V DDV and V SSV , resulting in a longer wake-up time than the SSPG and DSPG. If this equalizing time is not long enough to equalize the V SSV and V DDV fully, the energy loss of the CRPG can increase. Figure 5(a) shows how the power-gating energy loss of the CRPG changes with the equalizing time: when the equalizing time becomes shorter, the CRPG has a larger energy loss. For the SSPG and DSPG, the energy loss has nothing to do with the equalizing time. To achieve an energy loss as low as around 30 pJ, the equalizing time should be longer than 250 ps. This equalizing time is added to the wake-up time. Such slow wake-up may prevent the CRPG from being used in a fine-grain leakage control scheme, where a short wake-up time is demanded so as not to degrade the active-mode performance.
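The crossover time defined above can be computed numerically as the sleep time at which the power-gating energy loss equals the active leakage energy. A minimal sketch follows, assuming an illustrative constant leakage power and a synthetic loss curve; both are hypothetical placeholders, not the paper's simulation data.

```python
import numpy as np

# Hypothetical data: power-gating energy loss E_pg(t_sleep) sampled from
# simulation, and active leakage power P_leak of the ungated block.
t_sleep = np.logspace(-9, -5, 200)        # sleep times, 1 ns .. 10 us
E_pg = 2e-12 + 0.05e-6 * t_sleep          # loss curve in joules (illustrative)
P_leak = 1e-6                             # active leakage power, 1 uW (illustrative)

E_leak = P_leak * t_sleep                 # energy the ungated block would leak
# Crossover: first sleep time where gating starts to save energy
idx = np.argmax(E_pg <= E_leak)
print(f"crossover time ~ {t_sleep[idx]*1e9:.1f} ns")
```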
In Figure 4(b), the SSPG, CRPG, and DSPG have crossover times of 17 ns, 35 ns, and 12 ns, respectively, indicating that the DSPG can be the most suitable for fine-grain leakage control demanding a short crossover time. Figure 5 also shows the wake-up times of the SSPG, CRPG, and DSPG with varying sleep time. As expected, the wake-up time of the CRPG is the longest among the three schemes, due to the equalizing time. For the SSPG and DSPG, the wake-up times become longer and then saturate as the sleep time increases. Here, the wake-up time is defined as the time at which V SSV and V DDV are restored to 90% of their final values of V SS and V DD . We also investigated the layout overhead of the SSPG, CRPG, and DSPG. The SSPG and DSPG have the same layout area as long as their power switches have the same size. The CRPG, however, needs a larger area for its transmission gate, as shown in Figure 1(b). To equalize the V DDV and V SSV in a short time, the widths of MP 1 and MN 1 in Figure 1(b) must be increased further, making the area overhead larger. In this paper, the width of the transmission gate in Figure 1(b) is half the width of the power switches, so the area penalty of the CRPG is as large as 15%, compared with the 10% penalty of the SSPG and DSPG.

The three power gating schemes are applied to a 32-bit Carry-Look-Ahead (CLA) adder to compare the energy loss due to power gating. The 32-bit adder is implemented using the 45-nm PTM, at V DD = 1.1 V and 27 °C. Figures 6(a) and (b) show the energy loss of the 32-bit CLA adder when the sleep time is 10 ns and 4 s, respectively. The simulated input vectors of the 32-bit adder are shown in Table 1 (Table 1: 32-bit input vectors applied to the 32-bit carry-look-ahead adder, e.g., FFFFFFFF FFFFFFFF).

Simulation results
Figure 6(a) shows the power-gating energy loss of the 32-bit CLA adder when the sleep time is as short as 10 ns, for the 45-nm PTM, V DD = 1.1 V, and W PG /W Logic = 10%. For this 10-ns sleep time, the CRPG shows the largest energy consumption, caused by its voltage swing (∆V 1 ) being larger than those of the SSPG and DSPG. The DSPG consumes almost the same energy as the SSPG, but its energy loss is smaller than that of the CRPG by as much as 72% on average; this result is consistent with Figure 4(a). From this figure, the SSPG, CRPG, and DSPG have average energy losses of 2.3 pJ, 8 pJ, and 2.25 pJ, respectively. Figure 6(b) shows the power-gating energy loss when the sleep time is as long as 4 s: the SSPG loses on average as much as 35.2 pJ, compared with 29.2 pJ for the CRPG and 23.9 pJ for the DSPG, so the DSPG consumes less energy than the SSPG and CRPG by as much as 32% and 18% on average, respectively. As expected from Figure 2, the SSPG and DSPG show the largest and smallest energy loss, respectively. Among the three schemes, the DSPG loses the least energy for its power gating, making it the most suitable for fine-grain leakage-controlled VLSIs. ISCAS-85 benchmark circuits C432, C499, and C880 are also used to verify that the DSPG is better than the others in terms of leakage power consumption; the normalized leakage power is compared at 27 °C and 100 °C in Table 2 and Table 3, respectively.

Conclusion
Among the various power gating techniques, we have compared three power gating schemes in terms of power-delay product, energy loss, and wake-up time using the 45-nm Predictive Technology Model.
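The 90% restoration criterion used above for the wake-up time can be extracted from a simulated rail waveform as below; the waveform here is a hypothetical RC-style recovery, not the paper's SPICE output.

```python
import numpy as np

VDD = 1.1
t = np.linspace(0, 5e-9, 5001)               # 0 .. 5 ns
v_ddv = VDD * (1 - np.exp(-t / 0.5e-9))      # hypothetical VDDV recovery from 0 V

# Wake-up time: first instant VDDV reaches 90% of its final value (VDD)
idx = np.argmax(v_ddv >= 0.9 * VDD)
print(f"wake-up time ~ {t[idx]*1e9:.2f} ns")  # ~ tau * ln(10) = 1.15 ns here
```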
The comparison results show that the DSPG has a smaller energy loss, a lower power-delay product, and a faster wake-up time than the other power gating schemes. Based on these advantages, we suggest the DSPG as a viable candidate for a fine-grain leakage control scheme, where logic blocks transition between the active and sleep modes very frequently and for short durations.
4,496.8
2018-08-13T00:00:00.000
[ "Computer Science" ]
A Novel Polar Copolymer Design as a Multi-Functional Binder for Strong Affinity of Polysulfides in Lithium-Sulfur Batteries

High energy density, low cost and environmental friendliness are the advantages of the lithium-sulfur (Li-S) battery, which is regarded as a promising device for electrochemical energy storage systems. As one of the important ingredients in a Li-S battery, the binder greatly affects battery performance. However, conventional binders have drawbacks such as a poor capability to absorb hydrophilic lithium polysulfides, resulting in severe capacity decay. In this work, we report a multi-functional polar binder (AHP), made by polymerization of hexamethylene diisocyanate (HDI) with ethylenediamine (EDA) and bearing a large number of amino groups, which was successfully used in electrode preparation with commercial sulfur powder cathodes. The abundant amide groups of the binder endow the cathode with multidimensional chemical bonding interactions with sulfur species, inhibiting the shuttling of polysulfides, while its suitable ductility buffers the volume change. Utilizing these advantageous features, composite C/S cathodes based on the binder displayed excellent capacity retention at 0.5 C, 1 C, 1.5 C, and 3 C over 200 cycles. Compared with commercial binders, AHP may act as an alternative feedstock and open a promising approach for sulfur cathodes in rechargeable lithium batteries to achieve commercial application.

Background
Lithium-sulfur (Li-S) rechargeable battery cells, offering a theoretical cathode specific capacity of 1675 mAh g −1 , roughly five times higher than that of commercial lithium-ion cathode materials (LiCoO 2 and LiFePO 4 ), have been applied in a variety of the most promising energy storage devices to address the increasing energy storage demands of various technological applications [1][2][3]. Unfortunately, despite its considerable advantages, practical use has been frustrated by several problems [4][5][6]. (1) Sulfur has low electronic conductivity (5 × 10 −30 S cm −1 at 25 °C), which generally causes low utilization of the active material. (2) A large variation in volume occurs during charge-discharge cycling, degrading the cathode. (3) The "shuttle effect" is another major problem, caused by the high solubility of the discharge/charge intermediates in organic electrolytes: polysulfides dissolve into the electrolyte and penetrate through the separator to the lithium metal anode, where they are reduced to solid precipitates (Li 2 S), leading to quick capacity decay through the loss of active material and the additional problem of low Coulombic efficiency in a rechargeable Li-S battery [7][8][9][10]. Various approaches have been employed to overcome this problem [11][12][13][14], such as N-doped materials [15][16][17], carbon-based materials [18], conductive polymers [19,20], metal oxides [21][22][23], and transition metal disulfides [24], but none has proven commercially viable, owing to high cost and unsuitability for large-scale manufacturing. The binder is an important ingredient in the Li-S battery; it functions to bond and keep the active materials in the electrode, to ensure good electrical contact between the active materials and the conductive carbon, and to link the active materials with the current collector [24][25][26][27][28].
In particular, recent investigations on silicon anodes have revealed that an ideal binder should provide not only adhesion strength and ductility, with ample tolerance of large volume changes, but also the ability to physically and/or chemically trap the active species, so as to attain high initial reversible capacity and excellent cycling ability. Polyvinylidene fluoride (PVDF) is widely used as a conventional binder for Li-S batteries [28]. However, due to its linear molecular structure, PVDF plays only the role of physical adhesion, providing a mechanical linkage of the active materials with the additives; this function fades with time because there is no bonding between the polymer and the carbon substrate, resulting in the vexed problem of polysulfides dissolving into the aprotic electrolyte. Therefore, new functional binders are urgently needed to make up for the deficiencies of PVDF. It follows that a crosslink-structured binder, with an increased number of active sites between polysulfides and binder, can further improve the cycle life. Recent investigations have shown that functional materials endowed with amine groups are ideal anchors for both polar lithium polysulfides and the nonpolar carbon surface, effectively preventing the loss of active mass during cycling [29,30]. Hence, in this paper, we introduce a multi-functional AHP binder with plenty of amide groups as an efficient binder for Li-S batteries. Strong interactions with the polysulfide discharge products are created throughout the cathode by the unique amide/amino crosslink structures of the designed binder, buffering the shuttle effect of sulfur cathodes. Unlike conventional polymeric binders (PVDF), an obvious advantage of our design is that the binder's interconnected polar structure forms a stable electrode and exhibits a ductile architecture, resulting in a marked improvement in the cycle life of a conventional C/S cathode. It is noteworthy that the presented strategy is not engineered in any specialized manner, thus making the process commercially viable.

Synthesis of the AHP Binder
Ethylenediamine (EDA), hexamethylene diisocyanate (HDI), and N,N-dimethylformamide (DMF) were purchased from Aladdin and used as received. The novel AHP binder was prepared by a copolymerization process using EDA (10 mmol) and HDI (5 mmol) in DMF solvent with high-speed magnetic stirring for 4 h at 60 °C. The product was then uniformly dispersed in DMF solution at a concentration of 1 mg per 10 µL of solvent.

Characterization
X-ray photoelectron spectroscopy (XPS, Kratos Axis Ultra DLD, Japan) was used for elemental analysis and chemical bonding information after the reaction of EDA and HDI. Scanning electron microscopy (SEM) was used to observe the surface topography of S cathodes with different binders before and after cycling.

Preparation of S@AHP Cathodes and Electrochemical Measurements
Bulk sulfur (Alfa Aesar, 043766) and acetylene black (Hefei Kejing Materials Technology Co., Ltd) with a mass ratio of 6:4 were ball milled for 60 min at 300 rpm. The obtained mixture was then heated at 210 °C for 12 h to encapsulate sulfur in the acetylene black. After cooling to room temperature, the C/S composite was obtained. Thermogravimetric analysis (TGA, SDT 2960, TA Instruments) was performed to confirm the sulfur content. For electrode preparation and battery assembly, electrodes were prepared from the C/S composite by making a slurry of C/S and AHP binder in a mass ratio of 8.5:1.5 in DMF solvent.
The slurry was then cast on the surface of Al foil and dried under vacuum at 60 °C overnight. Electrodes contained approximately 0.5 mg of sulfur per square centimeter, and 30 µL of electrolyte was used in each coin cell. For comparison, C/S cathodes with other binders were prepared using PVDF and PTFE (both from Hefei Kejing Materials Technology Co., Ltd) instead of AHP by a similar route. The electrolyte was 1 M LiTFSI dissolved in a mixture of 1,3-dioxolane (DOL) and 1,2-dimethoxyethane (DME) (1:1 v/v) with 1 wt% lithium nitrate (LiNO 3 ) as additive. Cells were assembled in an argon-filled glove box, and the discharge-charge properties and cyclic voltammetry were tested on a CT2001A cell test instrument (Wuhan LAND Electronic Co., Ltd) and a CHI660E electrochemical workstation (Shanghai Chenhua Instrument Co., Ltd), respectively.

Results and Discussion
Figure 1a shows the design concept and synthesis schematic of the polar AHP binder: the linear HDI, acting as a bridge, grafts with EDA through the indiscriminate reactions between the amine groups of EDA and the isocyanate groups, forming an AHP structure rich in active sites. Upon polymerization, the AHP structure, incorporating a series of amide groups, enables the binder to attain strong binding energy with polysulfides [31]. Compared with commercial binders (such as PVDF and PTFE, Fig. 1b), the novel binder introduces significant advantages for the Li-S battery. The polar amide group was incorporated to strongly anchor Li 2 S n species; it is thought to have a strong affinity for lithium polysulfides, effectively keeping them within the cathode region and thus improving the electrochemical stability of the Li-S battery [32][33][34][35].

To better understand the effect of the mechanical and chemical characteristics of the electrode components, the binding of S with AHP and with commercial linear binders (such as PVDF and PTFE) is expected to differ structurally (Fig. 1c). During the discharge process (Li insertion), owing to the formation of insulating Li 2 S on the carbon matrix, linear PVDF or PTFE chains are forced to stretch mechanically or be displaced by the expanding polysulfides. The density of Li 2 S (1.67 g/cm 3 ) is much lower than that of S (2.03 g/cm 3 ), which causes a volume expansion of the S cathode of ∼22% relative to its initial state over the whole discharge process. Upon charging (Li extraction) in the same cycle, the polysulfides shrink back toward their original state; the linear binders, however, cannot fully follow the shrinkage of the polysulfides, leading to an inevitable loss of contact between the electroactive materials and the carbon matrix which, coupled with polysulfide dissolution, results in the inferior performance of most sulfur-carbon composites. This issue becomes more prominent over extended cycling. On the contrary, AHP has plentiful amide side groups that physically/chemically entangle and grasp polysulfides, leading to reinforced binding via hydrogen bonding [29,30]. Thus, this polymeric AHP binder provides multidimensional noncovalent interactions with the polysulfide surfaces through the amide groups. These interactions not only allow the AHP binder to accommodate the massive volume expansion of S cathodes during the discharge process but also maintain the polysulfide-binder interactions during the charge process. The AHP binder was covalently crosslinked by the indiscriminate reactions of EDA with HDI.
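For reproducibility, the current applied at a given C-rate follows directly from the sulfur loading. Below is a minimal sketch using the rate basis quoted later in the paper (1 C = 1672 mA per gram of sulfur) and the stated loading of ~0.5 mg S per cm²; the electrode area is a hypothetical value for a coin-cell disc, not taken from the paper.

```python
# Applied current for galvanostatic cycling of a Li-S cell.
RATE_BASIS_mA_PER_G = 1672.0      # 1 C basis used in the paper (mA per g of S)

def applied_current_mA(c_rate, loading_mg_cm2, area_cm2):
    sulfur_mass_g = loading_mg_cm2 * area_cm2 / 1000.0
    return c_rate * RATE_BASIS_mA_PER_G * sulfur_mass_g

# Example: 0.5 C on a hypothetical 1.13 cm^2 electrode (~12 mm disc)
print(f"{applied_current_mA(0.5, 0.5, 1.13):.3f} mA")   # ~0.472 mA
```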
Therefore, to probe the mechanism of the reaction between EDA and HDI, X-ray photoelectron spectroscopy (XPS) survey spectra were used to characterize the chemical bonding of the covalently crosslinked binder, as presented in Fig. 2. Deconvolution of the N 1s signal reveals peaks for both amine (399.3 eV) and amide (400.1 eV) groups, which clearly arise from the AHP binder, indicating that EDA has taken part in the reaction with HDI. This implies that after the reaction, amino groups are retained alongside the newly formed polar amide groups. Besides, the C 1s signal can be well resolved into a C-N amide bond at 288.4 eV and a C-NH 2 bond at 285.7 eV. Furthermore, a C=O peak (531.2 eV) was observed in the wide O 1s spectrum. All of these results provide strong evidence that the polymerization of EDA with HDI occurred to form amide groups, which are beneficial for capturing polysulfides.

To explore the electrochemical performance of S electrodes with the AHP binder, a series of electrochemical tests were carried out. CR2025-type coin cells were fabricated using lithium foil as the counter electrode. Figure 3a shows the cyclic voltammetry (CV) of the C/S/AHP composite cathode at a scan rate of 0.01 mV s −1 between 1.5 and 3 V (vs Li/Li + ). According to the multi-step reaction mechanism between S and Li, two cathodic peaks are clearly observed: one located at ~2.30 V, attributed to the transformation of S 8 to long-chain Li 2 S n (4 ≤ n ≤ 8), and the other at 2.05 V, ascribed to the further reduction to low-order Li 2 S n (n < 4) and finally Li 2 S. The anodic peaks are caused by the decomposition of Li 2 S and correspond to the reverse transformation of the sulfur species from Li 2 S n . Consistent with the CV analysis above, Fig. 3b shows the typical two-plateau charge/discharge profile of the C/S/AHP composite cathode at a current rate of 0.5 C, in which the high plateau and the low flat plateau can be assigned to the formation of long-chain and short-chain polysulfides, respectively, the typical charge/discharge profile of Li-S cells.

Long-term cycling with good stability is a primary goal for a commercial battery, so the electrochemical stability of the C/S/AHP composite cathode was investigated by testing at 1 C for 100 cycles, compared with similar electrodes using PTFE and PVDF as binders. The cycle life, discharge capacity, and Coulombic efficiency of the electrode with AHP as binder were significantly better than those with PTFE and PVDF. As shown in Fig. 3c, after 100 cycles the capacity with the AHP binder stabilizes at 628 mAh g −1 , corresponding to 81.2% retention (Fig. 3d) at 1 C. In contrast, the capacities with the conventional PTFE and PVDF binders drop very fast in our work: the discharge capacity of S@PTFE started at 728 mAh g −1 but degraded severely to 395 mAh g −1 after the same number of cycles, corresponding to 54.3% capacity retention, and the PVDF binder dropped more severely, with 47.3% capacity retention. The S@AHP electrode exhibits the best cycling performance compared with common binders in Li-S batteries when commercial sulfur powder is used as the active material. The stable cycling performance and high Coulombic efficiency (99%) imply that the AHP binder helps confine polysulfides in the electrode. Over 200 cycles, the reversible capacities of the C/S@AHP electrode at rates of 0.5 C, 1 C, 1.5 C and 3 C also show excellent stability (Fig.
3e). Apparently, the enhanced reversibility conferred by the AHP binder contributes greatly to the cycling performance of the S electrode; a possible mechanism is that the plentiful amide groups efficiently inhibit the escape of polysulfides during cycling. The much improved performance of the C/S@AHP electrode is attributed to the polar amino/amide groups of the binder providing a strong affinity for lithium polysulfide intermediates, resulting in enhanced cycling performance [29,30]. Electrochemical impedance spectroscopy (EIS) measurements of the AHP and PVDF binders were conducted within the frequency range between 0.1 Hz and 1 MHz (Fig. 2: XPS scanning spectra for (a) the AHP binder and (b, c, d) high-resolution C 1s, N 1s, and O 1s spectra of AHP, respectively). The Nyquist plot, presented in Fig. 4a, is composed of a depressed semicircle at high frequencies, associated with the solution resistance (R s ) and the interfacial charge-transfer resistance (R ct ), which is related to the electrochemical activity of the composites [34][35][36]. The short inclined line in the low-frequency region is associated with the semi-infinite Warburg diffusion process (W) of soluble lithium polysulfides in the electrolyte. According to the quantitative analysis (Fig. 4b), the difference in R s between the S@AHP and S@PVDF cathodes is not significant. In contrast, the variation of R ct is strongly associated with the charge transfer of the cathodes [36][37][38]. These results are attributed to the excellent adhesion of the AHP binder: strong, multidentate interactions with the active materials effectively promote contact between S and the carbon matrix, which is conducive to charge transfer and helps maintain good electrical conduction in the S cathode.

To further investigate the mechanical stability of C/S electrodes, electrodes were prepared with 15% AHP or PVDF as binder. The surface morphologies of the different electrodes were characterized by SEM, as shown in Fig. 5. Before cycling, no significant differences between the sulfur electrodes with different binders were observed; the active sulfur composite and acetylene black can be clearly distinguished in each electrode, indicating that the AHP binder can effectively bond the active materials. Notably, the AHP binder gives a more uniformly coated electrode (Fig. 5c). In particular, binder "bridges" of AHP between adjacent C/S particles can be observed, indicating that the AHP binder has sufficient capacity to connect the active materials. In addition, the coated film has strong adhesion to the Al foil: no material peels off during subsequent operations in which the electrode is bent and folded repeatedly. After 50 deep galvanostatic discharge-charge cycles at 0.5 C, both S@AHP and S@PVDF cathodes exhibited uniform morphology distributions (Fig. 5b, d) as well as similar SEI formation filling the void space after lithiation (Fig. 3: b charge/discharge voltage profiles at 0.5 C (1 C = 1672 mA/g); c, d comparison between S@AHP, S@PTFE, and S@PVDF at a rate of 1 C and the retention over 100 cycles; e long-term cycling performance of S@AHP cathodes between 1.7-2.8 V with Coulombic efficiency at different current rates (0.5 C, 1 C, 1.5 C, and 3 C)). The difference between the samples, however, became evident after 50 cycles: S@AHP still preserved the uniform morphology distribution to a large extent (Fig.
5d), whereas S@PVDF clearly showed micrometer-scale cracks (Fig. 5b, red oval) over the entire area of the film, indicating that the AHP binder is better at preserving the original film morphology through its superior multidimensional binding capability based on the active amide sites [39]. These results clearly demonstrate that the polar AHP binder has the capacity to maintain the electrical and mechanical integrity of S-based cathodes upon deep galvanostatic cycling.

Conclusions
In summary, we have successfully developed a polar binder with abundant amide groups as multidimensional bonding sites for high-performance Li-S cells, making substantial progress in improving the electrochemical properties and thereby addressing the chronically insufficient cycle life of S cathodes. We demonstrate that the numerous amide functional groups of the AHP binder, with their high binding strength, effectively trap sulfur species, confine them within the cathode and inhibit the shuttling effect, while the excellent mechanical properties of the S@AHP cathode provide suitable flexibility to buffer the volume change of sulfur. When AHP was applied to assemble cells with commercial sulfur and acetylene black, the cycled cells showed stable capacity retention at different rates. As a result, we believe the synthesis of this polymer will arouse the battery community's interest in fabricating long-life Li-S cells and provide a novel route to new materials for lithium batteries.
4,118.8
2017-03-16T00:00:00.000
[ "Materials Science" ]
A Review on Grid-Connected PV System

The concept of injecting photovoltaic power into the utility grid has earned widespread acceptance in these days of renewable energy generation and distribution. Grid-connected inverters have evolved significantly and with high diversity. Efficiency, size, weight, reliability and other factors have all improved significantly with the development of modern and innovative inverter configurations, and these factors have influenced the cost of producing inverters. This paper presents a literature review of the recent technological developments and trends in Grid-Connected Photovoltaic Systems (GCPVS). In countries with high penetration of Distributed Generation (DG) resources, GCPVS have been shown to cause unwanted stress on the electrical grid. A review of the existing and future standards that address the technical challenges associated with the growing number of GCPVS is presented. Maximum Power Point Tracking (MPPT), Solar Tracking (ST) and the use of transformer-less inverters can all lead to high efficiency gains for Photovoltaic (PV) systems while ensuring minimal interference with the grid. Inverters that support ancillary services like reactive power control, frequency regulation and energy storage are critical for mitigating the challenges caused by the growing adoption of GCPVS.

INTRODUCTION
Renewable energy is increasingly considered essential for meeting current and future energy needs [1]. Photovoltaic (PV) power, as a clean and virtually unlimited source of energy, is probably the best technology among all renewable energy sources, and therefore a considerable amount of research has been conducted recently in this field. To better utilize PV power, grid interconnection of PV systems is needed. PV power delivered to the utility grid has been by far the fastest growing renewable energy technology since it attracted the attention of policy makers [2]. It is generally accepted in the scientific community that human activity is affecting climate change and that a majority of this impact comes from fossil fuel combustion by the electric utility industry. In 2012, 32% of the total greenhouse gas emissions in the U.S. came from the electric power industry, the highest of all sectors. Conventional fossil-fuel generating facilities have in the past met the majority of global electrical energy demand. However, the environmental and climate change implications of fossil-fuel-based generation present serious challenges to society and the environment. Distributed Generation (DG), particularly Photovoltaic (PV) systems, provides a means of mitigating these challenges by generating electricity directly from sunlight. Unlike off-grid PV systems, Grid-Connected Photovoltaic Systems (GCPVS) operate in parallel with the electric utility grid, and as a result they require no storage systems. Since GCPVS supply power back to the grid when producing excess electricity (i.e., when generated power is greater than the local load demand), GCPVS help offset greenhouse gas emissions by displacing the power needed by the connected (local) load and providing additional electricity to the grid. As such, during peak solar hours (maximum solar irradiance), fewer conventional generation plants are needed. In addition, GCPVS reduce Transmission and Distribution (T&D) losses. Although average T&D losses amounted to 5.7% in the U.S. in 2010, losses during peak hours are higher [3].
For example, the estimated T&D losses for Southern California Edison and Pacific Gas & Electric exceeded 10% in 2010 [4]. Locating DG assets close to loads can help to partially mitigate these losses. In this paper, we focus our attention on the growing adoption of GCPVS and the technical challenges posed by the mass proliferation of these DG systems for the overall performance and reliability of the electric grid. A review of the standards governing the safe installation, operation and maintenance of GCPVS, and of the known methods of improving the efficiency of PV systems, is presented. Some transformer-less topologies based on half-bridge and full-bridge configurations and on the multilevel concept, and some soft-switching inverter topologies, are noted as desirable for grid-connected single-phase PV inverters with respect to high efficiency, low cost, and compact structure. We also focus on the role of the inverter as an active grid participant. Inverters designed with the ability to support electric grid ancillary services will become the norm in the foreseeable future, especially in light of the growing number of small and large-scale GCPVS being brought on-line.

II. STANDARDS AND SPECIFICATIONS OF GRID-CONNECTED PV INVERTERS
The Distribution Network Operators are responsible for providing safe, reliable and good-quality electric power to their customers. The PV industry needs to be aware of the issues related to safety and power quality and assist in setting standards, as this would ultimately lead to increased acceptance of grid-connected PV inverter technology by users and the electricity utility industry. For the system to be operated safely and reliably, these standards must be adopted, which will help build electricity consumers' trust, reduce costs and further advance grid-connected PV inverter development. There are several standards dealing with the interconnection of PV energy sources with the utility grid, such as those of the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE) and the National Electrical Code (NEC). These standards fix the limits for the inverter's voltage changes, operating frequency changes, power factor, harmonics in the current injected into the grid, and injection of DC current into the grid to avoid distribution transformer saturation [5], and they also address grounding issues. They also contain information regarding islanding of PV systems, in which the inverter's voltage and frequency must be controlled when the utility grid is not connected, as well as techniques to avoid islanding of PV energy sources. In the islanding state, the utility grid has been removed from the inverter, which then only supplies power to local loads. In addition to these standards, there are a few more, among which the IEEE 1373 standard recommends practices for field test methods and procedures for grid-connected PV systems, the IEC 62116 standard recommends test procedures for islanding prevention measures for grid-connected PV inverters, the IEC 61173 standard gives guidance on overvoltage protection for PV power generating systems, and IEC 61683 recommends the procedure for measuring the efficiency of the PV system.
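To illustrate the kind of limits such standards impose, an inverter's protection logic can be sketched as a simple range check on measured voltage and frequency. The numeric windows below are illustrative placeholders, not values quoted from any IEC, IEEE or NEC document.

```python
# Illustrative interconnection trip check: disconnect when the measured
# voltage or frequency leaves the permitted window (thresholds are
# illustrative placeholders, not values from any specific standard edition).
V_MIN_PU, V_MAX_PU = 0.88, 1.10       # per-unit voltage window
F_MIN_HZ, F_MAX_HZ = 59.3, 60.5       # frequency window for a 60 Hz grid

def must_trip(v_pu: float, f_hz: float) -> bool:
    return not (V_MIN_PU <= v_pu <= V_MAX_PU and F_MIN_HZ <= f_hz <= F_MAX_HZ)

print(must_trip(1.00, 60.0))   # False: stay connected
print(must_trip(1.00, 58.9))   # True: under-frequency trip
```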
III. THE GROWING TRENDS OF GRID-CONNECTED PV SYSTEMS
The PV industry is expected to continue to grow due to several factors: the falling prices of silicon and PV modules, technological advancements in large-scale manufacturing, many governmental incentives, the maturation and proliferation of favorable interconnection agreements, and the continued technological improvement of power converter technologies. For example, the cost of manufacturing PV modules has fallen dramatically, from over $100 per watt in the 1970s to less than $1.00 per watt in 2014 [6]. In fact, large-scale wholesale orders can result in prices below $0.60 per watt [7]. There is broad agreement that the growing popularity of solar PV is a trend that will continue to rise. Although our survey yielded mixed reports as to when solar PV will be at grid parity with traditional generation sources, a common underlying theme among many researchers is that this will likely happen sooner rather than later. The Rocky Mountain Institute recently released a report suggesting that grid parity will be achievable by 2030 [8]. Scientists at the Argonne National Laboratory in Illinois have argued that this may happen by 2025, while the National Renewable Energy Laboratory (NREL) has publicly suggested that, due to the rapid growth of GCPVS, grid parity may even happen as early as 2017 [9]. In a survey of select International Energy Agency (IEA) member countries released in 2013, of the total installed PV systems, more than 99% were estimated to be grid-connected. Utility-scale installations with large systems are beginning to make up a sizable share of the PV market. In the U.S. alone, the utility sector was responsible for about two-thirds of the total new installations in the third quarter of 2014. The rise in the number of GCPVS, especially in the utility sector, does not come as a surprise, especially given that many governmental and regulatory bodies tend to promote programs aimed at expanding DG resources like GCPVS.

IV. ISSUES CAUSED BY GRID-CONNECTED PV SYSTEMS
As the overall costs of installing and owning GCPVS decline, residential, commercial and utility-scale adoption of this technology is on the rise. Although GCPVS have many benefits, such as a long working life (25-30 years), low operations and maintenance costs and obvious environmental advantages over fossil-fuel power plants, they have their own set of challenges. A number of scholarly works have suggested that the mass adoption and proliferation of GCPVS could create enormous stress on the electric grid. The root of the problem is the inherent functional nature of GCPVS, primarily that their output decreases as the sun goes down. Consequently, they are unable to adequately contribute to the grid when demand increases in the hours following sunset (when demand for electricity is greatest). During this period, electric utilities ramp up generation from conventional plants to meet this surge in demand. The California Independent System Operator (CAISO) created the duck curve (Fig. 3) to show the impact of GCPVS on the electric grid's operations, based on CAISO's real-time analysis and forecast of electricity net demand from 2012 to 2020. The net demand load represents the amount of conventional generation (excluding renewables) that will need to be on-line during different times of the day. In the mid-day zone of the curve, increasing output from GCPVS will cause a reduction in conventional generation.
It is also important to note that there is an increased risk of over-generation in this zone. Finally, towards the end of the day, zone (c) is where conventional plants will see the most stress. The unpredictability of DG resources (especially solar and wind) means that utility providers may not be able to properly control and plan for the variable system electricity demand. According to CAISO, steep ramps, over-generation and the resulting impact on frequency response will need to be quickly addressed as more GCPVS are installed, commissioned and connected to the grid.

There are a number of solutions to these problems. Because traditional generation resources (steam combustion turbines and nuclear plants) take hours to start and ramp up, investment in generation plants capable of fast ramping, especially gas-fired reciprocating engines and simple-cycle combustion turbines, can help mitigate the effects of the stress on the grid. Such resources serve as spinning reserves, operated on an "as-needed" basis to fill the gap created by non-dispatchable DG resources. Such new investments, however, may not be the most cost-effective solution, since their full nameplate capacity is rarely utilized. In addition, these facilities take several years to plan, design and permit, and they will continue to depend on fossil fuels. A preferred solution may be to couple non-dispatchable DG resources like GCPVS with energy storage systems that extend their operation by an extra hour or two after the sun begins to set. Such a strategy would negate the need for rapid ramping of reserves as GCPVS output begins to decrease at sunset, thereby flattening the evening peak in zone (c) of the duck curve (Fig. 3) towards zone (b).

V. DIFFERENT TOPOLOGIES OF GRID-CONNECTED PV INVERTER
In the grid-connected PV system, the DC power of the PV array should be converted into AC power with proper voltage magnitude, frequency and phase to be connected to the utility grid. Under this condition, a DC-to-AC converter, better known as an inverter, is required. There are various kinds of grid-connected PV inverters, as shown in Fig. 5. The line-commutated inverter, in which the utility grid dictates the commutation process (the commutation process is initiated by reversal of the AC voltage polarity), uses power switching devices like commutating thyristors. The turn-on operation of this device can be controlled by the gate terminal, while the turn-off cannot; turn-off of such a device is performed with the help of an add-on circuit. Contrarily, the self-commutated inverter, where the current is transferred from one switching device to another in a controlled manner, is characterized in that it uses a power switching device whose gate terminal can control both the turn-on and the turn-off operation, such as the Insulated Gate Bipolar Transistor (IGBT) and the Metal Oxide Semiconductor Field Effect Transistor (MOSFET). Power MOSFETs are used for low power, typically less than 10 kW, and high-frequency switching operation (20-800 kHz), while IGBTs are used for medium-to-high power exceeding 100 kW; very high-frequency switching is not possible using IGBTs, as their switching frequency is limited to about 20 kHz. In a grid-connected inverter, high-frequency switching is required to reduce the inverter's output-current harmonics, the size of the magnetic (filter) components, and the weight of the inverter. The self-commutated inverter uses pulse width modulation (PWM) switching techniques to generate an AC waveform at the output. The self-commutated inverter can control both the voltage waveform and the current waveform at the output side of the inverter, can adjust or correct the power factor and suppress harmonics in the current waveform (all of which are required for a grid-connected PV system), and is highly resistant to utility grid disturbances. At present, due to the evolution of advanced switching devices like Power MOSFETs and IGBTs, most inverters for distributed power systems such as PV systems employ self-commutated inverters rather than line-commutated inverters.

Self-commutated inverters may be voltage source inverters (VSI) or current source inverters (CSI), based on the voltage or current waveform at their input DC side. In a VSI, the input side is a DC voltage source, the input voltage holds the same polarity, the average power flow direction through the inverter is determined by the polarity of the input DC current, and at the output side an AC voltage waveform of constant amplitude and variable width can be obtained. To limit the current flow from the inverter to the utility grid, a tie-line inductor is used along with the VSI. The input DC side terminals of a VSI are typically connected in parallel with a relatively large capacitor that resembles a voltage source.

Table 1: Differences between VSI and CSI [10]
- Power source: VSI — the input is a DC voltage source having small or negligible impedance. CSI — the input is an adjustable current drawn from a DC voltage source having high impedance.
- Input parameter: VSI — the input voltage is maintained constant; the input DC side terminals are connected in parallel with a capacitor, and the DC capacitor is small, cheap and an efficient energy store. CSI — the input current is constant but adjustable; the input DC side is connected in series with an inductor, and the DC inductor is bulky, expensive and contributes more losses.
- Load dependency: VSI — the amplitude of the output voltage does not depend on the load, whereas the waveform and magnitude of the output current depend on the load impedance. CSI — the amplitude of the output current does not depend on the load, whereas the waveform and magnitude of the output voltage depend on the load impedance.
- Associated losses: VSI — high switching loss but low conduction loss, so the total power loss is low. CSI — low switching loss but high conduction loss, so the total power loss is high.

In a CSI, the input side is a DC current source, the input current holds the same polarity, and therefore the average power flow direction through the inverter is determined by the polarity of the input voltage; at the output side, an AC current waveform of constant amplitude and variable width can be obtained. The input DC side of the CSI is typically connected in series with a relatively large inductor that maintains current continuity.
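To illustrate the PWM technique mentioned above (this is a sketch, not a production controller), the snippet below generates gate signals for one leg of a self-commutated VSI by comparing a 50 Hz sinusoidal reference with a high-frequency triangular carrier; the modulation index and carrier frequency are illustrative choices, and dead time is ignored.

```python
import numpy as np

f_ref, f_carrier = 50.0, 5000.0      # reference and carrier frequencies (Hz)
m = 0.8                              # modulation index (illustrative)
t = np.linspace(0, 0.04, 40000)      # two fundamental cycles

reference = m * np.sin(2 * np.pi * f_ref * t)
# Triangular carrier in [-1, 1]
carrier = 2.0 * np.abs(2.0 * ((t * f_carrier) % 1.0) - 1.0) - 1.0

gate_high = reference > carrier       # upper switch on when reference exceeds carrier
gate_low = ~gate_high                 # lower switch is complementary (dead time ignored)

duty = gate_high.mean()               # ~0.5 on average for a sinusoidal reference
print(f"average upper-switch duty: {duty:.3f}")
```

Filtering the resulting switched voltage through the tie-line inductor recovers the sinusoidal fundamental while pushing the switching harmonics up to the carrier frequency, which is why high-frequency switching reduces the filter size as noted above.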
A VSI can be operated in voltage control mode as well as in current control mode, and in many cases a VSI with current control mode is preferred for grid-connected PV systems. In Table 1, some basic differences between a VSI and a CSI are presented. For the inverter of a stand-alone PV system without any grid connection, voltage control mode should be used. However, both voltage control mode and current control mode can be used for the inverter of a grid-connected PV system. In grid-connected PV systems, an inverter with current control mode is extensively used because a high power factor can be obtained with a simple control circuit, and suppression of transient current is also possible when grid disturbances occur.

VI. CONCLUSIONS

Although the solar PV market has experienced astronomical levels of growth and cost reductions in recent years, there are many technical challenges and economic realities that need to be reconciled in order for DG resources like GCPVS to be at parity with conventional generation. For successful mass adoption of GCPVS, new technologies must be developed that will allow the inverter to do more than just provide DC/AC conversion. Modern grid-interactive inverters will need to provide Volt/VAR control (power factor and voltage stabilization) and frequency regulation, enable storage, and utilize modern communications protocols, all at a reasonable cost. This new generation of inverters has been rightly termed "smart inverters". Future GCPVS designs will require inverters to monitor, react to, and adjust their output based on instantaneous feedback from the grid. The inverter will also be able to save and share data with the facility management system for trending and for predictive, preventative and corrective maintenance. This new breed of smart inverters will be able to log data such as available battery storage hours and capacity information, alarm on external events, and provide day-to-day power management information. Rethinking the role and capability of the inverter can foster the mass adoption of GCPVS and equally help to create and support a more reliable grid.
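To make the Volt/VAR capability mentioned above concrete, the sketch below implements a generic piecewise volt-var droop of the kind specified for smart inverters (an IEEE 1547-style shape); all breakpoints are illustrative assumptions, not values from this article.

```python
# Hedged sketch of a Volt/VAR droop curve for a smart inverter.
# Breakpoints (0.92 / 0.98 / 1.02 / 1.08 pu) are illustrative only.
def volt_var_setpoint(v_pu: float) -> float:
    """Reactive power command in per-unit of rated VAR capacity.

    Positive = injecting VARs (support low voltage),
    negative = absorbing VARs (curb high voltage).
    """
    if v_pu <= 0.92:
        return 1.0
    if v_pu < 0.98:                      # ramp from +1.0 down to 0
        return (0.98 - v_pu) / (0.98 - 0.92)
    if v_pu <= 1.02:                     # deadband around nominal voltage
        return 0.0
    if v_pu < 1.08:                      # ramp from 0 down to -1.0
        return -(v_pu - 1.02) / (1.08 - 1.02)
    return -1.0

for v in (0.90, 0.95, 1.00, 1.05, 1.10):
    print(f"V = {v:.2f} pu -> Q = {volt_var_setpoint(v):+.2f} pu")
```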
Data on processing of Ti-25Nb-25Zr β-titanium alloys via powder metallurgy route: Methodology, microstructure and mechanical properties

The data presented in this article are related to the research article entitled "Cyclic Shear behavior of conventional and harmonic structure-designed Ti-25Nb-25Zr β-titanium alloy: Back-stress hardening and twinning inhibition" (Dirras et al., 2017) [1]. The datasheet describes the methods used to fabricate two β-titanium alloys having a conventional microstructure and a so-called harmonic structure (HS) design via a powder metallurgy route, namely the spark plasma sintering (SPS) route. The data show the as-processed unconsolidated powder microstructures as well as the post-SPS ones. The data illustrate the mechanical response under cyclic shear loading of consolidated alloy specimens. The data show how the electron back-scattering diffraction (EBSD) method is used to clearly identify induced deformation features in the case of the conventional alloy.

The data are related to a new approach that uses a powder metallurgy route combined with severe plastic deformation to process bulk, dense materials with a specific bimodal-like design. The concept described in the datasheet can be used to process a 3D network of ultrafine grains enclosing coarse grains and can be applied to various metals and alloys. The data may be useful in comparing the mechanical behavior and properties under cyclic shear loading of heterogeneous (bimodal-like) microstructures obtained via conventional routes. The data also show how EBSD investigations are used to identify the nature of mechanical twins in a β-titanium alloy.

Data

β-titanium Ti-25Nb-25Zr alloys have been fabricated using the SPS route. Two microstructures were obtained: conventional (homogeneous) and so-called harmonic structure (obtained after ball milling of the same powder). Initial powder microstructures and X-ray data of the obtained compacts are presented. Stress-strain plots following simple shear cyclic tests are provided, and scanning electron microscopy (SEM) images of as-processed microstructures and post-mortem EBSD investigations following simple shear cyclic tests are shown.

Experimental design, materials and methods

As described in [1], the Plasma Rotating Electrode Process (PREP) was used to prepare the β-titanium Ti-25Nb-25Zr powders used for both the conventional (homogeneous) and the harmonic-designed structure; an additional ball-milling step was used for the latter. Figs. 1a and b show SEM images of the initial powders used for processing of the homogeneous and HS microstructures, respectively. After milling, the surface of the powder was severely deformed.
Controlling the amount of stored energy via adequate milling conditions allows for the design of the HS during sintering [2-5]. Further, to obtain the corresponding homogeneous and HS alloy compacts, the powders were sintered using a Dr Sinter (Japan) SPS apparatus. The sintering conditions are reported in [1]. Fig. 2 shows the X-ray diffraction (XRD) patterns of the compacted samples, obtained using Cu Kα radiation (λ = 0.1541 nm) for both alloys. The data show only peaks corresponding to a β-crystalline structure. Fig. 3 shows SEM images of the as-sintered microstructures of the conventional (Fig. 3a) and HS (Fig. 3b) compacts, respectively. The β-titanium Ti-25Nb-25Zr HS alloy displays a microstructure that consists of a 3D network of ultrafine-grained shells surrounding multi-crystalline cores. The mechanical properties were evaluated by simple shear cyclic tests performed on an MTS M20 testing machine equipped with a shearing device with a load capacity of 100 kN, using a constant strain rate of 10⁻³ s⁻¹. The sample geometry was 20 mm in diameter and 1 mm in thickness, with a 15 × 2 × 1 mm³ sheared volume. The shear amplitude is incremental (in steps of ε = ±1%). Fig. 4 compares the behavior of the conventional (blue dashed line) and HS (red solid line) specimens; a detailed description is given in [1]. Post-mortem microstructure investigations were carried out; only the case of the cyclically-deformed β-titanium homogeneous alloy, together with the corresponding EBSD data analysis, is presented here.
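To turn raw load-frame output into curves like those of Fig. 4, force and displacement must be normalized by the sheared-volume geometry given above. The sketch below uses the standard planar simple-shear conversions; the function names and example numbers are ours, not from [1].

```python
# Hedged sketch: converting force/displacement from the simple shear test
# into shear stress and strain, using the sheared-volume geometry stated
# in the text (15 x 2 x 1 mm^3). The formulas are the standard planar
# simple-shear ones; the example values are illustrative.
length_mm = 15.0      # sheared zone length
width_mm = 2.0        # sheared zone width (gauge dimension)
thickness_mm = 1.0    # specimen thickness

shear_area_mm2 = length_mm * thickness_mm   # plane on which the force acts

def shear_stress_MPa(force_N: float) -> float:
    """Shear stress tau = F / (L * t); N/mm^2 equals MPa."""
    return force_N / shear_area_mm2

def shear_strain(displacement_mm: float) -> float:
    """Engineering shear strain gamma = u / w for planar simple shear."""
    return displacement_mm / width_mm

# Example: a 6 kN load and 0.1 mm relative displacement
print(f"tau = {shear_stress_MPa(6000.0):.0f} MPa, gamma = {shear_strain(0.1):.3f}")
```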
Generalized Near Horizon Extreme Binary Black Hole Geometry

We present a new vacuum solution of Einstein's equations describing the near horizon region of two neutral, extreme (zero-temperature), co-rotating, non-identical Kerr black holes. The metric is stationary, asymptotically near horizon extremal Kerr (NHEK), and contains a localized massless strut along the symmetry axis between the black holes. In the deep infrared, it flows to two separate throats which we call "pierced-NHEK" geometries: each throat is NHEK pierced by a conical singularity. We find that in spite of the presence of the strut, the isometry group SL(2,R) × U(1) is restored for the pierced-NHEK geometries. We find the physical parameters and entropy.

I. INTRODUCTION

Rapidly rotating, (near-)extreme Kerr black holes (BHs) constitute a unique arena which offers both observational relevance and enhanced theoretical control. Several high-spin candidates (cf. [1-4]) have been observed, and such BHs could produce characteristic signatures for various current and future experiments, including gravitational-wave detectors such as LIGO/Virgo and optical observatories such as the recently triumphant [5] Event Horizon Telescope. Theoretically, (near-)extreme BHs are especially tractable since they develop an emergent conformal symmetry. More precisely, they admit a nondegenerate near-horizon geometry, the so-called near-horizon extreme Kerr (NHEK) geometry [6]. This geometry is interesting: every fixed polar angle slice of it can be thought of either as 2-dimensional anti-de Sitter space (AdS2) with a circle nontrivially fibered upon it or (equivalently) as a quotient of the so-called warped AdS3 spacetime. Consequently it enhances the isometry group of Kerr, R × U(1) (corresponding to stationarity and axisymmetry), to SL(2,R) × U(1). This motivated the Kerr/CFT conjecture [7], which hypothesizes that the Kerr BH is dual to a (1+1)-dimensional conformal field theory (CFT) living on the boundary of this near horizon geometry. This boundary can be thought of as the spacetime region in which the NHEK geometry is glued to the external, asymptotically flat, Kerr spacetime.

The NHEK geometry has a simpler cousin: the Robinson-Bertotti universe, or AdS2 × S2. This spacetime arises as an analogous near-horizon limit of maximally charged Reissner-Nordström BHs. This type of BH can be used to construct, remarkably simply, multi-BH configurations [8]. Those are static solutions of Einstein-Maxwell theory with an arbitrary number of maximally charged (all with the same sign), nonrotating BHs of any mass. The time-independence of these solutions is possible since the BHs' gravitational attraction and electric repulsion cancel each other precisely, in the full nonlinear theory, for arbitrary BH positions. A neat observation regarding these solutions was made in [9]. Consider a system of two such maximally charged BHs. When they are widely separated, there exist also well-separated near-horizon (approximately AdS2 × S2) throats surrounding each one of the BHs. When the BHs are close to each other (relative to a length scale defined by a characteristic mass), however, there exists a region around them which is approximately an AdS2 × S2 throat surrounding both horizons, and only when moving further towards either one of the horizons does one recover the two separate throats. This phenomenon was coined "AdS fragmentation" in [9]: the joint throat fragments into two smaller ones when moving deeper into the infrared.
This generalizes to an arbitrary number of throats: one "trunk" throat can fragment into several branches, which can then branch again, and so forth. This compelling picture depends strongly on the properties of the special system of choice. The fact that it can be embedded in a supersymmetric theory, as a solution which preserves some supersymmetry [10], guarantees this type of behavior. In this paper, we propose what is presumably the closest possible analogue of fragmentation in the case of maximally rotating, uncharged BHs. Since these are no longer supersymmetric and there is no known smooth stationary solution involving such BHs, we allow for a conical singularity between the BHs which balances the gravitational attraction and keeps the system stationary. We study a 1-parameter family of exact axisymmetric solutions describing two corotating extreme Kerr BHs of arbitrary masses which are held apart by a conical singularity with effective pressure, usually called a strut; as we rescale coordinates to zoom in on the near-horizon region, we also shorten the strut separating the BHs. In this way we construct the exact solution corresponding to the region where NHEK fragments into two NHEK-like throats which are held apart by the strut. We call these "NHEK2" geometries. The solution presented here generalizes [11], which studied a similar construction for the equal-mass case. These infrared near-horizon geometries, which the strut pierces on its way to the horizons, are analogues of NHEK which include a conical singularity at one of the poles, extending from the horizon all the way to the NHEK boundary. We verify that this does not ruin the symmetry structure: the "pierced-NHEK" geometry still has an SL(2,R) × U(1) isometry group. So while the full NHEK2 does not have SL(2,R) × U(1), it interpolates from a geometry that does have conformal symmetry in the ultraviolet to two throats that are also conformally symmetric in the infrared.

Introducing conical singularities has caveats which are important to stress. First, the stability, both classical and quantum mechanical, of these solutions is questionable. A second point is that the type of conical singularities we use here, the struts, are of excess angle type (rather than deficit angle); the effective stress-energy associated with such objects has negative energy density. Keeping these caveats in mind, we still hope that this construction may be useful in various contexts. First, such stationary BH binary solutions have recently been applied to study astrophysically motivated problems involving dynamical binaries (see for example [12] for the use of quasistationary, extremally charged solutions in a gravitational-wave application); even though the physics governing the dynamics of these systems is different, it was argued in [13] (see also references therein) that in some cases such solutions can be used as tools for modeling the astrophysical systems' observational signatures, e.g., gravitational lensing. And secondly, these solutions may give some insight in the holographic, Kerr/CFT context. In this regard, it is interesting to note a recent study of holography and thermodynamics with conical singularities in the bulk [14]. It should be possible to generalize our construction to an arbitrary number of BHs with arbitrary masses. The workhorses of this paper are the binary BH solutions first found in [15] and further studied, including their construction via various solution-generating techniques, in [16-21].
These exact solutions are stationary, axisymmetric, asymptotically flat solutions which describe two rotating BHs held apart by a strut along the symmetry axis. The BHs of these solutions can have arbitrary masses and spins and, in particular, can be either co- or counter-rotating. We are interested in the case in which the BHs are maximally corotating, with arbitrary masses. In particular, we start from the corotating solution described in [22], and for the convenience of the interested reader we describe it explicitly in the so-called Weyl coordinates in Appendix A. This coordinate choice serves best to describe classes of stationary and axisymmetric solutions of Einstein's theory of general relativity in vacuum.

The rest of this paper is organized as follows. We first construct the new generalized near horizon geometry of the stationary binary extreme-Kerr BH solution in Sec. II and analyze its physical properties. In particular, we show how it admits a localized strut along the symmetry axis between the black holes but is asymptotically NHEK. In Sec. III we zoom in further to the infrared of each throat and find the near-horizon geometries in which the strut pierces the horizons, extending from the horizon all the way to the NHEK boundary. We show that in spite of the strut, the pierced-NHEK geometries have an SL(2,R) × U(1) isometry group. Finally, we summarize the key results of the paper in Sec. IV.

II. GENERALIZED-NHEK2: GENERALIZED NEAR HORIZON GEOMETRY OF THE EXTREME BINARY KERR BLACK HOLE SOLUTION

In this section, we construct the generalized near horizon geometry of the extreme binary Kerr (generalized-NHEK2) black hole solution. Our starting point is the stationary solution to the Einstein equations in vacuum [22] that contains two extremal (zero-temperature) corotating black holes. For convenience and for fixing the notation, we reproduce the original results of [22] in Appendix A. We only consider the solutions characterized by positive values of the mass, which correspond to the parameter range 1/√2 < P ≤ 1. Note that for P = +1 the equal-mass case, treated in [11,21], is recovered; the extreme mass ratio limit is recovered for P → (√(1−p²))⁺ or P → (−p)⁻. As described in [22], there is another solution with positive mass that corresponds to −1 < P < −p; this solution belongs to a more problematic case containing a massless ring singularity outside the symmetry axis, which we will not consider here.

A. Near-horizon limiting procedure

In previous work [11] we developed the necessary tools to inspect the extreme corotating binary Kerr black hole solution; this section is nevertheless self-contained. We proceed to compute the near horizon geometry of the extremal nonidentical binary Kerr black hole solution, which we refer to as "generalized-NHEK2". The solution of extremal BBHs [22], reproduced in Appendix A, has a rather more compact representation in Weyl coordinates, so we perform the scaling computations in these coordinates. In this case, we find that the appropriate near-horizon limiting procedure for the extremal BBHs is to scale the coordinates with a parameter λ, taking λ → 0 and keeping (t̂, ρ̂, ẑ, φ̂) fixed. As a result of this procedure, we find the generalized (nonidentical mass) generalized-NHEK2 geometry, written in prolate spheroidal coordinates with notation valid for 1/√2 < P ≤ 1.
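For orientation, the near-horizon limiting procedure above generalizes the standard single-black-hole NHEK limit of [6]; the sketch below records that familiar case in our own hedged conventions (it is background, not an equation from this paper).

```latex
% Standard single-black-hole NHEK limit (Bardeen-Horowitz, Ref. [6]).
% Starting from extreme Kerr of mass M in Boyer-Lindquist coordinates,
\begin{equation}
  t = \frac{2M\,\hat t}{\lambda}, \qquad
  r = M\,(1 + \lambda \hat r), \qquad
  \phi = \hat\phi + \frac{\hat t}{\lambda},
\end{equation}
% and taking \lambda \to 0 at fixed (\hat t, \hat r, \theta, \hat\phi) yields
\begin{equation}
  ds^2 = 2 M^2 \Omega^2(\theta)\left[-\hat r^{\,2}\, d\hat t^{\,2}
  + \frac{d\hat r^{\,2}}{\hat r^{\,2}} + d\theta^2
  + \Lambda^2(\theta)\,\bigl(d\hat\phi + \hat r\, d\hat t\bigr)^2\right],
\end{equation}
% with \Omega^2(\theta) = (1 + \cos^2\theta)/2 and
% \Lambda(\theta) = 2\sin\theta/(1 + \cos^2\theta).
```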
B. Physical parameters

Let us now consider the physical parameters of the generalized-NHEK2 solution. As we did at the level of the geometry, the near-horizon limiting procedure can be applied to the original physical parameters found in [22] (also reviewed here in Appendix A). Applying this technique yields the expressions for the masses M1, M2 and angular momenta J1, J2 in the generalized-NHEK2 solution and the corresponding angular velocities, which satisfy at the same time the Smarr relation. It is worth noticing that the new solution contains objects that are in thermal equilibrium. The black hole entropy is, as usual, the area of the event horizon divided by 4.

C. Ergospheres

The generalized-NHEK2 spacetime that we constructed contains regions where the vector ∂t becomes null. We refer to the boundary of such a region as the ergosphere, since these regions are inherited from the presence of such regions in the original stationary extreme BBH geometries. For NHEK2 these are defined by regions where g_tt = 0 and give rise to a set of disconnected regions, as shown in Fig. 1. Different values of the parameter P are bounded by the extreme mass ratio solution when P = 1/√2 and the identical mass solution when P = 1. The horizons of the black holes in generalized-NHEK2 are points in the (ρ̂, ẑ)-plane and have finite horizon areas. There is a self-similar behavior close to each black hole that resembles the ergosphere diagrams of isolated extremal Kerr black holes.

FIG. 1. Ergosphere regions for 1/√2 < P ≤ 1, where P = 1 is the equal mass, identical black hole case and P = 1/√2 ≈ 0.7071 the extreme mass ratio limit.

D. Asymptotic behavior

In the asymptotic limit, for ρ̂ = r sin θ, ẑ = r cos θ and r → ∞, the generalized-NHEK2 geometry in Sec. II A has a limiting metric that corresponds to the NHEK metric in Weyl coordinates (4). In other words, the generalized-NHEK2 solution is asymptotically NHEK. It is worthwhile to mention at this point that in [23,24] it was shown that in the case of 4D Einstein gravity the NHEK geometry is the unique (up to diffeomorphisms) regular stationary and axisymmetric solution asymptotic to NHEK with a smooth horizon. The NHEK2 geometry that we unveil is asymptotically NHEK but is not diffeomorphic to NHEK; this is not in contradiction with the results of [23,24], since the NHEK2 geometry is not smooth on the strut which keeps the BHs apart.

E. Conical singularity

As we have shown in the previous subsection, the generalized-NHEK2 is exactly asymptotically NHEK, without any conical defects. However, as in the original stationary, extremal BBH geometry, there is in the bulk a conical singularity on the ρ̂ = 0 axis, localized between the two black holes. In Weyl coordinates the conical singularities can be easily computed. Our computation for the generalized-NHEK2 metric shows that there is a nonremovable conical excess between the two horizons. Outside this localized conical singularity our solutions are smooth.

III. PIERCED-NHEK: NEAR HORIZON LIMIT AT FINITE SEPARATION

In this section it is shown that there exists a well-defined near-horizon limit of the stationary binary extreme Kerr solution [21,22] even when the BHs, which are held apart by a conical singularity, are separated by a finite distance. The near-horizon region is composed of two disconnected NHEK-like geometries, one near each of the BHs. Each such geometry can be thought of as "NHEK pierced by a cosmic string," the strength of which is determined by the distance between the BHs. The cosmic string/conical singularity balances the gravitational attraction of the companion BH, thereby enabling stationarity.
The cosmic string extends all the way from the horizon to infinity in this geometry, which we call the "pierced-NHEK." Our starting point is the solution given in [21] (which corresponds to the identical-mass binary black hole metric in [22] for P = 1 and which, for convenience, we review in Appendix A). As the most general solution is quite involved, we start by fixing the parameters at a specific, convenient value, which is enough to convey our point regarding the existence of a nonsingular near-horizon geometry. It would be nice to explicitly write down the full most general expression for an arbitrary value of P, but for the sake of simplicity we only focus on the P = 1 case. Starting with the solution presented in [21], with coordinates denoted by {ρ, z, t, φ}, we choose to focus on the BH located at z = κ and apply a near-horizon coordinate transformation.

IV. DISCUSSION

The aim of this paper was to unveil and analyze the generalized-NHEK2 geometry. This geometry is obtained via a limiting procedure that we developed: a zoom-in on the near-horizon region of a 1-parameter family of corotating, double-extreme Kerr solutions of arbitrary masses, where the two BHs are parametrically close to each other and are held apart by a conical singularity (strut). The distance between the BHs is scaled to zero at the same rate as the zoom-in on the near-horizon region. This gives a relatively simple solution, which is asymptotically NHEK and in the infrared flows to two separate throats which we call "pierced-NHEK" geometries: each of them is, approximately (when zooming further towards one of the horizons), NHEK pierced by a conical singularity on the symmetry axis, which runs from one of the poles up to the boundary. We find that in the deep infrared, where the geometry is approximately pierced-NHEK, the presence of the strut does not break the isometry group SL(2,R) × U(1); it is restored there. In Fig. 2, we illustrate the structure of the generalized-NHEK2 geometry. The generalized-NHEK2 solution asymptotes to NHEK, yet it is not diffeomorphic to NHEK. This is not in contradiction with the discussions in [25,26], since in those papers smoothness is assumed, while here we allow for a conical singularity which balances the gravitational attraction between the BHs. This paper generalizes the construction studied recently in [11] for the equal-mass case.

Appendix A defines prolate spheroidal coordinates (x, y) and gives the metric and its functions explicitly, with the parameters constrained so that p² + q² = 1 and further conditions selecting the corotating solution of interest in this paper. Since the asymptotic metric does not contain a conical singularity, the masses M1, M2 and angular momenta J1, J2 of the black holes can be easily calculated; employing the Smarr relation, the expressions for the angular velocities follow, and the entropy of each black hole can be calculated (Eq. (A9)).
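Since the Smarr relation and the entropy appear above only as unnumbered expressions, the following hedged sketch records the standard extremal form they presumably reduce to per black hole; the isolated extremal Kerr check values are textbook results, not quotations from this paper.

```latex
% Zero-temperature limit of the Smarr relation, per black hole, in
% geometric units G = c = \hbar = 1 (our guess at the reduced form;
% not copied from the paper):
\begin{equation}
  M_i = 2\,\Omega_i J_i , \qquad S_i = \frac{A_i}{4} , \qquad i = 1, 2 .
\end{equation}
% Consistency check: an isolated extremal Kerr hole has J = M^2,
% \Omega_H = 1/(2M) and A = 8\pi M^2, so M = 2\,\Omega_H J and S = 2\pi J.
```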
Provenance and Sedimentary Context of Clay Mineralogy in an Evolving Forearc Basin, Upper Cretaceous-Paleogene and Eocene Mudstones, San Joaquin Valley, California

Mudstone samples from the Moreno (Upper Cretaceous-Paleocene) and Kreyenhagen (Eocene) formations are analysed using X-ray diffraction (XRD) and X-ray fluorescence (XRF) to determine their mineralogy. Smectite (Reichweite R0) is the predominant phyllosilicate present, making up 48% to 71.7% of bulk rock mineralogy (excluding carbonate-cemented and highly biosiliceous samples) and 70% to 98% of the <2 µm clay fraction. Opal CT and, to a lesser extent, cristobalite concentrations cause the main deviations from smectite dominance. Opal A is common only in the Upper Kreyenhagen. In the <2 µm fraction, the Moreno Fm is significantly more smectite-rich than the Kreyenhagen Fm. Smectite in the Moreno Fm was derived from the alteration of volcaniclastic debris from contemporaneous rhyolitic-dacitic magmatic arc volcanism; no tuff is preserved. Smectite in the Kreyenhagen Fm was derived from intense sub-tropical weathering of granitoid-dioritic terrane during the hyperthermal period in the early to mid-Eocene; derivation from local volcanism is unlikely. All samples had chemical indices of alteration (CIA) indicative of intense weathering of source terrane. Ferriferous enrichment and the occurrence of locally common kaolinite are contributory evidence for the intensity of weathering. The low concentration (max. 7.5%) of clinoptilolite in the Lower Kreyenhagen is possibly indicative of more open marine conditions than in the Upper Kreyenhagen. There is no evidence of volumetrically significant silicate diagenesis; the main diagenetic mineralisation is restricted to low-temperature silica phase transitions.

Introduction

Recently, the Moreno and Kreyenhagen Fms became a focus of global geological interest because they host the two largest and best-exposed outcrops of giant sand injection complexes, the Panoche Giant Injection Complex (PGIC) and the Tumey Giant Injection Complex (TGIC), ~400 km² and >200 km², respectively [1-4] (Figure 1). As part of understanding the background geological setting of the mudstone-dominated host strata for the PGIC and TGIC, samples were collected and analysed from formal lithostratigraphic units in the Moreno Fm and informal units (Upper and Lower) in the Kreyenhagen Fm.
Given the widespread occurrence and large outcrops of the Moreno and Kreyenhagen formations (henceforth Moreno Fm and Kreyenhagen Fm) in the San Joaquin Valley and their significance to petroleum systems, the paucity of mineralogical data in the public domain is surprising. According to Jay [5], the Kreyenhagen is "virtually unmentioned in the resource shale literature" despite producing significant volumes of hydrocarbons continuously since 1956.

The smectitic dominance of the phyllosilicate fraction in the Kreyenhagen Fm is known to the southeast of our study area, where 51 mudstone samples from boreholes were investigated [6], and a single outcrop sample was investigated in later studies [7,8], approximately 220 km and 55 km from our study area, respectively. Excellent outcrop [2,3] allows continuous sampling through the stratigraphy of both formations and enables the geological evolution of the San Joaquin Basin to be evaluated from the perspective of the fine-grained sedimentary record. Specific attention is given to the origin of smectite in the context of volcanic activity and quiescence in the evolving Sierra Nevadan magmatic arc. In the present study, a broader evaluation of the lithostratigraphy and mineralogy of the San Joaquin Basin was performed. The provenance, weathering, and composition of source terrane were studied by X-ray diffraction (XRD) and X-ray fluorescence (XRF). This approach is a useful tool for the investigation of fine-grained sedimentary rock [9,10].

Geological Background

During the Upper Cretaceous, large sediment loads were deposited in the San Joaquin Basin, resulting from the rapid erosion of the Sierra Nevada arc. The basin gradually extended due to migration of magmatic activity to the east combined with the formation of the trench to the west [11]. Most of the Moreno Fm examined in this study comprises predominantly fine-grained Maastrichtian slope deposits (Figure 2), but the uppermost unit, the Dos Palos Mbr, was deposited in shallower water on the upper slope. In the outcrop area, a regional unconformity truncates the top of the Moreno Fm (Figure 2). In the Eocene, rapid deformation of the basin occurred, and periods of uplift and subsidence ensued. The latter caused a regional marine transgression, associated with folding and thrusting at the basin margins. In the area of the TGIC outcrop, a regionally developed unconformity eroded deeply (in some cases >50 m) into the TGIC, locally reworking shallow parts of the injection complex (Figure 2). The Eocene-Oligocene boundary is not preserved. Further tectonic movements occurred in the Oligocene, creating normal and thrust faulting and anticlinal folding at the basin peripheries [12]. In the San Joaquin Basin, the Kreyenhagen Fm records approximately 16 million years of slope and basin sedimentation from Middle Eocene to Early Oligocene in an extension of the Sierra Nevada forearc that was created during the subduction of the Pacific plate beneath the North American plate.
The formation is predominantly mudstone (also known as the Kreyenhagen Shale), up to 3000 m thick, which is present at outcrop and in boreholes but includes turbiditic and transgressive shallow marine sandstone. Much of the mudstone is biosiliceous and, in some areas, forms important hydrocarbon source rocks [7,8]. It contains sequences where opal CT is common.

Materials

Mudstone samples were collected from transects through the outcrop of the Upper Cretaceous to Lower Paleocene Moreno Formation (1) and the Eocene Kreyenhagen Formation (3) (Figure 3). Moreno Fm samples were exclusively from the Right-Angle Canyon locality (RAC, Figure 3a), the geology of which was most recently described in Grippa et al. [13].

Methods

All samples were analysed using X-ray fluorescence (XRF) and X-ray diffraction (XRD) to determine the chemical and mineralogical compositions of whole-rock samples (XRF and XRD) and clay fractions (XRD). Chemical analyses were carried out for major elements according to the procedure of Franzini et al. [14]. The sample preparation technique and the fusion procedure were those of Claisse [15]. A mixture containing 0.210 g of sample and 7.000 g of flux (50% lithium tetraborate, Li2B4O7, and 50% lithium metaborate, LiBO2), corresponding to a 1:30 sample/borate dilution, was carefully homogenized in a 95Pt/5Au crucible using a Claisse Fluxer-Bis!® automatic apparatus (Malvern Panalytical, UK). Ammonium iodide anhydrous powder was added as a non-wetting agent.
The mixture was fused at 1000 °C for 20 min while continuously stirring the melt. When the sample was completely dissolved and any reaction had ceased, the melt was poured into a 95Pt/5Au/2Rh plate and cooled slowly. After cooling, the melt formed a glass disc (ø = 32 mm), which was directly analysed by an ARL 9400 XP+ sequential X-ray spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) [16]. For the whole-rock XRD investigation, samples were gently crushed and then ground using a vibratory agate disc mill comminuting by friction. The particle size obtained was <5 µm. To separate the clay fraction (<2 µm), the whole sample was gently crushed (not ground) in an agate mortar, disaggregated in distilled water overnight, and then separated by settling in distilled water according to Stokes' law. Clay suspensions for quantitative analysis were saturated with Mg2+ cations using 1 N MgCl2 solution. Oriented mounts were prepared by settling clay suspensions (concentration of 5 mg/cm² [17]) on glass slides. Each specimen was analysed in an air-dried state, glycolated at 60 °C for 8 h, and heated at 375 °C for 1 h [18]. XRD analyses were performed on whole-rock samples and clay fractions using a Rigaku Rint Miniflex powder diffractometer (Rigaku, Tokyo, Japan) with Cu-Kα radiation, a sample spinner, and a Cu anode at a voltage of 30 kV and a current of 15 mA. Mineralogical analyses of bulk samples were carried out on random mounts using side loading of bulk specimens to guarantee a satisfactorily reproducible density and random orientation [19]. Data were collected over a 2-70° range of 2θ with a 0.02° step and a speed of 5 s/step. Data from the clay fraction were collected over the 2-33° range of 2θ with a step of 0.02° and a speed of 5 s/step. The content of clay minerals in the <2 µm fractions was estimated from the peak areas on both glycolated and heated oriented mounts [20]. To distinguish between smectite and illite/smectite mixed-layer clay, the XRD patterns of glycolated clays were used as proposed by Moore and Reynolds [18]. The XRD patterns were processed using the WINFIT computer program [21]. The mineralogical composition was determined in two steps: (i) by XRD analysis using a Reference Intensity Ratio (RIR) method with quartz as an internal standard [20] and (ii) by combining XRD and XRF data using the vbAffina program (Microsoft Visual Basic 6.0) [22,23]. The vbAffina program requires as input data the major element composition of the bulk sample (SiO2, Al2O3, Fe2O3, MgO, CaO, Na2O, K2O, and LOI) and the XRD mineralogical data. All these data were processed using a least-squares procedure that minimizes the differences between the chemical compositions calculated from the XRD-determined phase percentages introduced into vbAffina and those determined using XRF. Stoichiometric compositions of quartz, calcite, dolomite, Na-plagioclase (albite), orthoclase, clinoptilolite, gypsum, opal, cristobalite, and kaolinite were used with the vbAffina program. Illite and smectite compositions were selected from a vbAffina database containing the compositions of these minerals.

Bulk-Rock Mineralogy

Whole-rock analyses show that the samples are generally composed of quartz, feldspar (K-feldspar and plagioclase), cristobalite, and phyllosilicates, with amorphous material in some samples probably representing opal A (Figure 4). With one exception (RAC3), in which 58% of the bulk mineralogy is opal CT, samples from the Moreno Fm are dominated by phyllosilicates, and specifically smectite.
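The settling step in the Methods lends itself to a quick back-of-envelope check. The Python sketch below applies Stokes' law to a 2 µm particle; grain density, water viscosity, and the withdrawal depth are assumptions for illustration, not values quoted in this article.

```python
# Hedged sketch: Stokes' law settling used when separating the <2 um clay
# fraction by sedimentation in distilled water. All parameter values are
# illustrative assumptions.
g = 9.81                 # gravitational acceleration, m/s^2
rho_particle = 2650.0    # assumed clay/quartz grain density, kg/m^3
rho_water = 1000.0       # water density, kg/m^3
mu = 1.0e-3              # dynamic viscosity of water at ~20 C, Pa*s
d = 2.0e-6               # equivalent spherical diameter, m (2 um cut-off)
depth = 0.10             # assumed pipette withdrawal depth, m

# Terminal settling velocity of a small sphere (laminar regime, Re << 1)
v = (rho_particle - rho_water) * g * d**2 / (18.0 * mu)
t_hours = depth / v / 3600.0
print(f"settling velocity: {v*1e6:.2f} um/s; time to clear 10 cm: {t_hours:.1f} h")
```

With these assumptions, particles coarser than 2 µm clear the upper 10 cm of the suspension in roughly 8 h, which is why overnight disaggregation and settling schedules of this kind are practical.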
PGIC samples contain the highest amounts of quartz and cristobalite (Table 1), which accords with the chemical analyses that reveal a high proportion of SiO2 (Table 2). Samples from the Lower Kreyenhagen are characterised by the presence of clinoptilolite (Figure 4b), which, although not present throughout, comprises 7.5% of the bulk rock volume in sample EO4-08 (Table 1). In the Upper Kreyenhagen, clinoptilolite is much less common (Figure 4a) and absent in 8 of the 13 samples. Clinoptilolite is undetected in the Moreno Fm samples (Figure 4c). Opal CT is identified in several samples, and its content varies widely from a few percent up to ~70% (Table 1). It is noteworthy that opal CT is present in varied abundance in both stratigraphic successions but absent in more than half of the samples. In the Kreyenhagen Fm, amorphous material attributed to opal A is present in the upper section of the Upper Kreyenhagen and comprises 28% and 24% of the youngest, probably least deeply buried samples (GKR-6 and TH-01, Table 1). It occurs in only one other Kreyenhagen sample (EO4-12) and is absent in the Moreno Fm.

Table 1. Whole-rock mineralogy (wt %) for the Kreyenhagen and Moreno formations estimated from X-ray diffraction (XRD) analysis. Qtz = quartz; Op CT = opal CT; Opal A? = amorphous material, probably opal A; Crist = cristobalite; K-feld = K-feldspar; Pl = plagioclase; Cal = calcite; Dol = dolomite; Clin = clinoptilolite; Gy = gypsum; Sm = smectite; Ill = illite; Kao = kaolinite; Σ Phy = total phyllosilicates. Sample locations are shown in Figure 3.

Potassium feldspar is pervasive in the Moreno Fm and more common than in the Kreyenhagen Fm, where it is occasionally absent (Table 1). Plagioclase is ubiquitous and always more abundant than potassium feldspar in the Moreno Fm, and generally more abundant than in the Kreyenhagen Fm (Table 1). Calcite and dolomite are absent in most samples and, where present, form diagenetic cement.
Where calcite and dolomite occur, chemical data show high concentrations of CaO and MgO, respectively (Table 2). Gypsum is present in one sample only (GKR-2), at 1%, consistent with the percentage of CaO (Table 2). Phyllosilicates in bulk samples from the Moreno Fm and Kreyenhagen Fm are predominantly represented by smectite (probably montmorillonite) or mixed-layer illite/smectite R0 with low illite content (<10%). Illite and kaolinite are present in small amounts. Smectite content is in the range ~50% to 70% for most samples. Exceptions with much lower smectite content (12% to 28%, Table 1) are enriched in opal CT and, in one case (EO4-04), dolomite cement. Illite is pervasive in low proportions ranging from 5% to 8% (Table 1), while kaolinite is significantly less common and absent in nine samples (Table 1).

Bulk Rock Chemistry

The chemical data have limited major element variability (Table 2). In the Lower Kreyenhagen, some exceptional CaO and MgO concentrations are caused by the presence of calcite and dolomite. SiO2 variability is associated with the concentration of biosiliceous opal. Na2O and K2O are present in low concentrations (1-2%); however, where clinoptilolite and plagioclase, and clinoptilolite and K-feldspar occur, the concentrations of Na2O and K2O are higher, respectively. Excluding samples with carbonate minerals, Na2O, K2O, and CaO concentrations are lower than their average concentrations in the Upper Continental Crust [24], which is indicative of significant leaching in the source terrane. Further evidence of a leached source terrane is recorded by TiO2 concentrations that average between 0.68% and 0.85%, slightly higher than average values for the Upper Continental Crust. Al2O3 content has little variability, with average contents of 12% and 16% in the Lower Kreyenhagen and Moreno formations, respectively. In accord with the mineralogical data (Table 1), the highest Al2O3 concentrations occur where smectite and kaolinite predominate. In all samples, Al2O3 concentration is comparable with the average values of the Upper Continental Crust [24]. As expected from the high chemical index of alteration (CIA) values (Table 2), sample compositions are close to the A vertex and the smectite compositional field (Figure 5). The combined presence of K-feldspar and illite shifts the compositional field approximately 10% toward the A-K axis. The plots exclude all carbonate-rich samples.

Figure 5. The A-CN-K plot of samples from Table 2, excluding those enriched in carbonate.
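Because CIA values are cited throughout the chemistry discussion, a short sketch of the calculation may be useful. It implements the standard molar Nesbitt and Young formula from XRF oxide data; the example composition is illustrative, not a sample from Table 2.

```python
# Hedged sketch of the chemical index of alteration (CIA):
# CIA = molar Al2O3 / (Al2O3 + CaO* + Na2O + K2O) x 100, where CaO* is
# the CaO of the silicate fraction (carbonate-cemented samples excluded,
# as in this study). The input wt% values are illustrative.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_percent: dict) -> float:
    moles = {ox: wt_percent[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return 100.0 * moles["Al2O3"] / (
        moles["Al2O3"] + moles["CaO"] + moles["Na2O"] + moles["K2O"])

# Illustrative smectite-rich mudstone composition (not from Table 2)
example = {"Al2O3": 16.0, "CaO": 0.8, "Na2O": 1.2, "K2O": 1.5}
print(f"CIA = {cia(example):.1f}")   # ~76, in the strongly leached range
```

Values above roughly 75-80 are conventionally read as intense weathering of the source terrane, which is the sense in which the CIA means reported below should be understood.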
Mineralogy of the <2 µm Clay Fraction

The clay fraction (<2 µm) comprises mainly smectitic clay, with small amounts of illite and kaolinite (Figure 6). Smectite content ranges from 92% to 98% (mean 95.3%) in the Moreno Fm and from 70% to 95% (mean 88.9%) in the Kreyenhagen Fm (Table 3). In the Kreyenhagen Fm, the Upper and Lower Kreyenhagen have averages of 88.9% and 86.1%, respectively; illite is present in all samples, ranging from 1% to 11% (Figure 6a,b). In the Moreno Fm, illite is uncommon, ranging from 1% to 8% (mean 1.9%), and slightly more common in the Kreyenhagen Fm, ranging from 1% to 11% (mean 4.5%), with 3% and 5.8% mean illite in the Upper and Lower Kreyenhagen, respectively.

In the Moreno Fm, kaolinite is 4% or less of the <2 µm fraction in samples from the Tierra Loma and Marca Mbrs (Table 3), while in the overlying Dos Palos Mbr it constitutes 5% or more of the <2 µm fraction (compare RAC2 and RAC8, Figure 6c). Kaolinite's cumulative mean is 3%, with a standard deviation of 2.65. In the Kreyenhagen Fm, the kaolinite concentration varies significantly, with nine samples at 4% or less and eight samples at 10% or more (Table 3). The mean concentration of kaolinite for the entire Kreyenhagen Fm and for the Upper and Lower Kreyenhagen individually is 7.9%. The standard deviations for the Upper and Lower Kreyenhagen are significantly different, 6.63 and 9.68, respectively, and much greater than in the Moreno Fm.

In the Kreyenhagen Fm (Table 3), the clusters of high kaolinite content occur in an ~40 m thick interval at the base of the sampled section (samples EO4-01 to -04), and in an ~25 m interval directly below (EO4-09) and above (EO4-10 and -11) the transition from the Lower to the Upper Kreyenhagen. A less pronounced kaolinite enrichment occurs in an ~60 m interval. Samples with either low or no kaolinite content typically contain significant amounts of opal CT; for example, EO4-06, EO-08, GKR1, GKR4, and RAC3 (Table 1).
Opaline Phases

The scanning electron microscopy (SEM) analysis reveals micro-textural variations that confirm the presence of opal A and differentiate it from opal CT. In the Upper Kreyenhagen, broken diatom tests exhibit pristine, box-work shell micro-structure in a groundmass of comminuted biosiliceous debris (Figure 7). During sample preparation for XRD analysis, the opal A is assumed to disintegrate into finer-grained particles that are X-ray amorphous. By contrast, in the Moreno Fm there is no preservation of shell tests or their micro-texture; diatom morphology is sometimes preserved as crystalline fills of their internal geometry (Figure 8). XRD analyses of the clay fractions show a pattern that is much closer to that of low tridymite than of low cristobalite, with strong reflections at 4.33 Å and 4.10 Å and weak, broad reflections at 2.49 Å and 2.31 Å.

Origin of Smectite

Mudstones of the Moreno and Kreyenhagen fms are dominated by smectite. Except where opal CT or dolomite occur, it is three to more than six times as abundant as any other mineral present (Table 1). During the Upper Cretaceous, deposition of the Moreno Fm coincided with active volcanism in the palaeo-Sierran magmatic arc [33] and erosion of substantial volumes of clastic detritus into a forearc basin [34], the site of the present-day San Joaquin Valley (Figure 1). Alteration of volcaniclastics of rhyolitic or dacitic composition to form smectite is inferred and supported by the high average SiO2 content (66.8%, Table 2).
This accords with the occurrence of quartz, cristobalite, and opal CT (Table 1). No evidence exists for direct deposition from airborne volcaniclastics, and tuff is unrecognized in our study area. Despite the prevalence of smectite (Tables 1 and 3), high CIA values (mean 84.44) [25] persist and are indicative of significant leaching of the source terrane. The widespread contemporary volcanic activity makes alteration of volcaniclastics the likely primary source of smectite; however, prior to marine deposition, we suggest that smectite was likely retained and formed in pedogenic settings in which it was stable [28], but where further leaching continued to modify the bulk chemistry of the detritus. The low standard deviation of CIA in the Moreno Fm is a measure of the homogeneity of the bulk chemistry (Table 2).

Contemporary volcanism was absent in the palaeo-Sierra Nevada during deposition of the Kreyenhagen Fm (Eocene), although it did occur further east on the "Nevadaplano" [35]. As with the Moreno Fm, the absence of tuff makes direct deposition from volcaniclastics unlikely, and weathering processes and soil formation along the western margin of the Sierra Nevada are more plausible origins for most of the smectite. High CIA values prevail, averaging 84.33 and 78.14 in the Upper and Lower Kreyenhagen, respectively. These are indicative of a moderately high level of leaching of the source terrane that coincides with the Paleocene-Eocene thermal maximum followed by almost 10 Myr of the Eocene hyperthermal [36]. Red soils/paleosols with locally abundant smectite developed along the western margin of the North American continent, recording a trend of tropical to sub-tropical climate from Baja California [37] to Oregon [38]. The Kreyenhagen Fm is typically deficient in alkali and alkaline earth cations and is notably ferriferous; the Upper and Lower Kreyenhagen have 4.44% and 3.97% mean Fe2O3, respectively, relative to 2.56% Fe2O3 in the Moreno Fm (Table 2). Local concentrations of kaolinite (10-24%, Table 3) record erosion of more deeply weathered contemporary terrane. Iron oxide stain along cleavage planes in feldspar in Kreyenhagen sandstone is evidence of the erosion of deeply weathered source terrane [4]. Higher concentrations of Fe2O3 are consistent with the generation of smectite by weathering of the granitic or dioritic-granodioritic substrate and probable soil formation, rather than by alteration of volcaniclastic debris.

Opal CT

Textural differences between opal CT from the Moreno Fm (Figure 8) and opal A from the Upper Kreyenhagen (Figure 7) show how opal A dissolution occurs by the disappearance of the intricate, highly micro-porous diatom fragments and growth of coarser individual crystals of opal CT that sometimes preserve diatom test morphology but none of the original micro-porous structure. In a micro-textural context, the transition of opal A to opal CT is demonstrably not a solid-state transition. Growth of opal CT is manifest as an increase in bulk density in the sedimentary rock in which it occurs and is accompanied by a visible change from a substantial proportion of sub-µm micropores in diatomaceous fragments (Figure 7b) to fewer but larger (~0.5 µm to 2.5 µm) pores (Figure 8d). Although we have not attempted a quantitative evaluation of this change in micro-texture, it is apparent that the pore-size distribution in opal CT is skewed to larger pore sizes than in opal A, implicitly enhancing pore connectivity.
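For readers wishing to relate the reported d-spacings to diffractometer angles, the sketch below applies Bragg's law with the Cu-Kα wavelength used in this study; the peak list is the tridymite-like set quoted above, and everything else is illustrative.

```python
import math

# Hedged sketch: interconverting d-spacings and 2-theta positions via
# Bragg's law, n*lambda = 2*d*sin(theta). The wavelength is the standard
# Cu K-alpha value; the article reports d-spacings directly.
WAVELENGTH_A = 1.5406  # Cu K-alpha, angstroms

def d_spacing(two_theta_deg: float, n: int = 1) -> float:
    """d-spacing (angstroms) from a 2-theta peak position (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH_A / (2.0 * math.sin(theta))

def two_theta(d_angstrom: float, n: int = 1) -> float:
    """2-theta position (degrees) for a given d-spacing (angstroms)."""
    return 2.0 * math.degrees(math.asin(n * WAVELENGTH_A / (2.0 * d_angstrom)))

# The tridymite-like reflections reported in the text
for d in (4.33, 4.10, 2.49, 2.31):
    print(f"d = {d:.2f} A  ->  2-theta = {two_theta(d):.2f} deg")
```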
It is worth noting that the XRD patterns of the clay fractions are much closer to that of low tridymite than of low cristobalite, with strong reflections at 4.33 Å and 4.10 Å and weak, broad reflections at 2.49 Å and 2.31 Å. In contrast, low cristobalite has an XRD pattern with a very strong reflection at 4.05 Å and medium-strong reflections at 3.14 Å, 2.84 Å, and 2.48 Å [39]. Despite these differences, most authors continue to identify opal CT as a precursor of cristobalite rather than tridymite, even though recent [40] and older work [41] presented spectroscopic evidence in support of opal CT as a precursor of tridymite rather than cristobalite.

Clinoptilolite

The global occurrence of clinoptilolite in sedimentary rocks is concentrated in strata of Upper Cretaceous to Eocene age [42]. When present in the Lower Kreyenhagen (seven of ten samples), clinoptilolite has a mean abundance of 3.63%, whereas it is mainly absent from the Upper Kreyenhagen. Clinoptilolite was previously known in the Moreno Fm and associated with early diagenetic alteration of smectite [43], but it is undetected in this study. The general relationship of co-occurrence of clinoptilolite with opal CT and smectite [42,44] is not present in five of the seven clinoptilolite-bearing samples; smectite is abundant, but opal CT, opal A, and cristobalite are absent (Table 1). Clinoptilolite forms in normal-salinity marine conditions in which opal A or CT are freely available and magnesium concentration is low, conditions typical of open oceans rather than of settings where ocean circulation is restricted. In this context, it could be that the Lower Kreyenhagen was deposited in more open marine conditions than the Upper Kreyenhagen. An alternative origin for clinoptilolite is from igneous rocks, often as alteration products of ignimbrites and tuffs, and as vesicular crystals [45-47]. This may be a potential contributory source for clinoptilolite in the Lower Kreyenhagen. In common with the origin of smectite, the lack of adjacent volcanic sources during the Eocene diminishes the likelihood of significant volcanic provenance.

Sedimentology and Diagenesis

The content of biosiliceous silica relative to phyllosilicates, and specifically smectite, is the main sedimentological variation present: in the Moreno Fm, the Marca Mbr is a thick, pale grey, regionally developed biosiliceous mudstone [2,13]; in the Kreyenhagen Fm, the Lower and Upper Kreyenhagen are differentiated by their clay mineral and biosiliceous content, respectively [3]. Deposition of the Dos Palos Mbr records shallowing upward, and from the medial part of the Marca Mbr into the Dos Palos Mbr the kaolinite content increases significantly, averaging 4.6% compared with 1% in the underlying mudstone (Figure 3a, Table 3). In the uppermost four samples (RAC-6, -7, -8, and -9), Fe2O3 is enriched along with alkali and alkaline earth elements, while SiO2 is less concentrated (Table 2). In the absence of any silicate diagenesis, the increase in kaolinite and Fe2O3 records increased terrestrial input, with other chemical variations responding to a slight gradual coarsening and increased feldspar content. An independent evaluation of diagenetic grade in the Moreno Fm estimated the thermal maximum to be <50 °C [48], and we have no evidence in this study to contradict that. Localized diagenetic calcite and dolomite cement and opaline silica transformations are identified, and gypsum is present in the Kreyenhagen Fm [4].
High silica content and a deficiency in alkali and alkaline earth cations are characteristic (Table 2) and are attributed to derivation of the clastic material from siliceous magma. The Lower Kreyenhagen is significantly enriched in alkali and alkaline earth elements and depleted in Si4+ relative to the Upper Kreyenhagen and the Moreno Fm. When the four samples with diagenetic carbonate cement from the Lower Kreyenhagen are excluded from the statistical analysis, Mg2+ and Ca2+ remain significantly higher than in Upper Kreyenhagen and Moreno samples, reinforcing the significance of the chemical difference of the Lower Kreyenhagen.

Smectite and Hydrocarbon Generation

Unsurprisingly, given the known hydrocarbon reserves in the area [5,49], most of the very few published papers on the clay mineralogy of Paleogene strata in the San Joaquin Basin are associated with hydrocarbons. Previously, smectite in the Kreyenhagen Fm was believed to promote oil expulsion efficiency and was investigated using hydrous pyrolysis to compare natural and thermal maturation of samples [50]. Using the same samples, Lewan et al. [7] concluded that oil expulsion efficiency from smectitic mudstone was reduced by 88% compared to that of mudstone impregnated with kerogen. The reduction in expulsion efficiency was attributed to kerogen in the interlayer region of the smectite structure being converted to bitumen, which on heating converted to pyrobitumen through a cross-linking reaction. This actively inhibits oil generation and expulsion efficiency by changing the pore system from water-wet to bitumen-wet. Again using the same samples, this result was confirmed by Clauer et al. [8], who monitored the changes in chemistry, mineralogy, and K-Ar isotope ratios. Mineralogical changes were monitored by XRD and consisted of recording the inhibition of swelling of smectite layers and the promotion of illite layers following impregnation of the pore system and the interlayer region of the smectite structure by pyrobitumen after heating to temperatures above 365 °C for 72 h. Data from our <2 µm samples show that smectite in Kreyenhagen samples (Figure 6b,c) is dominated by randomly interstratified mixed-layer illite-smectite (I/S) in which the S component exceeds ~85% in a Reichweite R0 arrangement. Samples show evidence of inhibited swelling after treatment with ethylene glycol, a behaviour attributed to the <2 µm fraction being Mg-saturated [51]. When heated at 375 °C for 1 h, the material produces a very broad 10 Å reflection that is asymmetric toward the high-angle side. This XRD characteristic is caused by the difficulty of removing interlayer water associated with the interlayer Mg2+ cation [51]. Thus, inhibition of swelling in smectite need not be associated with the adsorption of pyrobitumen in the interlayer space, although our data neither confirm nor refute the concept that formation of a pyrobitumen-smectite complex in the interlayer space [7,8] may inhibit oil generation. Further investigation is required to resolve this issue.

Conclusions

Evolution of the palaeo-Sierra Nevada provided remarkably uniform smectite-dominated fine-grained sediment input to the forearc basin during the deposition of the Upper Cretaceous and Paleogene Moreno Fm and the mid-to-late Eocene Kreyenhagen Fm. Deposition of the Moreno Fm was concurrent with volcanic activity in the magmatic arc, and the alteration of rhyolitic or dacitic volcaniclastics is the likely primary source of smectite.
The absence of tuff means that there is no direct evidence of volcaniclastic deposition in the marine forearc basin. Volcaniclastics are likely to have been incorporated into terrestrial sedimentary systems, possibly pedogenic, and later reworked into marine environments. Smectite remains prevalent in the Kreyenhagen Fm despite the absence of concurrent Sierran magmatism. Weathering of granitic or dioritic-granodioritic source terrane is inferred during the extended period of sub-tropical climate in the late Paleocene and early Eocene. Local intervals of kaolinite enrichment record erosion of more deeply weathered terrane. Chemical data (high CIA) confirm significant leaching of the source terrane. Diatomaceous opaline silica is the main variable constituent in the fine-grained sediment budget. Opal CT is locally common in the Moreno Fm and is the prevailing constituent of the Marca Mbr. In the Upper Kreyenhagen, opal CT is common and opal A is preserved in the youngest parts of the section. Clinoptilolite is consistently present in small quantities in the Lower Kreyenhagen, but the common general relationship between the co-occurrence of clinoptilolite, opal CT, and smectite is not sustained; opal CT is typically absent where clinoptilolite occurs. Otherwise, the occurrence of clinoptilolite is entirely consistent with strata of this age globally and is indicative of open oceanic conditions.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. Some data availability is restricted by confidentiality agreements with research sponsors.

Conflicts of Interest: There are no conflicts of interest.
Synthesis and Physical Properties of Biodegradable Nanocomposites Fabricated Using Acrylic Acid-Grafted Poly(butylene carbonate-co-terephthalate) and Organically-Modified Layered Zinc Phenylphosphonate

A set of novel biocompatible aliphatic-aromatic nanocomposites, comprising acrylic acid-grafted poly(butylene carbonate-co-terephthalate) (g-PBCT) and organically-modified layered zinc phenylphosphonate (m-PPZn), were successfully synthesized via polycondensation and transesterification. A primary covalent linkage was produced between the biocompatible polymer and the inorganic reinforcement. Fourier transform infrared spectroscopy and 13C-nuclear magnetic resonance spectra demonstrated the successful grafting of acrylic acid onto PBCT (g-PBCT). Both wide-angle X-ray diffraction data and X-ray photoelectron spectroscopy analysis showed that the g-PBCT polymer matrix was intercalated into the interlayer spacing of the m-PPZn and interacted chemically with it. The addition of m-PPZn to the g-PBCT matrix significantly improved the storage modulus. A slight increase in thermal stability was observed in all the g-PBCT/m-PPZn composites. Both improvements are attributed to the presence of covalent bonds between g-PBCT and m-PPZn.

Introduction

Biodegradable polymers, such as poly(lactic acid) (PLA) and poly(1,4-butanediol succinate) (PBS), have been commercially utilized to alleviate the existing problem of plastic waste accumulation. However, the commercial applications of these aliphatic biodegradable polymers are limited owing to their relatively poor physical properties and high costs [1]. In order to improve the physical properties and reduce the cost of aliphatic biodegradable polyesters, Witt et al. reported copolymerizing aliphatic biodegradable polyesters with aromatic polyesters, which have excellent mechanical properties [2]. The resulting aliphatic-aromatic poly(butylene adipate-co-terephthalate) (PBAT) copolyester remains biodegradable at aromatic unit contents of up to 60 mol%. Whereas purely aromatic polyesters are extremely resistant to microbial attack, PBAT can be produced at low cost [3,4]. Compared with aliphatic polyesters, aliphatic polycarbonates are more feasible candidates for biomedical and packaging applications owing to the absence of acidic compounds during their in vivo degradation and photodegradation [5]. Jung et al. conducted condensation copolymerization of dimethyl carbonate (DMC), dimethyl terephthalate (DMT), and 1,4-butanediol (BD) via a two-step process using sodium alkoxide as a catalyst with varying [DMT]/[DMC] feed ratios to form poly(butylene carbonate-co-terephthalate) (PBCT) [6]. The prepared PBCTs, consisting of 40-50 mol% aromatic terephthalate units, demonstrated thermal properties comparable to commercial compostable aliphatic polyesters such as PLA, PBS, and poly(butylene succinate-co-adipate) (PBSA). Melting temperatures of the PBCTs were in the range of 95-146 °C and could be tuned by the terephthalate content, with a fast crystallization rate. Lee et al. and Park et al. reported investigations of the enzymatic degradation behavior of PBCT [7,8]. Both showed that the synthesized PBCTs are biocompatible and biodegradable polymers.
Recently, two-dimensional layered zinc phenylphosphonate (PPZn), a synthetic organic/inorganic hybrid material with a layered structure analogous to montmorillonite (MMT), has attracted considerable attention owing to its potential applications in adsorption, catalysis, and polymer composites [9,10]. Alkylamines can be effectively intercalated into the interlayer spacing of PPZn. The interlayer spacing of organically-modified PPZn can be extensively expanded, and the extent of this expansion is a function of the chain length of the alkylamines. Organically-modified PPZn with a large interlayer distance can act as a reinforcement for polymer nanocomposites, significantly enhancing the crystallization rates and physical properties of the polymers [11,12]. In our previous study, two biocompatible long-chain primary alkylamines, dodecylamine and octadecylamine, were applied to produce organically-modified PPZn (o-PPZn) through an anion exchange method [13]. A series of new biodegradable poly(butylene carbonate-co-terephthalate)/o-PPZn nanocomposites containing various weight ratios of o-PPZn were first reported and successfully synthesized, and the physical properties and enzymatic degradation of the nanocomposites were investigated. In that system, the interaction between the polymer and the reinforcing material is a noncovalent bond. Generally, covalent linkages between a polymer and a reinforcing material are more useful and stable than noncovalent bonds. Another approach, chemical grafting, has been applied to enhance the physical properties of polymers. Wu et al. reported the effects of replacing PCL with acrylic acid-grafted PCL (PCL-g-AA) on the structure and properties of a PCL/chitosan blend. The presence of PCL-g-AA in the blend facilitates its compatibility with chitosan; consequently, the properties of the blend improve significantly owing to the formation of ester and imide groups, which enhances the dispersion and homogeneity of chitosan in the matrix [14]. Wu et al. also investigated the synthesis of nanocomposites combining acrylic acid-grafted poly(butylene succinate-co-terephthalate) (g-PBST) or poly(butylene adipate-co-terephthalate) (PBAT) with organically-modified zinc phenylphosphonate [15,16]. Their results showed that the mechanical properties and photodegradation behavior of the biodegradable copolymers were improved by the added organically-modified zinc phenylphosphonate. In this study, a set of novel biodegradable composites, comprising acrylic acid-grafted poly(butylene carbonate-co-terephthalate) (g-PBCT) and organically-modified layered PPZn, were successfully synthesized via polycondensation and transesterification. Organo-modifiers with at least two functional groups are necessary to enable the formation of covalent bonds between PPZn and acrylic acid-grafted PBCT. The biocompatible and nontoxic 1,12-diaminododecane, with two amino groups, was used to manufacture the organically-modified PPZn (m-PPZn) via a co-precipitation method. To the best of our knowledge, no study of g-PBCT/m-PPZn nanocomposites with covalent linkages between g-PBCT and m-PPZn has been reported; this work is therefore novel. The mechanical and thermal properties of the g-PBCT/m-PPZn nanocomposites were systematically studied.
Fabrication of g-PBCT/m-PPZn Nanocomposites

Three different molar ratios of PBCT were synthesized via polycondensation and transesterification, as previously investigated [5-8,17-19]. The feed molar ratios of [DMC] to [DMT] were 90:10, 50:50, and 30:70, with corresponding amounts of BD; the resulting products are hereinafter designated PBCT-90, PBCT-50, and PBCT-30, respectively. In brief, BD, DMC, DMT, and NaOH as a catalyst were added to a four-necked flask under N2 gas, which was heated at 120 °C for 1 h. The pressure was reduced to 570 mmHg and the temperature was increased to 190 °C for 5 min; the pressure was then further reduced to 380 mmHg, and condensation was conducted for 5 min at 190 °C. The pressure was subsequently reduced to 190 mmHg, and the condensation reaction was conducted for another 2 h. Finally, polycondensation was conducted for 8 h at 210 °C under full evacuation at 0.3 mmHg. The as-fabricated PBCT was dissolved in dichloromethane and then precipitated from cold methanol for purification. The chemical modification of PBCT and PPZn and the preparation of the resultant composites are presented in Fig. 1. The synthesized PBCT was dissolved in chloroform, and a mixture of AIBN and AA was added to the solution at 60 °C for 24 h to allow the grafting reaction to proceed (the product is hereafter designated g-PBCT). The PPZn and the 1,12-diaminododecane-modified PPZn (m-PPZn) were synthesized using approaches reported previously [7,14]. Various amounts of g-PBCT, m-PPZn, and EDC as a catalyst for the chemical coupling between the biodegradable polymer and the reinforcing material were separately dissolved in dichloromethane and then stirred for 3 days. The fabricated g-PBCT/m-PPZn nanocomposites were dried in vacuum. The nanocomposites are identified as x wt% g-PBCT/m-PPZn, where x wt% is the weight percent of m-PPZn.

Methods

Wide-angle X-ray diffraction (WAXD) measurements were performed using an X-ray diffractometer (Bruker D8) equipped with a Ni-filtered Cu Kα radiation source. The diffraction patterns were obtained in the range of 2θ = 1.5°-30° at a scanning rate of 1°/min. Fourier transform infrared (FTIR) experiments were performed in the range of 400 to 4000 cm−1 on a Perkin-Elmer Spectrum One spectrometer. Transmission electron microscopy (TEM) was performed using a Hitachi HF-2000. The TEM samples, encapsulated in epoxy, were prepared using a Reichert Ultracut ultramicrotome. The crystalline melting temperature (Tm) of the g-PBCT/m-PPZn nanocomposites was measured using a Perkin-Elmer Pyris Diamond DSC. All specimens were heated at a rate of 10 °C/min under a nitrogen environment to designed temperatures (Td), about 50 °C above the Tm of g-PBCT, and held for 5 min to eliminate residual crystals. Subsequently, they were cooled to −50 °C at a rate of 10 °C/min. Finally, the samples were reheated to Td at a rate of 10 °C/min, and the Tm of the g-PBCT and g-PBCT/m-PPZn nanocomposites was obtained. The thermal behavior of the specimens was measured using a Perkin Elmer TG/DTA 6300 thermoanalyzer from room temperature to 800 °C at a heating rate of 10 °C/min under an air environment. 1H-nuclear magnetic resonance (NMR) and 13C-NMR spectra were measured using an Agilent Technologies DD2 600 MHz NMR spectrometer with CDCl3 as solvent and internal standard.
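The DSC protocol described above can be summarized as a simple temperature program. A sketch in which the (mode, target °C, rate °C/min, hold min) tuple format and the function name are our own conventions, not part of the paper:

```python
def dsc_program(tm_g_pbct):
    """Encode the DSC run described in the Methods.
    tm_g_pbct: melting temperature of the g-PBCT sample in deg C;
    the designed temperature Td is ~50 deg C above Tm, per the text."""
    td = tm_g_pbct + 50
    return [
        ("heat", td,  10, 5),   # erase residual crystals, hold 5 min
        ("cool", -50, 10, 0),   # crystallize on cooling at 10 deg C/min
        ("heat", td,  10, 0),   # second heating: Tm is read from this scan
    ]

# Example for g-PBCT-50 (Tm = 159.2 deg C, from Table 2):
for step in dsc_program(159.2):
    print(step)
```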
X-ray photoelectron spectroscopy (XPS) analysis was performed on a PHI 5000 VersaProbe X-ray photoelectron spectrometer with Mg Kα incident radiation and the takeoff angle fixed at 45°. Gel permeation chromatography (GPC; Waters 717 Plus autosampler, Waters Instruments, Rochester, NY, USA) was used to determine the weight-average molecular weight (Mw), number-average molecular weight (Mn), and polydispersity (PDI = Mw/Mn) of the resulting polymers and composite materials. Polystyrene standards with narrow molecular-weight distributions were used for calibration. Storage modulus (E′) measurements were carried out on a Perkin Elmer dynamic mechanical analyzer (DMA) in single-cantilever bending mode and compression mode from −80 to 120 °C at a 3 °C/min heating rate and a constant frequency of 1 Hz. The g-PBCT samples for the enzymatic degradation test were placed in 24-well plates containing 1 ml/mg lipase from Pseudomonas sp. The degraded samples were removed after 12 days of incubation, washed with distilled water, and vacuum dried. The extent of degradation was estimated by means of the equation: weight loss (%) = 100 × (W0 − Wt)/W0, where W0 is the original weight of a sample and Wt is the weight of the sample after the 12-day degradation period.

Synthesis and Structure of the Various g-PBCT/m-PPZn Nanocomposites

The compositions of the PBCT copolyesters were verified via 1H-NMR spectroscopy (Figure 2). Figure 3 displays the FTIR spectra of PBCT-50 and g-PBCT-50. The absorption peaks of PBCT-50 and g-PBCT-50 around 1102 and 1233 cm−1 are assigned to the stretching of the -COC- bonds in the ester group [20,21]. The characteristic absorption peaks at 1027 and 1729 cm−1 are attributed to the stretching vibration of the O-C-C bonds in the polymer backbone and the -C=O bonds of the carbonyl group, respectively [22]. An additional peak at about 1716 cm−1, corresponding to the O-C=O bond, was observed in the modified polymer, which reveals the existence of free acid in the modified PBCT. This result shows that the acrylic acid group was successfully grafted onto PBCT. Similar results have been reported previously [15,23]. Analysis of the 13C-NMR spectra provides additional support for the successful grafting of AA, as presented in Fig. 4. The 13C-NMR spectrum of g-PBCT-50 contains an additional small peak at δ = 173.3 ppm relative to that of ungrafted PBCT-50. This peak is attributed to the O-C=O bond of the AA, which also confirms the grafting of AA onto PBCT [22,23]. The WAXD profiles of the different g-PBCT copolyester compositions are presented in Fig. 5. Five strong diffraction peaks at 2θ = 16.1°, 17.4°, 20.7°, 23.3°, and 25.2° were obtained for the g-PBCT-30 and g-PBCT-50 specimens, which correspond to the crystalline form of poly(butylene terephthalate) (PBT) [6,21]. These results demonstrate that the crystal structures of g-PBCT-30 and g-PBCT-50 are dominated by crystalline PBT. As shown in the same figure, the diffraction peaks of the g-PBCT-90 copolymer at 2θ = 21.2° and 21.7° are comparable with those of crystalline poly(butylene carbonate) (PBC) [6]. This finding indicates that the structure of the fabricated g-PBCT-90 copolymer transformed from the crystal structure of PBT to that of PBC. As presented in Table 2, the melting temperatures of g-PBCT-30, g-PBCT-50, and g-PBCT-90 determined by DSC were 177.6, 159.2, and 41.4 °C, respectively. For the enzymatic degradation test, the lipase from Pseudomonas sp.
was utilized to study the effect of copolymerization on the enzymatic degradation behavior of the g-PBCT copolymers. The weight loss of g-PBCT-90 after 12 days of degradation was 100%, whereas the weight losses for g-PBCT-50 and g-PBCT-30 were 4.2% and 1.8%, respectively. These results are consistent with previous work [8], which showed that the degradation rate decreases when the BT unit content of the copolymer exceeds 50%. Figure 6 exhibits the XPS data of the g-PBCT-50 copolyester and the g-PBCT/m-PPZn nanocomposite. XPS is an effective tool for demonstrating the formation of amide linkages in the g-PBCT-50/m-PPZn nanocomposites. It is evident from Fig. 6 that an extra nitrogen peak at a binding energy of 400 eV was observed for the g-PBCT/m-PPZn nanocomposite [15]. This result indicates a structural change arising from the formation of amide linkages between g-PBCT and m-PPZn. Figure 7a shows the WAXD curves of the g-PBCT-50/m-PPZn nanocomposites; for comparison, the X-ray diffraction data of m-PPZn are also shown. A weak diffraction peak at 2θ = 5.9°, attributable to the stacking layers of m-PPZn, was clearly observed for the specimens of high m-PPZn content. These findings indicate that an intercalated conformation was obtained for the g-PBCT-50/m-PPZn nanocomposites. Similar findings were also observed for the g-PBCT-30/m-PPZn and g-PBCT-90/m-PPZn nanocomposites. Furthermore, the morphology of the 5 wt% g-PBCT-50/m-PPZn nanocomposite was examined using TEM. Figure 7b presents the TEM image of 5 wt% loading of m-PPZn in the g-PBCT-50 copolymer matrix. The image shows that the stacking layers of the m-PPZn are intercalated into the g-PBCT-50 copolymer. Related observations were also made for the g-PBCT-30/m-PPZn and g-PBCT-90/m-PPZn nanocomposites. Consequently, both the WAXD and the TEM results support the intercalated structure.

Physical Properties of the Various g-PBCT/m-PPZn Nanocomposites

TGA analysis was performed to investigate the thermal behavior of the various g-PBCT/m-PPZn nanocomposites. Figure 8 presents the TGA curves of the g-PBCT-50/m-PPZn nanocomposites; similar findings were also observed for the g-PBCT-30/m-PPZn and g-PBCT-90/m-PPZn nanocomposites. The slight increases in the initial degradation temperature and the temperature of maximum degradation rate illustrated in these patterns are recorded in Table 2. As presented in this table, the temperature of maximum degradation rate for g-PBCT-30 and g-PBCT-50 is higher than that of g-PBCT-90. These observations reveal that the thermal stability of g-PBCT-90 is relatively low compared with the other synthesized copolyesters, analogous to previously reported results for PBCT copolymers without the grafting reaction [5]. Nevertheless, the temperature of maximum degradation rate of the g-PBCT/m-PPZn nanocomposites is slightly higher than that of the corresponding g-PBCT copolymers. Similar results have been observed in g-poly(butylene succinate-co-terephthalate) (g-PBST)/m-PPZn and g-poly(butylene adipate-co-terephthalate) (g-PBAT)/m-PPZn nanocomposites [15,21]. By contrast, the thermal stabilities of biodegradable polymer/inorganic filler nanocomposites without covalent bonds between the polymer and the filler are relatively low compared with the pure biodegradable polymers [24,25]. These findings are attributed to the presence of m-PPZn in the g-PBCT matrix, which forms covalent bonds with g-PBCT and thus increases the thermal stability.
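For completeness, the enzymatic weight-loss percentages reported above follow directly from the expression quoted in the Methods. A minimal helper; the example masses are illustrative, not measured values:

```python
def weight_loss_pct(w0, wt):
    """Weight loss (%) = 100 * (W0 - Wt) / W0, per the Methods section."""
    return 100.0 * (w0 - wt) / w0

# Illustrative sample masses in mg (hypothetical):
print(weight_loss_pct(25.0, 0.0))    # complete degradation -> 100.0
print(weight_loss_pct(25.0, 23.95))  # partial degradation  -> 4.2
```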
The change of the storage modulus (E′) in bending mode with temperature for the g-PBCT-50/m-PPZn nanocomposites over the range −70 to 120 °C is presented in Fig. 9. The E′ of g-PBCT-50 at −70 °C is around 1570 MPa and decreases as the temperature increases. This indicates that the molecular motion of g-PBCT in the glassy state is insufficient for a molecular transition. When the temperature exceeds the glass-transition temperature, the thermal energy becomes comparable to the potential energy barriers of the molecular motions. The E′ of the g-PBCT-50/m-PPZn nanocomposites at −70 °C increases with the m-PPZn content: about 1710, 1930, and 2080 MPa for 1, 3, and 5 wt% m-PPZn in the g-PBCT-50 polymer matrix, respectively. The improvement of E′ may be ascribed to the addition of the stiff, inorganic m-PPZn, which forms covalent linkages with the g-PBCT and induces a reinforcement effect, thus enhancing the rigidity of the g-PBCT polymer matrix. Similar results were found for the g-PBCT-30/m-PPZn and g-PBCT-90/m-PPZn nanocomposites. Detailed E′ values in bending mode for all the nanocomposites are presented in Table 2. Such enhancements have also been reported for g-PBST/m-PPZn and g-PBAT/m-PPZn nanocomposites [15,21]. The change of the storage modulus (E′) in compression mode with temperature for the g-PBCT-50/m-PPZn nanocomposites over the range −70 to 120 °C is presented in Fig. 10. These results show a tendency similar to the E′ obtained in bending mode. Detailed E′ values in compression mode for all the nanocomposites are also presented in Table 2.

Conclusions

Novel biocompatible g-PBCT/m-PPZn nanocomposites were manufactured via polycondensation and transesterification. FTIR and 13C-NMR spectra confirm the successful grafting of AA onto PBCT. Experimental results of WAXD and XPS revealed that an intercalated conformation was achieved for the g-PBCT/m-PPZn nanocomposites. The addition of m-PPZn improved both the storage modulus and the thermal stability, which is attributed to the covalent bonding between g-PBCT and m-PPZn.

Conflict of interest: The authors declare no conflict of interest.
Automated SEM Image Analysis of the Sphere Diameter, Sphere-Sphere Separation, and Opening Size Distributions of Nanosphere Lithography Masks

Colloidal nanosphere monolayers—used as a lithography mask for site-controlled material deposition or removal—offer the possibility of cost-effective patterning of large surface areas. In the present study, an automated analysis of scanning electron microscopy (SEM) images is described, which enables the recognition of the individual nanospheres in densely packed monolayers in order to perform a statistical quantification of the sphere size, mask opening size, and sphere-sphere separation distributions. Search algorithms based on Fourier transformation, cross-correlation, multiple-angle intensity profiling, and sphere edge point detection allow for a sphere detection efficiency of at least 99.8%, even in the case of considerable sphere size variations. While the sphere positions and diameters are determined by fitting circles to the sphere edge points, the openings between sphere triples are detected by intensity thresholding. For the analyzed polystyrene sphere monolayers with sphere sizes between 220 and 600 nm and a diameter spread of around 3%, coefficients of variation of 6.8-8.1% are found for the opening size. By correlating the mentioned size distributions, it is shown that, in this case, the dominant contribution to the opening size variation stems from nanometer-scale positional variations of the spheres.

Introduction

Nanosphere lithography (NSL) provides a cost-effective method to fabricate periodic nanopatterns on large surface areas (Hulteen & Van Duyne, 1995; Boneberg et al., 1997; Haginoya et al., 1997; Burmeister et al., 1999). Regular arrays have great potential for applications, for example, in optoelectronics (Sim et al., 2011; Zhang et al., 2013), electrochemical sensors (Purwidyantri et al., 2016), optical fiber tip nanoprobes (Pisco et al., 2017), and as metamaterials (Gwinner et al., 2009). Recently, it has been demonstrated that NSL masks can be fabricated in roll-to-roll processes, opening the path to industrial-scale applications (Chen et al., 2020). Often, a narrow size distribution of the nano-objects is desired in order to realize devices with well-defined emission or absorption properties (Haynes & Van Duyne, 2001; Qian et al., 2008). Therefore, it is important to determine the factors governing the size distribution of the nanoscale openings in the sphere monolayers or double layers that serve as lithographic masks. In the case of close-packed monolayer masks, the mask openings, also referred to as interstices, are defined by sphere triples. Up to now, the mask opening size distribution has been investigated only indirectly, by analyzing atomic force microscopy or scanning electron microscopy (SEM) images of nanoparticle arrays fabricated by physical vapor deposition through sphere layer masks (Hulteen & Van Duyne, 1995; Hulteen et al., 1999; Li & Zinke-Allmang, 2002; Riedl & Lindner, 2014). Li & Zinke-Allmang found that the size distribution of Ge particles deposited at the interstices of a polystyrene sphere monolayer is broader than that of the spheres, and ascribed the difference to packing imperfections of the spheres (Li & Zinke-Allmang, 2002). Hulteen et al. studied the size distributions of Ag nanoparticles obtained at the openings of polystyrene sphere mono- and double layers (Hulteen et al., 1999).
The authors concluded that the standard deviation of the interstice size roughly equals that of the sphere diameter. However, if the sphere size variation were the only cause of the interstice size distribution, the coefficient of variation (CV) of the interstice size would be given by the CV of the average sphere diameters of the groups of three spheres (i.e., sphere triples) defining an interstice. In other words, the standard deviation of the interstice size would be significantly smaller than that of the sphere diameter. The reasons for this are two-fold. First, since the average diameter of the spheres in a triple determines the interstice size to a first approximation, sphere diameter differences partially level out; that is, the standard deviation of the average diameters of the spheres in a triple is smaller than that of the sphere diameters by a factor of √3. Second, the sphere diameter D is linearly related to the interstice equivalent diameter D_eq,is, that is, the diameter of a circle with the same area as the interstice (see Supplementary Section 1.1):

D_eq,is = c · D, (1)

with a proportionality factor c < 1. Thus, the experimentally observed larger CV of the interstice size as compared to the CV of the average sphere diameters of the triples indicates that there must exist further sources of size broadening. To the best of our knowledge, a direct analysis of the mask opening size distribution and of its geometric origins, such as the sphere diameter distribution, sphere position variations, and interstice corner rounding/filling due to surface energy minimization or deposition of solutes from the sphere suspension, has not yet been made. To analyze the size of the mask openings together with that of the spheres forming them, the individual spheres and triples of them have to be identified in the image. In order to detect nearly spherical objects corresponding to circles in the image, several methods are reported in the literature: circle Hough transformation and its variants (Duda & Hart, 1975; Scaramuzza et al., 2005), the random sample consensus algorithm (Götzinger, 2015), correlation-based template matching (Ceccarelli et al., 2001), marker-based watershed segmentation (Gostick, 2017), and region-based convolutional neural networks (Girshick et al., 2014). While these techniques have enabled remarkable progress in the detection of spherical objects, they also exhibit shortcomings. In the case of high-resolution SEM images of sphere monolayers, the sphere objects are partly interconnected in a nearly regular array containing defects. Due to the concomitant image intensity variations, in addition to noise, this can complicate the complete capture of the sphere edges, and thus lower the accuracy of the analysis, as well as give rise to false negative and false positive detection events. In our contribution, we develop an automated analysis procedure based on intensity profiling and cross-correlation, which is capable of quantifying the sphere diameters, the sizes of sphere triple interstices, and the sphere positions. The method is applied to widely used monodisperse polystyrene sphere monolayer masks with sphere diameters between 220 and 600 nm. In particular, the correlation between the evaluated quantities is assessed, and the relative importance of the factors leading to the observed interstice size distribution widths is determined.
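The proportionality factor in equation (1) can be made explicit for the ideal case of three equal, touching spheres: the interstice is the central triangle spanned by the sphere centers minus three 60° circular sectors. The following derivation is our own sketch of this ideal-geometry limit, consistent with the stated factor being < 1 (the paper's supplement is not reproduced here):

```latex
A_{\mathrm{is}}
  = \frac{\sqrt{3}}{4}D^{2}
  - 3\cdot\frac{1}{6}\,\pi\!\left(\frac{D}{2}\right)^{2}
  = \left(\frac{\sqrt{3}}{4}-\frac{\pi}{8}\right)D^{2},
\qquad
D_{\mathrm{eq,is}}
  = \sqrt{\frac{4A_{\mathrm{is}}}{\pi}}
  = D\,\sqrt{\frac{\sqrt{3}}{\pi}-\frac{1}{2}}
  \approx 0.227\,D .
```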
This knowledge will help identify the materials and conditions favorable for obtaining ultra-narrow mask opening, and thus nanostructure, size distributions.

Materials and Methods

The experimental procedure consists of a cleaning and hydrophilization treatment of Si wafer pieces, followed by the deposition of polystyrene sphere monolayers and then imaging in a scanning electron microscope. Rectangular pieces (4 cm × 2 cm) were cut from a 4-inch n-doped Si (001) wafer. After cleaning the surface by rinsing with deionized H2O and isopropyl alcohol, the wafer pieces were hydrophilized by an O2/Ar plasma treatment (2 sccm O2, 8 sccm Ar, pressure 10 Pa, RF power 50 W, duration 3 min) in a Plasmalab 80plus machine (Oxford Instruments). The hydrophilicity was checked by means of optical measurements of the contact angle of deionized H2O droplets placed on the wafer surface using a Drop Shape Analyzer DSA25E (Krüss). Next, monolayers of monodisperse polystyrene spheres with three different sphere sizes, 220, 370, and 600 nm, were deposited from aqueous suspensions (10 wt% solids, Thermo Fisher Scientific Inc.) onto the hydrophilized Si substrates by means of the doctor blade technique (Kumnorkaew et al., 2008; Riedl & Lindner, 2014). A drop of the colloidal suspension is pipetted onto the substrate surface, brought into contact with a blade, and then moved at a constant velocity across the surface (Fig. 1). At the three-phase contact line between the substrate, the suspension, and the gas atmosphere, two-dimensional close-packed sphere layers form due to the action of attractive capillary forces between the spheres and the convective stream of suspension toward the three-phase line (Dimitrov et al., 1994; Kumnorkaew et al., 2008). In order to obtain sphere monolayers, a suitable blade velocity was chosen, which depends on sphere size, temperature, and atmospheric humidity. The resulting sphere layers were imaged in a field-emission SEM electron beam lithography system (Raith Pioneer) at 5 kV using an in-lens detector.

Image Analysis Method

In this section, the individual image processing and evaluation steps are described. All steps are implemented in a script developed for the Gatan DigitalMicrograph software (GMS 2, 2014). The script is available in the DigitalMicrograph Script Database hosted by FELMI, Graz University of Technology (FELMI, 2021). As input data, unprocessed SEM images of sphere monolayers are used, which should have (i) sufficient spatial resolution and pixel density in order to clearly resolve the sphere outlines, (ii) a sufficient signal-to-noise ratio, and (iii) sufficient contrast between the spheres and their interstices. Recommended values are listed in Table 1. For statistical analyses, the image area should cover several hundreds of spheres. The typical image size is 2,000 to 3,000 pixels in each dimension. An example of such an SEM image of a particularly defective sphere monolayer is given in Figure 2. Numerous line defects in the sphere arrangement and various mask defects involving overly small spheres are visible. Line defects appear brighter in the image since the spheres bordering the line charge up electrically, leading to locally enhanced secondary electron emission. This example is chosen to demonstrate the robustness of the image analysis method developed in the following.

Filtering and Average Correction

At the beginning of the image analysis, two preparatory steps are performed.
First, in order to minimize noise, the raw image is smooth-filtered by convolution in real space using a filter kernel: each pixel is replaced by a weighted sum of its surrounding pixels, the array of weights forming the kernel K. Here, the kernel

K =
1 2 1
2 4 2
1 2 1    (2)

implemented in the "smooth" menu command of the DigitalMicrograph software is used. In this kernel, the central pixel has fourfold weight, the top, bottom, left, and right neighbors have double weight, and the corner pixels have single weight. Second, a local average-corrected image I_corr(x, y) is computed from the smooth-filtered image I_sm(x, y) in order to reduce large-scale intensity variations due to differences in the secondary electron emission rate between the perfect crystalline areas and the zones containing defects:

I_corr(x, y) = I_sm(x, y) − I_avg(x, y) + Ī, (3)

where Ī denotes the mean intensity of the entire image and I_avg(x, y) the local average intensity image, which is obtained by assigning to each pixel (x, y) the average intensity of a square region of interest (ROI) of size b (with b an even integer and b ≈ 40% of the average sphere diameter) centered at (x, y). The loop variables x′, y′ are confined inside the image boundaries. Figure 3 depicts the average-corrected image of the SEM image in Figure 2. Taking the local average-corrected image as input, the sphere detection is achieved in three steps. In step 1, seed spheres are detected (20-30% of all spheres), followed by the detection of the large majority of spheres in step 2 and a search optimized for spheres of deviating size in step 3. All three steps include fitting circles to the sphere circumferences.

Seed Sphere Detection

Seed sphere detection starts with the preselection of possible sphere positions by defining a grid of square ROIs (ROI size: 1.3 times the sphere diameter D) on the image. Each ROI is then cross-correlated with an equally sized reference image of a fully six-fold coordinated sphere, which is taken from the image at a position specified by the user. If the distance between the detected position of the correlation maximum and the ROI center is below a threshold value of D/3 (determined from test runs), an ROI of the same size centered at the correlation maximum is examined by means of an intensity profiling module. This module extracts intensity profiles along lines inclined by various angles, centered at the correlation maximum position (Figs. 4a, 4b). Typically, 18 profiles are extracted using equidistant angular steps. In order to minimize the intensity scatter, the profiles have a width of a few pixels (not shown in Fig. 4a). Next, intensity steps with sufficient height and not too large a width are searched for in these profiles in order to determine the edges of the sphere. If these intensity steps (i) either have positive sign and are situated in the first profile half, or have negative sign and occur in the second profile half, and if (ii) the distance between two such steps of opposite sign lies within the interval

[(1 − k1)D, (1 + k2)D], (5)

with k1 = k2 = 0.2, where D is an approximate estimate of the average sphere diameter (input by the user), then the step positions are recognized as points on the sphere circumference, that is, sphere edge points (Fig. 4c). The restriction on the separation of two steps of opposite sign helps to minimize the detection of false positions. If more than a user-defined number of edge points (typically 16 for 18 profiles) are found, a circle is fitted to them by means of an iterative procedure (Fig. 4d).
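A compact illustration of these two preparatory steps; the additive form of the correction follows our reading of equation (3), and scipy routines stand in for the DigitalMicrograph built-ins:

```python
import numpy as np
from scipy import ndimage

# 3x3 smoothing kernel from equation (2), normalized so that the
# mean image intensity is preserved.
K = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0

def average_corrected(image, b):
    """Smooth-filter the image and apply the local average correction:
    subtract the mean over a b x b box and add back the global mean.
    b should be ~40% of the average sphere diameter, in pixels."""
    smoothed = ndimage.convolve(image.astype(float), K, mode="nearest")
    local_avg = ndimage.uniform_filter(smoothed, size=b, mode="nearest")
    return smoothed - local_avg + smoothed.mean()
```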
In the case of a sufficiently high fit quality (measured as the sum of squared distances between circle and edge points, normalized to the number of points), the fitted position is accepted as a sphere center position. Figure 5 depicts the SEM image with all 133 detected seed spheres marked.

Sphere Detection Step 2

In contrast to the raster search for the seed spheres, the detection algorithms used in step 2 to find the large majority of spheres rely on combined star and 360° searches. The star search works most effectively in the case of perfect monocrystalline sphere arrangements. The name of this search reflects the six different search directions along the close-packed directions in the 2D lattice, starting out from the seed sphere positions. For the determination of these directions, the angular positions of the maxima in the Fourier transforms of ROIs are evaluated, where the ROIs are centered at the positions of the seed spheres (Figs. 6a, 6b). In this way, new potential sphere positions in the first nearest-neighbor shell are identified at a distance from the seed sphere that equals the estimated average sphere diameter. In addition, a further 54 potential positions are identified in the second, third, and fourth nearest-neighbor shells by assuming the spheres to sit on regular positions of the close-packed 2D lattice (Fig. 6c). Analogously to the seed sphere detection procedure, the positions are refined by means of cross-correlation and then subjected to the intensity profiling (see Section "Seed Sphere Detection"), sphere edge point detection, and fitting routines described above. As the star search assumes a perfect lattice, it often does not recognize spheres located at defects occurring in the sphere monolayers, namely small gaps between neighboring spheres, line defects, and small zones with reduced sphere coordination. Therefore, a 360° search has been developed, which is suitable for disordered sphere arrangements with 1-6 nearest neighbors. Here, the cross-correlation between the reference sphere image and image ROIs centered at a fixed distance (the estimated sphere diameter) from the seed sphere center is evaluated as a function of the azimuth angle (Figs. 7a, 7b). The positions with maximum correlation are then selected for the subsequent intensity profiling, edge point, and fitting operations, as described above. In order to include positions further away from the seed sphere, the 360° search has been programmed as an endless routine; that is, it is repeated taking the newly detected position as the starting position. The process continues as long as further not-yet-detected nearest-neighbor spheres are recognized inside the image boundaries. As illustrated in Figure 8 (the average-corrected image of Figure 3 with the 519 fitted spheres marked that were detected in the course of step 1 (seed spheres) and step 2 (combined star and 360° searches)), the combined searches are capable of detecting around 99% of the spheres (seed spheres included, image border regions excluded).

Detection of Spheres of Deviating Size

As the sphere search steps 1 and 2 are designed for average-sized spheres, spheres of deviating size, in the present case mostly significantly smaller spheres, are often overlooked. To include these spheres with diameters significantly below the average, a third search step is conducted. In this step, potential sphere positions outside the fitted circles of the previously detected spheres are identified.
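The circle-fitting step can be illustrated with a closed-form algebraic (Kåsa) least-squares fit to the edge points; the paper's own routine is iterative, so this is a stand-in, not the authors' implementation:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to edge points.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c for (cx, cy, c); the radius
    follows from r^2 = c + cx^2 + cy^2. Also returns the normalized sum
    of squared radial residuals used as the fit-quality measure."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    dist = np.hypot(x - cx, y - cy)
    quality = np.sum((dist - r) ** 2) / len(x)
    return cx, cy, r, quality
```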
These positions are examined by using intensity profiling and a modified sphere edge point detection routine, which is opened up for a wider range of sphere diameters by choosing appropriate values of k1 and k2 in equation (5). The fitting is performed as described in steps 1 and 2. Overall, at least 99.8% of all spheres are detected (image border regions excluded), and accurate fit results for their center positions and diameters are obtained (Fig. 9). Based on these data, the sphere diameter histogram and statistical key figures such as average, standard deviation, and CV are evaluated.

Identification of Sphere Triples

Based on the preceding analysis, another module of the program identifies the sphere triples as a preparatory step for the interstice size evaluation. By sphere triples we mean sets of three spheres in which each sphere forms a contact with the other two. The triple identification is performed by means of intensity profile analyses. For each pair of spheres with separation

s = M1M2 − (R1 + R2) (6)

(M1M2 is the distance between the sphere centers M1 and M2; R1, R2 are the fitted sphere radii) below a threshold value, the M1M2 center-to-center profile and the profile perpendicular to M1M2 through the radius-weighted center point S12 of M1M2 are extracted (Fig. 10). The coordinates of point S12 are given by

S12 = M1 + [R1/(R1 + R2)] (M2 − M1). (7)

On the one hand, in the case of two spheres forming a contact with each other, the perpendicular profile displays a significant maximum in the vicinity of S12, which separates two minima corresponding to the two adjacent interstices. On the other hand, if the two spheres do not touch each other, the M1M2 profile shows a pronounced minimum in the vicinity of S12. Therefore, the two spheres are regarded as having a common contact if the average intensity difference between the maximum and the two adjacent minima in the perpendicular profile exceeds a certain fraction f_CP of the depth of the intensity minimum in the M1M2 profile. The depth of this minimum is taken as the average intensity difference between the minimum and the adjacent two maxima. Both intensity differences are marked as horizontal lines in Figure 10. In agreement with the visual perception of contact points, f_CP = 0.6 has been chosen. Once the sphere pairs have been identified, the sphere triples are found by searching for sets composed of three sphere pairs that have three common spheres.

Interstice Quantification

After the sphere triples have been identified, the corresponding interstice areas are quantified. For each triple i, an intensity threshold I_thres,i is applied to the triangular region defined by the three sphere-sphere contact points S12, S23, and S13 (Fig. 11a):

I_thres,i = 〈I_is center,i〉 + f (〈I_triple spheres,i〉 − 〈I_is center,i〉), (8)

where 〈I_is center,i〉 denotes the average intensity of a small region at the interstice center, and 〈I_triple spheres,i〉 denotes the average intensity in the centers of the three spheres. Fig. 11. (a) Subarea of the SEM image of Figure 3 showing a sphere triple and its central interstice. Marked are the sphere centers, the sphere-sphere contact points, as well as the regions in the spheres and in the interstice that are used for defining the intensity threshold for interstice quantification, equation (8). (b) Same SEM image subarea as (a) with thresholded interstice pixels marked in black. The interstice center is approximated by the center of gravity of the triangle ΔS12S23S13. A threshold factor of f = 0.5 has been chosen so that the thresholded interstice region matches the visual perception of the interstice in the SEM image (Figs. 11a, 11b).
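Putting the thresholding and the area-to-diameter conversion together; the threshold expression encodes our reading of equation (8), and the mask-based interface is an assumed convention, not the script's actual API:

```python
import numpy as np

def interstice_equivalent_diameter(image, triangle_mask,
                                   sphere_center_mask, is_center_mask,
                                   px_size, f=0.5):
    """Threshold the triangular region between the three contact points
    [our reading of equation (8)] and convert the dark-pixel count to an
    equivalent diameter. The mask arguments are boolean arrays; px_size
    is the pixel edge length in nm."""
    i_is = image[is_center_mask].mean()        # small region at interstice center
    i_sph = image[sphere_center_mask].mean()   # centers of the three spheres
    i_thres = i_is + f * (i_sph - i_is)        # threshold factor f = 0.5
    n_px = np.count_nonzero((image < i_thres) & triangle_mask)
    area = n_px * px_size**2
    return 2.0 * np.sqrt(area / np.pi)         # diameter of equal-area circle
```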
Figure 12 displays the entire SEM image with all detected interstices marked in black.

Results and Discussion

Figures 13-15 display the sphere size, interstice size, and sphere-sphere separation histograms for nominal sphere diameters of 600, 370, and 220 nm, respectively. The quantified measure of the interstice size is the equivalent diameter D_eq,is, that is, the diameter of a circle having the same area as the interstice. For each sphere size, SEM images covering more than 1,400 spheres were evaluated. It is found that the variations of the evaluated quantities with the macroscopic position in the sphere layer are small compared to their distribution widths; that is, the monolayers are homogeneous across the sample surface. The sphere diameter distribution is characterized by an approximately symmetric maximum, with the average 〈D〉 being very close to the nominal sphere diameter specified by the supplier (Fig. 13). Remarkably, diameters significantly below the average occur more often than above-average diameters, owing to the sphere synthesis. Although all three sphere sizes have a CV value of ≤3% specified by the supplier, the results show that the sphere diameter distribution width increases from 2.2 to 3.2% when decreasing the sphere size from 600 to 220 nm. For comparison, a manual evaluation of arbitrarily chosen subsets was performed by tracing the outlines of around 50 spheres (part of sphere triples) and counting the number of enclosed pixels. This yields CV values of 1.7 and 2.7%, while those of the program for the same subsets amount to 1.7 and 2.9%, respectively. In contrast to the sphere diameter distributions, the interstice size histograms show a clear asymmetry with a pronounced tail on the above-average side (Figs. 14a, 14c, 14e). The interstice equivalent diameter CV values range between 6.8 and 8.1%, exhibiting no clear dependence on sphere size. A manual evaluation of arbitrarily chosen subsets comprising 50 interstices of the 600 and 220 nm sphere layers, respectively, gives CV values of 7.5 and 6.4%, as compared to 7.0 and 6.9% when applying the automatic evaluation to the same subsets. Evidently, the interstice size CVs are larger by a factor of 2.5-3.4 than those of the sphere diameters. Figures 14b, 14d, and 14f depict the histograms of the sizes that the interstices would have if they were defined by ideal spheres having (i) contact points instead of contact areas or necks and (ii) the same diameter distributions as the detected spheres (for the calculation, see Supplementary Section 1.2). These ideal interstice size histograms display narrow, approximately symmetric distributions with CV values equal to around half the CV of the sphere size distributions, because of the levelling of sphere diameter variations in the triples. Moreover, the average actual interstice sizes amount to only 89-96% of the average ideal sizes, which can be explained by the presence of extended sphere-sphere contact zones leading to a shortening of the interstice corners. Fig. 16. Equivalent diameter histograms of interstice subsets for the 600 nm polystyrene sphere monolayer: (a) interstices with the large and small average diameter of the spheres in a triple, (b) interstices with the large and small average separation s of the spheres in a triple. "Large" and "small" refer to the 10% percentiles. Each histogram includes 185 interstices.
To the right of each histogram, SEM image ROIs with example interstices illustrating the effects of sphere size and sphere-sphere separation are displayed. The encircled numbers 1 or 3 correspond to small sphere diameter or s, and 2 or 4 to large sphere diameter or s, marked in the histograms, respectively. The discrepancy between the actual and the ideal interstice size distribution widths can be ascribed to small positional variations of the spheres in conjunction with the formation of contact necks between adjacent spheres as well as sphere deformations. As shown in Figures 15a and 15c, the separation s [equation (6)] between the circles fitted to the outlines of touching spheres ranges from around −6 to +6% of the sphere diameter. In the case of negative s, the spheres are flattened along the contact zone, whereas the spheres are connected via small necks or bridges in the case of positive s (Fig. 15d). These morphologies may arise as a consequence of collisions between the viscoelastic polymer spheres as well as condensation of styrene units from the liquid phase during the convective sphere layer self-assembly. Moreover, attractive van der Waals forces act between polystyrene chains on opposing sphere surfaces, leading to a reduction of surface area. In order to analyze the influence of the sphere diameters on the interstice sizes, the interstice equivalent diameter distributions are plotted for the 10% largest and the 10% smallest average sphere diameters, respectively (Figs. 16a, 17a, 18a). The average diameter here refers to the spheres in a triple forming a mask opening. The histograms for large and small spheres largely overlap, where the average size of interstices 〈D_eq,is〉 formed by large spheres exceeds that of the small spheres by 4.3% for 600 nm spheres, 3.3% for 370 nm spheres, and 8.9% for 220 nm spheres. A more pronounced difference is found for the interstice equivalent diameters formed by spheres having the 10% largest and smallest sphere-sphere separations s (Figs. 16b, 17b, 18b): for all sphere sizes, the average interstice diameter in the case of large s is 19% larger than that for small s. Fig. 17. Equivalent diameter histograms of interstice subsets for the 370 nm polystyrene sphere monolayer: (a) interstices with the large and small average diameter of the spheres in a triple, (b) interstices with the large and small average separation s of the spheres in a triple. "Large" and "small" refer to the 10% percentiles. Each histogram includes 212 interstices. To the right of each histogram, SEM image ROIs with example interstices illustrating the effects of sphere size and sphere-sphere separation are displayed. The encircled numbers 1 or 3 correspond to small sphere diameter or s, and 2 or 4 to large sphere diameter or s, marked in the histograms, respectively. Fig. 18. Equivalent diameter histograms of interstice subsets for the 220 nm polystyrene sphere monolayer: (a) interstices with the large and small average diameter of the spheres in a triple, (b) interstices with the large and small average separation s of the spheres in a triple. "Large" and "small" refer to the 10% percentiles. Each histogram includes 178 interstices. To the right of each histogram, SEM image ROIs with example interstices illustrating the effects of sphere size and sphere-sphere separation are displayed. The encircled numbers 1 or 3 correspond to small sphere diameter or s, and 2 or 4 to large sphere diameter or s, marked in the histograms, respectively.
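The 10%-percentile subsets used in Figs. 16-18 amount to a simple selection on a key quantity; a sketch in which the function name and array interface are our own:

```python
import numpy as np

def percentile_subsets(d_eq, key, q=10.0):
    """Split interstices into the subsets formed by the smallest and
    largest q% of a key quantity (e.g., the triple-average sphere
    diameter or the separation s). Returns the two D_eq subsets."""
    lo, hi = np.percentile(key, [q, 100.0 - q])
    return d_eq[key <= lo], d_eq[key >= hi]

# Example: compare mean interstice sizes for small vs. large separations
# small, large = percentile_subsets(d_eq, s_triple_avg)
# print(small.mean(), large.mean())
```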
Therefore, the positional variations of the spheres in a triple constitute a major contribution to the interstice size distribution width for the analyzed sphere layers. However, this might not necessarily hold for other sphere suspensions and/or sphere layer deposition conditions, since the extent of positional variation is expected to depend on the interplay between the self-assembly dynamics and the properties of the solid sphere and substrate materials as well as of the liquid phase. An investigation of these factors is beyond the scope of the present study and is the subject of further research.

Conclusion

In summary, a program for the automatic analysis of sphere diameters, interstice sizes, and sphere-sphere separations from SEM images of sphere monolayer masks has been devised. The program also enables a correlated analysis of these quantities, that is, the evaluation of interstice size distributions as a function of sphere size and sphere-sphere separation. This analysis has been applied to polystyrene sphere monolayers with sphere diameters in the range between 220 and 600 nm. For all sphere sizes, interstice equivalent diameter CV values of 6.8-8.1% have been determined, which are significantly larger than the diameter CV values of the spheres in a triple of 2.2-3.2%. The largest contribution to the observed spread of interstice sizes stems from the spread of the sphere-sphere contact areas, which is closely related to positional variations of the spheres in a triple. In conjunction with the convective sphere layer self-assembly, the polymeric nature of the sphere material and the related viscoelastic behavior, as well as residual monomers in the colloidal liquid condensing at the sphere contacts, lead to the formation of sphere-sphere contact zones and necks of variable dimension. In order to minimize the spread of interstice sizes, non-viscous inorganic spheres with low deformability, such as SiO2, could be used.
Direct observation and simultaneous use of linear and quadratic electro-optical effects

We report on the direct observation and simultaneous use of the linear and quadratic electro-optical effects and propose a method by which higher-order susceptibilities of electro-optical materials can be determined. The evaluation is based on the separation of the second- and third-order susceptibilities, and the experimental technique uses a slot waveguide ring resonator fabricated in integrated photonic circuit technology, covered with a guest-host polymer system consisting of the azobenzene dye Disperse Red 1 in a poly(methyl methacrylate) matrix as the active electro-optical material. The contribution of both effects to the electro-optical response under the influence of static and time-varying electrical fields is investigated. We show that the quadratic electro-optical effect has a significant influence on the overall electro-optical response even in materials with acentric molecular orientation. Our findings have important implications for developing electro-optical devices based on polymer-filled slot waveguides and open the way to advanced photonic circuits.

Introduction

Silicon-organic hybrid (SOH) photonics has received massive research interest because it combines the advantages of well-established silicon-on-insulator (SOI) technology with those of highly efficient electro-optical (EO) polymers [1]. In recent years, slot waveguides have pushed the hybrid integration of EO polymers into mature SOI technology, since they allow a large overlap of the optical and electrical fields inside the polymer cladding. Such EO polymers typically rely on either the linear EO effect [2] or the quadratic EO effect [3]. SOH-based modulators have shown a lower signal chirp and therefore better signal linearity compared to modulators employing the plasma dispersion effect [4]. Moreover, SOH modulators are extremely energy-efficient in terms of energy-per-bit consumption [5] and can overcome limitations of current modulators based on the plasma dispersion effect with regard to speed, noise, and power consumption [6]. This relies on the high electric field values attainable with small electrode separations, which can induce a more effective refractive index change than the injection/removal of charges. In fact, the electric field induces a delocalization of electrons along the conjugated polymer chain and, thus, no carrier transport is necessary, as is required in the case of the plasma dispersion effect. The ability to use organic materials in silicon-on-insulator (SOI) technology has created significant interest in various fields of science, including but not limited to high-speed modulators [7], tunable optical filters [8], high-precision metrology [9], and frequency combs [10]. However, the hybrid integration of nonlinear optical materials in an SOI technology platform is still a focus of current research. Both effects, the linear and the quadratic EO effect, are central to this progress and need to be carefully analyzed.
The research to date has tended to focus on the linear EO effect rather than the quadratic EO effect [11, 12]. However, both effects are present simultaneously in acentric materials, and current research has demonstrated that the quadratic EO effect in slot waveguide structures becomes significant [3, 13, 14]. Therefore, current studies may tend to overestimate the linear EO effect because they neglect the quadratic EO contribution. Both EO effects attract great interest for programmable and reconfigurable photonic circuits, since they provide different EO responses and, hence, different device concepts can be envisioned. Therefore, the precise analysis of both EO contributions is crucial for device design. During the last few decades, several works were published addressing the simultaneous determination of the linear and quadratic EO effects in bulk material, e.g. using the modified Teng-Man technique [15]. These efforts provided a better knowledge of both effects and supported the development of new organic EO materials. In practice, however, it is difficult to obtain reliable information from this type of measurement, as the quadratic EO response is close to the detection limit. Moreover, since high voltages are necessary to induce an electric field adequate to observe the quadratic EO effect, voids and air gaps can easily lead to a breakdown; therefore, the samples are usually baked at the glass transition temperature under mechanical pressure [16], which makes the sample preparation relatively elaborate. This paper describes an experimental on-chip measurement technique that determines both effects simultaneously while avoiding sophisticated sample preparation and high poling voltages. An experimental approach is presented by which higher-order susceptibilities of EO materials can be determined. The evaluation is based on the separation of the second- and third-order susceptibilities, and the experimental technique uses an SOH slot waveguide phase shifter implemented in a silicon ring resonator. The scope of the present paper is to establish the experimental conditions under which the present technique constitutes a good investigation tool. In addition, the application of both effects to intensity modulation is presented, and the conditions to achieve them separately are described.
Theory
Linear and quadratic EO effects are simultaneously present in materials with an acentric molecular order. In isotropic materials, the linear EO effect vanishes and only the quadratic EO effect is observable. A poling procedure is required to obtain an acentric molecular order from an isotropic material. There are different poling procedures, such as optical poling [17], corona poling [18], and contact poling [19]. However, since the slot waveguide requires an optical transverse electric (TE) field mode to avoid optical losses [20], the nonlinear optical molecules need to be aligned parallel to it in order to achieve a large linear EO effect [21]. Therefore, it is reasonable to employ contact poling, in which the slot waveguide forms the electrodes, allowing for a strong electric field parallel to the electric field component of the optical TE mode [14]. A comprehensive analysis of the poling procedure and the implications of the narrow gap of the slot waveguide for the poling efficiency can be found in [11]. In contact poling, the EO material lies directly between the electrodes and is polarized by the applied field.
As a consequence, the molecules are oriented along the electric field, breaking the centrosymmetry of the isotropic EO material. To obtain a measurable effect, the EO material is heated to near its glass transition temperature. However, this oriented state is not stable at elevated temperature due to thermal agitation, and therefore the material is cooled down to ambient temperature while the electric field is still applied. This yields a temporarily frozen-in acentric molecular order.
The EO effect is usually considered in terms of the change of the optical indicatrix [22]
$$\Delta\left(\frac{1}{n^2}\right)_{ij} = \sum_k r_{ijk} E_k + \sum_{k,l} R_{ijkl} E_k E_l, \qquad (1)$$
where r_ijk and R_ijkl are the Pockels coefficient and the Kerr coefficient, which describe the linear and the quadratic EO effect, respectively. For small changes and anisotropic materials, the refractive index change Δn_i along the main directions of the optical indicatrix can be approximated by [23]
$$\Delta n_i \approx -\frac{n_i^3}{2}\left(r_{ii} E + R_{ii} E^2\right), \qquad (2)$$
which can also be expressed using the nonlinear optical susceptibility tensor,
$$\Delta n_i = \frac{1}{n_i}\left(\chi^{(2)} E + \frac{3}{2}\,\chi^{(3)} E^2\right). \qquad (3)$$
Comparing the coefficients in equations (2) and (3) yields
$$r_{ii} = -\frac{2\chi^{(2)}}{n_i^4}, \qquad R_{ii} = -\frac{3\chi^{(3)}}{n_i^4}. \qquad (4)$$
Since the electric field inside the slot is E = U/s, the quadratic term in equation (3) scales as U²/s²; the quadratic EO effect is therefore inversely proportional to the square of the electrode distance, which is in fact the slot width s. For an EO material in a vertical slot waveguide, the dominant tensor components are χ^(2)_333(−ω; ω, 0) and χ^(3)_3333(−ω; ω, 0, 0). In this case, the optical and electrical fields are oriented along the 3rd axis. Therefore, this case will be considered throughout this work, and the notation χ^(2) and χ^(3) will be used for the sake of simplicity. Additionally, we assume an approximately isotropic refractive index of the EO polymer in the absence of an electric field, which will be referred to as n_eop. One key statement of the present work is that the quadratic EO component χ^(3) of linear conjugated polymers in slot waveguides is not negligible. As a consequence, it has to be taken into account in the calculation of the linear EO component χ^(2), since the quadratic EO effect is present for any molecular order. The proportion
$$\Psi = \frac{\Delta n_Q}{\Delta n_L + \Delta n_Q} \qquad (5)$$
gives an estimate of the contribution of the quadratic EO effect to the overall refractive index change. Here, the overall refractive index change in the denominator is the sum of the linear EO effect (Δn_L = χ^(2)E/n_eop) and the quadratic EO effect (Δn_Q = 3χ^(3)E²/(2n_eop)). As an example, we have determined Ψ using values from [24] and plotted the results in figure 1. It is apparent from this figure that the quadratic EO effect is negligible in bulk material but may contribute 10% to the overall refractive index change in slot waveguides. Therefore, a technique to separate the quadratic EO effect from the linear EO effect directly from the measured on-chip performance is beneficial. In the following, a method to infer χ^(2) and χ^(3) from a static EO response is provided.
Device fabrication and sample preparation
To validate our theoretical assumption, we employ a silicon ring resonator covered with an EO polymer. Here, a partially slotted ring resonator (PSRR) is used, consisting of a slot waveguide phase shifter introduced in the straight part of a racetrack ring. A schematic of the PSRR is shown in figure 2(a). The slot waveguide is connected through doped silicon strip loads and tungsten vias to ground-signal-ground (GSG) metal electrodes, as shown in figure 2(b). Vertical silicon slot waveguides have the advantage of a strong overlap of the optical and electrical fields [25]. Details on the ring resonator geometry and fabrication can be found in our previous work [8, 26-28].
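To make the size of this effect concrete, the following sketch evaluates the proportion Ψ of equation (5) for a bulk-like electrode gap and for a 150 nm slot. It is a minimal illustration, not the evaluation of [24]: the values of chi2 and chi3 are hypothetical placeholders of the order of magnitude reported later in this work, and n_eop cancels in the ratio.

```python
import numpy as np

# Illustrative (hypothetical) material values; order of magnitude only
chi2 = 6e-12   # second-order susceptibility, m/V
chi3 = 3e-19   # third-order susceptibility, m^2/V^2

def psi(E):
    """Fraction of the overall index change due to the quadratic EO effect,
    Psi = dn_Q / (dn_L + dn_Q), following equation (5)."""
    dn_L = chi2 * E            # factor 1/n_eop cancels in the ratio
    dn_Q = 1.5 * chi3 * E**2
    return dn_Q / (dn_L + dn_Q)

U = 5.0                               # applied voltage, V
for s in (10e-6, 150e-9):             # bulk-like gap vs. slot width
    E = U / s                         # field E = U/s inside the gap
    print(f"s = {s*1e9:8.0f} nm -> E = {E:.2e} V/m, Psi = {100*psi(E):.1f} %")
```

The ratio grows with E, which is why a narrow slot magnifies the quadratic contribution even for modest drive voltages.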
In this work, we employ a PSRR having a slot waveguide length of L_slot = 12 µm and a slot width of s = 150 nm. The PSRR is fabricated on a 200 nm SOI wafer in a photonic integrated circuit (PIC) technology at IHP [29]. Here, we have chosen a slot width of 150 nm to be compatible with the PIC technology. After a cleaning procedure with acetone and 2-propanol, the EO polymer is spun directly onto each chip at 80 rps. We employed the azobenzene dye Disperse Red 1 (DR1) from Sigma Aldrich doped at 10 wt% into poly(methyl methacrylate) (PMMA) [30]. The chemical structures are shown in figure 2(c). This guest-host polymer system is dissolved in 1,1,2,2-tetrachloroethane and filtered through a PTFE membrane filter with 0.2 µm pores. To remove the solvent after deposition on the chip, we dried the samples at 70 °C in an oven. In addition, we prepared bulk polymer films in order to obtain the material properties of PMMA/DR1. Spectroscopic ellipsometry (Sentech SE 850) was carried out to measure the refractive index, supported by spectroscopic photometry data (PerkinElmer Lambda 1050). The dispersion is plotted in figure 2(d). Here, the DR1/PMMA thin film was modelled with the Bruggeman effective medium approximation. A Sellmeier model was employed for the non-absorbing PMMA matrix and the Tauc-Lorentz oscillator model for DR1. From figure 2(d), it can be inferred that within the studied wavelength range (optical C-band) the EO effects are off-resonant, since this range lies far away from the first absorption resonance.
Measurement of higher-order susceptibilities using a slot waveguide ring resonator
The refractive index change Δn of the polymer translates directly into a phase shift in the slot waveguide given by [3]
$$\Delta\varphi = \Delta n \, k_0 \, L_{\mathrm{slot}} \, \Gamma_{\mathrm{slot}}, \qquad (6)$$
where L_slot is the slot waveguide length, k_0 = 2π/λ is the wavenumber, and Γ_slot is the field confinement factor in the slot region. The latter is taken into account to avoid an overestimation of the phase shift Δφ [31]. In principle, it can be defined as the ratio of the time-averaged energy flow through the slot region to the time-averaged energy flow through the total domain [32]. It can be calculated by [33]
$$\Gamma_{\mathrm{slot}} = \frac{\iint_{\mathrm{slot}} \mathrm{Re}\{\mathbf{E}\times\mathbf{H}^{*}\}\cdot\mathbf{e}_z \,\mathrm{d}A}{\iint_{\mathrm{total}} \mathrm{Re}\{\mathbf{E}\times\mathbf{H}^{*}\}\cdot\mathbf{e}_z \,\mathrm{d}A}. \qquad (7)$$
Here, E and H are the electric and magnetic field vectors, respectively, and e_z is the unit vector in the z direction. The slot region and the total domain are denoted as slot and total, respectively. The field confinement factor for the given waveguide geometry is Γ_slot ≈ 0.2 according to our simulation study [31, 34]. A more comprehensive description of the aforementioned measurement technique can be found in [3, 14]. Applying a DC voltage U to the slot waveguide leads to an electric field inside the slot given by E = U/s. This assumption is valid since the electric field is approximately homogeneous inside the slot. Taking the relation Δφ/2π = Δλ/λ into account, the applied voltage leads to a resonance wavelength shift given by
$$\Delta\lambda = \frac{\Gamma_{\mathrm{slot}} L_{\mathrm{slot}}}{n_{\mathrm{eop}}}\left(\chi^{(2)}\,\frac{U}{s} + \frac{3}{2}\,\chi^{(3)}\,\frac{U^2}{s^2}\right). \qquad (8)$$
We obtained the wavelength shift Δλ from the transmission spectra at different DC voltages. The experimental set-up is shown schematically in figure 3. Here, a tunable external cavity laser (Yenista TUNICS T100S-HP) is used in order to measure the wavelength-dependent behavior of the ring resonator. The polarization of the laser light was adjusted with a paddle-style fiber polarization rotator (Thorlabs FPC031) such that the highest transmission is achieved. The light is transmitted through a polarization-maintaining single mode fiber and then coupled into the silicon waveguides through a fiber grating coupler.
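As a rough numerical sketch of equation (8), the snippet below predicts the resonance shift for the device parameters quoted above (L_slot = 12 µm, s = 150 nm, Γ_slot ≈ 0.2). The refractive index n_eop and the susceptibility values are hypothetical placeholders, not the measured values of this work.

```python
# Device parameters from the text
L_slot = 12e-6      # slot waveguide length, m
s      = 150e-9     # slot width, m
Gamma  = 0.2        # field confinement factor in the slot
n_eop  = 1.6        # assumed polymer refractive index (hypothetical)

def resonance_shift(U, chi2, chi3):
    """Resonance wavelength shift (m) for a DC voltage U, equation (8)."""
    E = U / s                                      # field inside the slot
    dn = (chi2 * E + 1.5 * chi3 * E**2) / n_eop    # index change, equation (3)
    return dn * L_slot * Gamma                     # using d_phi/2pi = d_lambda/lambda

# Example: sweep the drive voltage with illustrative susceptibilities
for U in (1.0, 3.0, 5.0):
    dl = resonance_shift(U, chi2=6e-12, chi3=3e-19)
    print(f"U = {U:.0f} V -> d_lambda = {dl*1e12:.1f} pm")
```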
The transmitted light is then coupled by a second grating coupler into a polarization-maintaining single mode fiber and then to a photodiode (Thorlabs DET08CFC/M). All measurements were carried out using a temperature-controlled sample holder which was stabilized to 35 °C in order to avoid changes in the transmission due to temperature fluctuations. For active measurements, the GSG electrodes are connected to an electric power source (Keysight Sourcemeter 2400) through tungsten DC probes (Picoprobes A 40A-GSG-150-P). Depending on the experiment, the optical signal is measured with a digital sampling oscilloscope or a digital multimeter. We have performed two experiments. The first is carried out without the poling procedure to determine the quadratic EO effect. In the second experiment, we poled the EO polymer to obtain a non-centrosymmetric molecular orientation of the nonlinear optical dye, which leads to an increase of the linear EO effect. The poling procedure is shown in figure 4. First, the sample is heated from ambient temperature T_a (figure 4(a)) to the glass transition temperature T_g = 110 °C for 30 min (figure 4(b)). This is followed by applying a poling voltage of 7 V to align the dye molecules (figure 4(c)); the temperature is then rapidly reduced to ambient temperature T_a while the poling voltage is kept on (figure 4(d)). One major issue of organic EO materials is that they suffer from limited long-term stability due to relaxation processes (figures 4(e) and (f)) [35]. However, with the present technique we are able to perform the measurement directly after the poling procedure using the same set-up. In both experiments, we measured the resonance wavelength shift as a function of the applied voltage. The results are plotted in figure 5(a). The experimental data in figure 5(a) are then fitted using a polynomial regression model given by Δλ = C₁U + C₂U². Please note that the coefficient of determination is R² > 0.997 and the residual sum of squares is below 8.8716 · 10⁵ for both graphs, reflecting the high accuracy of the least-squares fit. A static electric field E = U_DC/s was taken into account, and equation (8) was compared with our regression model to yield the second- and third-order susceptibility coefficients, which are given by
$$\chi^{(2)} = \frac{n_{\mathrm{eop}}\, s}{\Gamma_{\mathrm{slot}} L_{\mathrm{slot}}}\, C_1, \qquad (9)$$
$$\chi^{(3)} = \frac{2\, n_{\mathrm{eop}}\, s^2}{3\, \Gamma_{\mathrm{slot}} L_{\mathrm{slot}}}\, C_2. \qquad (10)$$
From the data in figure 5, we inferred a second-order susceptibility of χ^(2) = 2.376 · 10⁻¹³ m V⁻¹ and a third-order susceptibility of χ^(3) = 2.925 · 10⁻¹⁹ m² V⁻² without the poling procedure. After the molecular alignment of the EO dyes through the poling procedure, we observed an increased second-order susceptibility of χ^(2) = 6.169 · 10⁻¹² m V⁻¹, while the third-order susceptibility remains approximately the same, χ^(3) = 2.846 · 10⁻¹⁹ m² V⁻², as shown in figure 5(b). Table 1 summarizes all EO values. Due to the non-optimized poling procedure, the poling efficiency is about 35% relative to the EO coefficient reported for bulk EO polymer films [35]. Polar chromophores like DR1 tend to undergo interfacial interactions with the electrodes and thus do not contribute to the poling-induced EO activity [11]. Surface/material interfacial effects are magnified in nanoscale slot waveguides, which is also suggested by recent simulation studies [36, 37].
Figure 3 (caption): Experimental set-up: the polarization of an external cavity laser is controlled by a paddle-style fiber rotator and coupled to the photonic chip by means of fiber grating coupling. A photodiode translates the optical signal into an electrical one. For static electric field measurements, a digital multimeter is used, while for dynamic measurements a digital sampling oscilloscope is employed. The electric field inside the slot waveguide is induced by applying a DC or AC signal to the metal electrodes using either a function generator or a DC source.
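Extracting χ^(2) and χ^(3) from the measured shifts amounts to a two-parameter least-squares fit followed by equations (9) and (10). The sketch below illustrates this with synthetic data; the input shifts and the value of n_eop are hypothetical, not the measurements of figure 5 or table 1.

```python
import numpy as np

# Device parameters from the text; n_eop is an assumed value
L_slot, s, Gamma, n_eop = 12e-6, 150e-9, 0.2, 1.6

def susceptibilities_from_fit(U, d_lambda):
    """Fit d_lambda = C1*U + C2*U**2 (no offset) and apply equations (9)-(10)."""
    A = np.column_stack([U, U**2])                  # design matrix
    (C1, C2), *_ = np.linalg.lstsq(A, d_lambda, rcond=None)
    chi2 = C1 * n_eop * s / (Gamma * L_slot)                  # equation (9)
    chi3 = 2.0 * C2 * n_eop * s**2 / (3.0 * Gamma * L_slot)   # equation (10)
    return chi2, chi3

# Synthetic example data (hypothetical shifts in metres)
U = np.linspace(0.5, 5.0, 10)
d_lambda = 1.0e-10 * U + 2.0e-11 * U**2
chi2, chi3 = susceptibilities_from_fit(U, d_lambda)
print(f"chi2 = {chi2:.3e} m/V, chi3 = {chi3:.3e} m^2/V^2")
```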
In addition, the time window in which the experiment is conducted is of importance, since a decay of the linear EO effect is possible due to relaxation processes [35]. In our experiment, we observed the linear EO effect right after the poling procedure by simply switching from the high voltage necessary for poling to a low voltage for the experiments with static as well as time-varying electric fields. However, our findings clearly demonstrate the strong impact of the quadratic EO effect on the overall EO response. In particular, according to equation (5) we have evaluated a contribution of the quadratic EO effect to the overall refractive index change of about Ψ = 32% after the poling procedure was applied. This finding suggests that the quadratic EO effect is not negligible for narrow slot widths.
Intensity modulation using the linear and quadratic EO effect
In this section, we show the influence of both EO effects on intensity modulation. Inserting the electric field E = E_DC + E_m sin(ωt) into equation (3) yields an expression for the time-varying refractive index change
$$\Delta n(t) = \Delta n_0(\chi^{(2)}, \chi^{(3)}) + \Delta n_\omega(\chi^{(3)}) + \Delta n_\omega(\chi^{(2)}, \chi^{(3)}), \qquad (11)$$
comprising a static refractive index change Δn₀(χ^(2), χ^(3)), a time-varying refractive index change Δn_ω(χ^(3)) due to the quadratic EO effect, and a time-varying refractive index change Δn_ω(χ^(2), χ^(3)) due to the linear and quadratic EO effects, where
$$\Delta n_0(\chi^{(2)}, \chi^{(3)}) = \frac{\chi^{(2)}}{n_{\mathrm{eop}}}\, E_{\mathrm{DC}} + \frac{3}{2}\,\frac{\chi^{(3)}}{n_{\mathrm{eop}}}\, E_{\mathrm{DC}}^2, \qquad (12)$$
$$\Delta n_\omega(\chi^{(3)}) = \frac{3}{2}\,\frac{\chi^{(3)}}{n_{\mathrm{eop}}}\, E_m^2 \sin^2(\omega t), \qquad (13)$$
$$\Delta n_\omega(\chi^{(2)}, \chi^{(3)}) = \frac{\chi^{(2)} + 3\chi^{(3)} E_{\mathrm{DC}}}{n_{\mathrm{eop}}}\, E_m \sin(\omega t). \qquad (14)$$
Since the static electric field E_DC is fixed, equation (12) is a constant and has no influence on the modulated optical signal. In fact, this term induces an offset of the resonance peak and has to be taken into account to find the operation point, which typically lies in the linear regime of the resonance peak. Equation (13) is solely influenced by the quadratic EO effect and gives a sine-squared signal. Figure 6 shows a schematic overview of all possible scenarios and the experimentally obtained EO responses (oscilloscope traces) for each case. In contrast, equation (14) shows that the sine response is affected by both the linear and the quadratic EO effect. Note that the term 3χ^(3)E_DC in equation (14) is also known as the electric-field-induced second-order nonlinear optical effect [38]. To distinguish between the Δn_ω(χ^(3)) and Δn_ω(χ^(2), χ^(3)) contributions, we define two cases. In the first case, the EO response is characterized by a sine-squared signal if E_m ≫ E_DC, as shown in figure 6(a). In the second case, the EO response shows a sine signal if E_m ≪ E_DC (figure 6(c)). In this way, it is possible to observe a linear response by applying an offset voltage E_DC = U_DC/s in an isotropic material (centric order). However, an acentric molecular order can be obtained by applying a poling procedure. In this case, the observed EO response is characterized by a sine signal as well (figure 6(b)), since the second-order effect dominates the third-order one. Applying an offset voltage gives approximately the same signal, as shown in figure 6(d).
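The distinction between the sine-squared and sine regimes can be reproduced numerically from equations (11)-(14), or directly from equation (3). The sketch below is a minimal illustration with hypothetical susceptibility and field values; it is not a simulation of the measured oscilloscope traces in figure 6.

```python
import numpy as np

n_eop = 1.6          # assumed polymer refractive index (hypothetical)
chi2_poled = 6e-12   # illustrative second-order susceptibility, m/V
chi3 = 3e-19         # illustrative third-order susceptibility, m^2/V^2

def dn(t, E_dc, E_m, omega, chi2):
    """Total index change for E = E_dc + E_m*sin(omega*t), equation (3)."""
    E = E_dc + E_m * np.sin(omega * t)
    return (chi2 * E + 1.5 * chi3 * E**2) / n_eop

t = np.linspace(0.0, 2e-3, 1000)      # two periods at 1 kHz
omega = 2 * np.pi * 1e3
E_m = 1.0 / 150e-9                    # 1 V modulation across a 150 nm slot

# Case (a): isotropic (chi2 ~ 0), no offset -> sine-squared response at 2*omega
resp_a = dn(t, E_dc=0.0, E_m=E_m, omega=omega, chi2=0.0)
# Case (b): poled (acentric) material, no offset -> sine response
resp_b = dn(t, E_dc=0.0, E_m=E_m, omega=omega, chi2=chi2_poled)
# Case (c): isotropic with a large DC offset (E_m << E_dc) -> ~sine response
resp_c = dn(t, E_dc=10 * E_m, E_m=E_m, omega=omega, chi2=0.0)

for name, r in [("(a)", resp_a), ("(b)", resp_b), ("(c)", resp_c)]:
    print(name, f"peak-to-peak index swing: {r.max() - r.min():.2e}")
```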
An implication of our findings is that both the linear and the quadratic EO effect should be taken into account when extracting material properties from experimentally observed phase shifts. In the present work, we have determined the higher-order susceptibility tensor components χ^(2)_333(−ω; ω, 0) and χ^(3)_3333(−ω; ω, 0, 0). In the case where one wishes to use the material for devices based on second harmonic generation or on the optical Kerr effect, it is necessary to correct our results for dispersion in order to obtain the χ^(2)_333(−2ω; ω, ω) and χ^(3)_3333(−ω; ω, ω, −ω) tensor components [15].
Figure 6 (caption): Schematic representation and experimentally observed oscilloscope traces of the EO response: (a) the electric modulation field (blue) induces a sine-squared modulated optical field (green) if E_m ≫ E_DC and χ^(2) ≈ 0. (b) If the symmetry is broken and χ^(2) ≠ 0, the electric modulation field (blue) causes a sine-modulated optical field (green) and a small refractive index offset due to the nonlinear response curve. (c) If an offset voltage is applied and the modulation amplitude is small enough (E_m ≪ E_DC), the modulated optical response follows a sine function. (d) Applying an offset voltage to an acentric material leads to a larger refractive index offset but also to a larger EO response due to the nonlinear response curve.
Conclusions
We have presented a method by which higher-order nonlinear optical susceptibilities of EO polymers can be determined. A silicon slot waveguide ring resonator fabricated in a photonic integrated circuit technology was used to observe the EO response, and we applied our model to the nonlinear optical dye DR1 doped in PMMA. We have analysed the EO response before and after applying a poling procedure. It is revealed that the quadratic EO effect has a significant impact on the overall EO response even with an acentric molecular order, owing to the strong electric field inside the slot waveguide. As a consequence, our findings imply that both the linear and the quadratic EO effect should be taken into account when extracting material properties from experimentally observed phase shifts in nanoscale slot waveguides. With the presented method, it is possible to distinguish between the linear and the quadratic EO response, which is an important step for the design and validation of future reconfigurable and field-programmable photonic devices based on slot waveguides and for the evaluation of EO material properties.
5,383
2020-01-10T00:00:00.000
[ "Physics" ]